Wednesday, September 22, 2010

A Money Mechanics Theorem...

The Theorem of Catastrophic Dis-Equilibria__As aggregate onshore wealth is de-taxed, the credit productivity of aggregate offshore transaction wealth drastically increases unimpeded, while total aggregate onshore credit productivity is decimated. This dis-equilibrium follows from the necessary historical, computerized mechanics of the exchange rate, tax, and tax haven systems, and it will persist unless government intervention is enacted through new laws to re-balance these three systems' unnecessary, highly nefarious, and catastrophic money-and-law mechanics…

BLACK SWANS AND KNIGHT’S EPISTEMOLOGICAL UNCERTAINTY: ARE THESE CONCEPTS ALSO UNDERLYING BEHAVIORAL AND POST WALRASIAN THEORY? By Paul Davidson*

Abstract: This note argues that Taleb’s “black swan” argument regarding uncertainty is equivalent to Knight’s epistemological concept of uncertainty. Moreover, both Behavioral economists and Post Walrasians use an epistemological concept of uncertainty. This view differs significantly from Keynes’s idea that uncertainty is an ontological concept.

Key words: uncertainty, risk, black swans, Taleb, Knight, Keynes, Post Walrasians, Behavioral Theory
JEL Index Classification: D80, E12, G32
----------------------------
In his excellent analysis, Terzi [2010] recognizes that Taleb’s black swans are merely rare events (outliers whose probability is perhaps 1 in 100, or 1 in 1,000 or more). The rare appearance of these black swans is already preprogrammed into nature’s ergodic plan for the economy. These black swans exist and, when they ultimately appear, will be seen and experienced as unique (first-of-their-kind) events, although current history (going back to 1 AD) may be too short to have discovered any black swans as yet. In this brief note, I wish to show that Taleb’s black swan argument is merely a new variant of Frank Knight’s concept of uncertainty [Knight, 1921].
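To make the sampling point concrete, the short Python sketch below (not part of Davidson’s note; the 1-in-1,000 probability is an illustrative assumption) simulates a rare event in an ergodic process. In a short sample the event typically never appears, so an observer might conclude it does not exist, while a long enough sample reveals its true, preprogrammed frequency.

import random

# Illustrative sketch: a "black swan" modeled as a rare event with an
# assumed probability of 1 in 1,000 per period of history.
P_BLACK_SWAN = 0.001
random.seed(42)

def count_swans(periods):
    """Count how many rare events occur in a sample of the given length."""
    return sum(random.random() < P_BLACK_SWAN for _ in range(periods))

short_sample = count_swans(200)          # a "short history": very likely zero
long_sample = count_swans(1_000_000)     # long enough to reveal the true rate

print("Black swans in 200 periods:      ", short_sample)
print("Black swans in 1,000,000 periods:", long_sample)
print("Estimated long-run frequency:     %.4f" % (long_sample / 1_000_000))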

Frank Knight, an economist in the early 20th century, was one of the first to recognize the possibility of epistemological uncertainty for certain economic processes. Knight explicitly distinguished between quantifiable risks and uncertainties. Knight wrote that__

"the practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from the statistics of past experience), while in the case of uncertainty, this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique" (Knight, 1921, p. 233).

In other words, when uncertain events occurred in Knight’s model of reality, it was because past history had not turned up previous similar “black swan” events. In an ergodic universe, any single event will appear to be unique to the observer only if she does not have sufficient a priori or statistical knowledge of reality to properly classify this event with a group of similar conditional events.

Knight (1921, p. 198) explains that uncertainty involving "unique events" occurs only when agents possess a "partial knowledge" of the cosmos, or what today’s mainstream economists call “incomplete information”. Knight's reflection on the immutability of the economic cosmos is somewhat ambiguous. Knight appears to argue that, as a stylized fact, uncertainty is an epistemological factor in an ontologically immutable reality when he writes (Knight, 1921, p. 210) that the

"universe may not be knowable...[but] objective phenomenon [reality] ... is certainly knowable to a degree so far beyond our actual powers ... [and therefore] any limitation of knowledge due to lack of real consistency [i.e., ergodicity] in the cosmos may be ignored".

In other words, Knight suggests that any lack of knowledge about external reality that might be attributed to a lack of real consistency over time in the cosmos is insignificant and may be ignored when compared to humans’ cognitive failures to identify a predetermined external (ergodic) reality of “unique” events.

Knight (1921, p. 198) suggests, rather than dogmatically claims, that it "is conceivable that all changes might take place in accordance with known laws", i.e., that the future is determined by ergodic laws. Thus Knight left the theoretical door slightly ajar for his analysis to be based primarily on the concept of a predetermined, immutable cosmos. The primary difference between risk and uncertainty for Knight is that uncertainty exists only because of the failure of humans' "actual powers" to process the information "knowable" about the programmed economic cosmos.

Since probabilistic risks can be quantified by human computing power, Knight argued that the future is insurable against risky probabilistic occurrences. The cost of such insurance, or self-insurance, will be taken into account in all entrepreneurial marginal cost calculations (or by contingency contracts in a complete Arrow-Debreu general equilibrium system). This insurance process permits entrepreneurs to make profit-maximizing rational production and investment choices even in the short run when dealing with risky known processes.
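As a purely illustrative arithmetic sketch (the numbers below are hypothetical, not drawn from Knight or Davidson), a known probabilistic risk can be priced as an actuarially fair premium and folded into the entrepreneur’s marginal cost calculation:

# Hypothetical numbers for a quantifiable (insurable) risk.
prob_of_loss = 0.02          # assumed known probability of a loss per unit produced
loss_if_event = 500.0        # assumed size of the loss when it occurs
base_marginal_cost = 40.0    # assumed production cost per unit, before insurance

insurance_premium = prob_of_loss * loss_if_event        # fair premium = 10.0 per unit
risk_adjusted_marginal_cost = base_marginal_cost + insurance_premium

print("Actuarially fair premium per unit:", insurance_premium)
print("Risk-adjusted marginal cost:      ", risk_adjusted_marginal_cost)

The entrepreneur who uses the risk-adjusted figure in her profit-maximizing calculations is, in Knight’s terms, simply insuring (or self-insuring) against a risk whose distribution is known.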

The existence of what appears to be uncertain or "unique" events, on the other hand, arises because humans do not have sufficient cognitive powers to correctly group these uncertain outcomes by their common characteristics. Hence, for Knight, agents cannot capture the insurance costs of these "uncertain" events in their marginal cost computations. Isn’t that what Taleb has argued when he suggests that the highly mathematical “risk management” models developed by “quants” on Wall Street were not able to capture the insurance risk involved if a black swan in financial markets did occur?

If we accept Knight's position that we can ignore the possibility of a "lack of real consistency in the cosmos", then the objective probabilities associated with what Knight labels "uncertain" events are already programmed into the consistent cosmos. It is just that the short run does not provide a sample large enough for sufficient black swans to appear to calculate the probabilistic risk of encountering one. In the long run, those entrepreneurs whose price and marginal cost calculations include these insurance costs "as if" they knew the objective probabilities implicit in Knight's unchanging reality will make the efficient decision and will, in Knight's system, earn profits. These are the Darwinian entrepreneurial “agents who know how to build robustness” in the market system who are the heroes of Taleb’s (and Knight’s) view of the economy.
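A rough simulation may help illustrate this long-run claim; the parameters below are hypothetical assumptions, not figures from the paper. Over a long enough (ergodic) run, the realized average loss per period converges on the objective expected loss, so the entrepreneur who adds that amount to her price “as if” she knew the true probability covers the black-swan losses, while one who ignores it runs a shortfall of roughly the same amount each period.

import random

# Hypothetical parameters for the rare loss and its fair premium.
P_LOSS = 0.001               # assumed objective probability of the loss per period
LOSS = 5_000.0               # assumed size of the loss when it occurs
PREMIUM = P_LOSS * LOSS      # fair charge per period "as if" the probability were known
PERIODS = 1_000_000          # the "long run"

random.seed(1)
realized_losses = sum(LOSS for _ in range(PERIODS) if random.random() < P_LOSS)
avg_loss_per_period = realized_losses / PERIODS

print("Premium charged 'as if' the probability were known: %.2f" % PREMIUM)
print("Realized average loss per period over the long run: %.2f" % avg_loss_per_period)
print("Per-period shortfall if the premium is ignored:     %.2f" % avg_loss_per_period)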

In essence, Knight appears to be a precursor of what Colander (2006) calls the Post Walrasian theorists, or what others call Behavioral theorists, of today. These Post Walrasians or Behavioralists erect ad hoc models suggesting that agents may not always act with the economic rationality of classical theory’s decision makers, because decision makers often do not have the computational power to process sufficient information about the presumed ergodic future.

David Colander [2006, p. 2] notes that “Post Walrasians assume low-level information processing capabilities and a poor information set”. Nevertheless, underlying this Post Walrasian analytical approach is the belief that the “true structure” governing the economic future is a Walrasian economic system [see Mehrling, 2006, p. 78; Kirman, 2006, p. xx; Brock and Durlauf, 2006, p. 116]. Unfortunately, such theories have no unifying underlying general theory to explain why such “irrational” behavior exists. Behavioral theorists cannot explain why those who engage in classically non-rational behavior have not been made extinct by a Darwinian struggle with those real-world decision makers who take the time to act rationally or who, at least, make decisions consistent with those they would make “as if” they knew the underlying Walrasian system.

Had behavioral theorists, Post Walrasians, and Taleb adopted Keynes’s general theory as their basic framework, irrational behavior could be explained as sensible behavior if the economy is a non-ergodic system. Or, as Hicks (1977, p. vii) succinctly put it, "One must assume that the people in one's models do not know what is going to happen, and know that they do not know just what is going to happen." In conditions of true uncertainty, people often realize that they just do not, and cannot, possess a clue as to what rational behavior should be.