One of the reasons I’m still a bear on the economy is that the economists in the optimists’ camp are relying upon very bad economic theory. If that theory is telling them good times are ahead, that’s one of the best predictors of bad times you could have.
This isn’t because the optimists are bad economists, bad people, or any other permutation: most economists I know are good at what they do, and are very well intentioned too.
It’s just that they were taught a crock of nonsense at university, and they now build models based on a crock of nonsense that they erroneously believe to be accurate descriptions of the real world.
There are so many bits of nonsense in economic theory that it would take a book to detail them all, but common to many of them is the following dilemma:
Almost everything economists believe is possibly true at the level of an isolated individual, but almost certainly false at the level of an economy.
The most egregious example of this is one theory that even most economists are now willing to admit is false: the “Capital Asset Pricing Model”, which preached that the stock market prices shares accurately, that the amount of debt finance a company has doesn’t affect its value, and many other notions that have gone up in smoke during the GFC.
The CAPM is actually derived from a model of the behaviour of an isolated stock market investor. The investor has expectations about how all the shares in the market are going to perform in the future, and the ability to lend or borrow money at a “risk free” rate. She then combines a portfolio of shares that gives her the best risk-return tradeoff with this risk-free asset—borrowing money to lever her investment in the market if she’s a “risk-seeker”, and lending money if she’s conservative.
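The risk–return arithmetic of that single-investor story is simple enough to sketch. The numbers below are purely illustrative assumptions, not drawn from any real market:

```python
# Sketch of the CAPM investor's choice: mix a risk-free asset with the
# "market" portfolio of shares. All figures are illustrative assumptions.
rf = 0.04        # "risk-free" lending/borrowing rate (assumed)
mu_m = 0.10      # expected return of the market portfolio (assumed)
sigma_m = 0.20   # standard deviation of the market portfolio (assumed)

def portfolio(w):
    """w = fraction of wealth placed in the market portfolio.
    w > 1 means borrowing at rf to lever the investment (the "risk-seeker");
    w < 1 means lending part of one's wealth at rf (the conservative)."""
    expected_return = rf + w * (mu_m - rf)
    risk = w * sigma_m   # the risk-free asset contributes no variance
    return expected_return, risk

for w in (0.5, 1.0, 1.5):
    r, s = portfolio(w)
    print(f"w={w:.1f}: expected return {r:.3f}, risk {s:.2f}")
```

The point of the sketch is only that, for one isolated investor with given expectations, the trade-off is a straight line: more leverage buys more expected return at the price of proportionally more risk. The trouble starts, as the text goes on to show, when this is scaled up to the whole market.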
William Sharpe, the developer of the model, was then stuck with a dilemma: how to go from a model of a single, isolated investor, to one of the entire stock market? He did what so many neoclassical economists before him had done: he “assumed a miracle”. To quote Sharpe:
“In order to derive conditions for equilibrium in the capital market we invoke two assumptions:
First, we assume a common pure rate of interest, with all investors able to borrow or lend funds on equal terms.
Second, we assume homogeneity of investor expectations: investors are assumed to agree on the prospects of various investments—the expected values, standard deviations and correlation coefficients.
Needless to say, these are highly restrictive and undoubtedly unrealistic assumptions… (Sharpe 1964, pp. 433–434)”
Sharpe thus went from a feasible theory of a single investor to a ludicrous theory of the entire market, by assuming (a) that all investors are identical (except for their attitudes toward risk) and (b) that all investors can accurately predict the future.
Though he didn’t admit point (b) in this paper, it was made explicit by one-time believers, Eugene Fama and Ken French, in a later examination of the model’s empirical failure. They noted that the mad assumptions could be why it had failed:
“The first assumption is … investors agree on the joint distribution of asset returns … And this distribution is the true one—that is, it is the distribution from which the returns we use to test the model are drawn.” Fama and French (2004, p. 26)
So for four decades, economists applied a theory of the stock market that was based on the absurd assumption that every last stock market investor is a Nostradamus: all investors agree about the future and their expectations about the future are correct.
If only this were an isolated piece of nonsense. Unfortunately, it’s indicative of a failing that is endemic to conventional “neoclassical” economic theory. Neoclassical economists develop a model which starts from an isolated “Robinson Crusoe” individual; then, when they bring in “Man Friday”, they pretend that relations between individuals don’t alter the story in any significant way.
Unfortunately, they do. So the individual parables with which economists regale us, and which make sense on an individual scale, don’t apply at the aggregate level.
Macroeconomic theory, which is the real focus of interest now that the GFC has brought “The Great Moderation” to a close, is just as bad. For decades it has been dominated by what is known as the IS-LM model, which most economists believe was developed by Keynes.
It wasn’t. Its original author was John Hicks, a conservative opponent of Keynes’s at the time, and he developed the model as a means to interpret Keynes from a neoclassical point of view. The model emasculated what was original in Keynes’s General Theory, and this bowdlerised version of Keynes was then demolished by Friedman in the 1970s to usher in the Monetarist phase.
Monetarism has come and gone, but the IS-LM model still underpins most of the models used by Treasuries and Central Banks around the world.
Which is a curious thing, since one person who argued emphatically that this model should be abandoned was… John Hicks. As so often happens in economics, a “young turk” found wisdom in his old age, and realised that his model was untenable.
Like so many models in economics, the IS-LM model starts as a model with two intersecting lines. The axes of the diagram are income (on the horizontal) and the rate of interest (on the vertical).
The IS curve, which slopes downwards, purports to show all combinations of the level of output and the rate of interest that make supply equal demand in the goods market. The LM curve, which slopes upwards, shows all combinations of output and the rate of interest that make supply equal demand in the money market. The intersection of the two curves shows where both the goods and money markets are in equilibrium.
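With linear specifications for the two curves the intersection can be computed directly. The coefficients below are illustrative assumptions for the sketch, not estimates of any actual economy:

```python
# Toy linear IS-LM model: find the (Y, r) pair at which both the goods
# market (IS) and the money market (LM) clear. Coefficients are illustrative.
a, b = 100.0, 10.0   # IS curve:  r = (a - Y) / b   (downward sloping)
c, d = 1.0, 0.05     # LM curve:  r = c + d * Y     (upward sloping)

# Setting the two expressions for r equal and solving for Y:
#   (a - Y) / b = c + d * Y   =>   Y* = (a - b*c) / (1 + b*d)
Y_star = (a - b * c) / (1 + b * d)
r_star = c + d * Y_star
print(f"equilibrium output Y* = {Y_star:.1f}, interest rate r* = {r_star:.1f}")
```

Every (Y, r) combination other than this single intersection leaves at least one of the two markets in disequilibrium, which is exactly where the trouble discussed below begins.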
At university, economists are taught to consider what might happen when, for instance, the actual combination of the rate of interest and the level of output is such that there is excess supply in the goods market and excess demand in the money market, and so on. The macroeconomic models they build after leaving university are then explicitly based on the IS-LM framework, with the assumption that the economy will always tend towards the point where the two curves intersect.
One issue might be obvious to astute observers: how can you model the macroeconomy without an explicit model of the labour market? Hicks omitted this market on the basis of the neoclassical assumption that, in a three-market world, if two of the markets—money and goods—are in equilibrium, then the third also has to be in equilibrium. But this assumption—which I could criticise on its own grounds—can’t be applied when there is disequilibrium. If the economy is “on” its IS curve, so that the goods market is in equilibrium, but “off” its LM curve, so that the money market is out of equilibrium, then the third, “missing” market must also be in disequilibrium.
So the IS-LM model is only valid if the economy is in equilibrium—at which point, of course, there’s no role for policy: “if it ain’t in disequilibrium, don’t fix it” (economists also waffle on about “Keynesian” and “Classical” locations for the IS curve, but I’ll ignore that pseudo-debate here).
After repeated discussions with non-orthodox economists—especially Paul Davidson, the editor of the Journal of Post Keynesian Economics—John Hicks came to appreciate this point, and in 1979 explicitly rejected the model:
“I accordingly conclude that the only way in which IS-LM analysis usefully survives—as anything more than a classroom gadget, to be superseded, later on, by something better—is in application to a particular class of causal analysis, where the use of equilibrium methods… is not inappropriate… [but] When one turns to questions of policy, the use of equilibrium methods is still more suspect…” Hicks (1981, pp. 152–153)
So the father of IS-LM analysis justifiably disowned his child three decades ago—and yet the teaching of macroeconomics still centres on this model, and it forms the core of most neoclassical models of the macroeconomy.
It’s now being superseded by models that are supposed to have “good microeconomic foundations”—the so-called Dynamic Stochastic General Equilibrium (DSGE) models. The authors of these models do not know that the “good microeconomic foundations” on which they base their models have also been shown to be false!
Here the problem again relates to aggregation: the model of an isolated individual makes it easy to derive a demand curve for that individual. But when you try to derive a demand curve for a market, you have the dilemma that changing prices also changes incomes. Neoclassical economists showed that a market demand curve could wobble all over the place, even if the demand curves of every individual in it had the standard “downward sloping” shape.
This is stated emphatically and clearly in a “bible” of neoclassical economics, the Handbook of Mathematical Economics:
“market demand functions need not satisfy in any way the classical restrictions which characterize consumer demand functions…
The importance of the above results is clear: strong restrictions are needed in order to justify the hypothesis that a market demand function has the characteristics of a consumer demand function.
Only in special cases can an economy be expected to act as an ‘idealized consumer’.” Shafer and Sonnenschein (Handbook of Mathematical Economics, 1993, p. 672)
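The mechanism behind this result is easy to demonstrate with a toy two-consumer example (my own construction for illustration, not one from Shafer and Sonnenschein): each consumer’s demand slopes downward when her income is held fixed, but one consumer’s income comes from an endowment of the good itself, so her income rises with its price—and the summed market demand curve is no longer monotonic:

```python
# Two consumers with CES preferences over the priced good and a numeraire.
# Individual Marshallian demand at prices (p, 1) and income m is
#     x(p, m) = m * p**(-sigma) / (p**(1 - sigma) + 1)
# which falls in p for any FIXED income m. Parameters are illustrative.

def ces_demand(p, m, sigma):
    return m * p ** (-sigma) / (p ** (1 - sigma) + 1)

def market_demand(p):
    # Consumer A owns 10 units of the good, so her income is 10*p:
    # a price rise makes her richer, and changing prices change incomes.
    x_a = ces_demand(p, m=10 * p, sigma=0.5)
    # Consumer B has a fixed money income of 10.
    x_b = ces_demand(p, m=10.0, sigma=2.0)
    return x_a + x_b

# Both individual demands slope downward at a fixed income...
assert ces_demand(2.0, 10.0, 0.5) < ces_demand(1.0, 10.0, 0.5)
assert ces_demand(2.0, 10.0, 2.0) < ces_demand(1.0, 10.0, 2.0)
# ...yet the market curve falls and then RISES again as p climbs:
print("D(1) =", market_demand(1.0))
print("D(4) =", market_demand(4.0))
print("D(8) =", market_demand(8.0))   # higher than D(4)
```

This is only a sketch of the aggregation problem, not the full Sonnenschein–Mantel–Debreu construction, but it shows the essential point: “downward sloping for each individual” does not survive the trip to the market level once incomes depend on prices.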
Yet DSGE models represent the household sector of the economy as a single, utility-maximising individual! Why? Because their authors believe that it’s quite OK to model the entire economy as a single individual. Why? Because the textbook from which they learnt their economics told them so. Here is how one of the standard texts for Honours and PhD education in economics, Varian’s Microeconomic Analysis, puts the problem:
“Unfortunately … The aggregate demand function will in general possess no interesting properties … The neoclassical theory of the consumer places no restrictions on aggregate behaviour in general.” (Varian 1992)
But then he states that this problem can be avoided by assuming that
“all individual consumers’ indirect utility functions take the Gorman form… [where] … the marginal propensity to consume good j is independent of the level of income of any consumer and also constant across consumers… This demand function can in fact be generated by a representative consumer…” Varian (1992)
Stripped of its jargon, this says that market demand curves will slope downwards if we assume that there’s just one individual and just one commodity! Stated so baldly, no-one could take this argument seriously—let alone base models of the economy on it. But stated in the oblique and sanitised manner of an economics textbook, the absurd becomes the norm.
The education of economists at most universities (not, I am pleased to say, my own university) is therefore the farce that turns fallacy into tragedy. Fatal flaws in the theory that are evident in the original research papers of the discipline are glossed over in economics textbooks and the standard subjects based on them.
As a result, most practising economists, who read the textbooks but not the original literature, are completely unaware of the pitiful foundations on which their carefully crafted models are built. Their assurances about the future are therefore utterly unreliable.
I’ll happily remain a bear while theories like neoclassical economics predict the imminent arrival of spring.
Fama, E. F. and French, K. R. 2004, ‘The Capital Asset Pricing Model: Theory and Evidence’, The Journal of Economic Perspectives, vol. 18, no. 3, pp. 25–46.
Hicks, J. 1981, ‘IS-LM: An Explanation’, Journal of Post Keynesian Economics, vol. 3, no. 2, pp. 139–154.
Shafer, W. and Sonnenschein, H. 1993, ‘Market demand and excess demand functions’, in Handbook of Mathematical Economics, vol. 2, Elsevier.
Varian, H. R. 1992, Microeconomic Analysis, W.W. Norton, New York.