Social Cooperation: Is the free market argument in need of rebranding?

The standard free market analysis places the individual at its center. As Adam Smith famously noted in 1776, although acting in his own self-interest, an individual in a free market is “led by an invisible hand to promote an end which was no part of his intention.” And it is generally argued that competition is the driving force behind the benefits of the market process. Entrepreneurs constantly search for new opportunities to earn profits, and as a result they find more efficient ways to provide goods to consumers.

Phrased in this way, the free market argument tends to evoke images of Social Darwinism – the best rise to the top, and the weak are left behind. Competition implies a constant struggle in which market participants seek their own benefit at the expense of others. This vision often leads critics to argue that the free market ideal generates an uncaring society. If everybody acts only in their own self-interest, there is no room for the cooperative behavior that is essential to human interaction. Morality, emotion, personal connections – none of it matters. The free market places “profit over people.”

This critique stems from a wildly incorrect reading of the free market argument.

Go back to Adam Smith. At the center of his work is the idea of the division of labor. A market economy does not thrive because individuals work in isolation. Instead, it depends entirely on the relationships between individuals, focusing each person’s talents on an activity where they possess a comparative advantage.

Perhaps the best illustration of the role of cooperation in a market economy is Leonard Read’s famous essay I, Pencil (there is also an excellent video inspired by the essay). Read points out that no individual on their own knows how to make even something as simple as a pencil. The production process requires dozens of firms and hundreds of workers, each performing specialized tasks with little knowledge of the final product. There is no planner dictating how to make a pencil, and yet, through the actions of individuals and the interactions between them, the production process arises spontaneously. An individual acting alone would quickly fail in a market economy.

Ludwig von Mises’s famous treatise Human Action, a comprehensive analysis of the workings of the free market system, was almost given a different title: Social Cooperation. Although Mises dropped this alternate title, the theme that markets depend as much on cooperation between individuals as they do on individual action itself runs throughout the book. Mises notes:

Within the frame of social cooperation there can emerge between members of society feelings of sympathy and friendship and a sense of belonging together. These feelings are the source of man’s most delightful and most sublime experiences. They are the most precious adornment of life; they lift the animal species man to the heights of a really human existence.
Human Action, p. 144

A free market, in Mises’s view, doesn’t destroy relationships between individuals, but instead fosters these feelings. Even if we take the idea of “Social Darwinism” seriously, even if we admit that all individuals are driven by the desire to fight for their own survival, that doesn’t lead us to a world of selfishness (in a narrow sense) because “the most adequate means of improving his condition is social cooperation and the division of labor” (Human Action, p. 176).

But the argument that markets and morals are inconsistent faces an even deeper flaw. In a market economy we have a choice. Of course we can choose to think only of ourselves, to put money over family, to value material goods over relationships. But that has absolutely nothing to do with the free market itself. Nothing in the market argument says that I should only care about wealth. If you want to put other priorities first, nobody in a free market has any right to stop you (the catch is that you also don’t have any right to make other people pay you).

A free market doesn’t place any moral judgement on the actions of individuals. It is perfectly consistent with both a savage society in which everybody fights for their narrow self-interest and ignores others and a responsible one in which we care for our fellow humans. It is up to each of us as individuals to choose to live our lives morally (but of course, this choice is only an illusion).

What is the alternative? The only clear alternative I can see is to use the state to try to impose your morals on others. By enacting laws that force people to behave morally, maybe we can create a more caring society.

Such a system seems doomed to have the opposite effect. Let’s say you believe that redistribution of wealth is important. Poor people aren’t poor because they didn’t work hard. They just had bad luck. It’s the responsibility of the rich to help these people out. I am sympathetic to this reasoning. However, by forcing people to give up their wealth through taxation, we change the equation from one of responsibility to one of coercion. Rather than giving to the poor out of some sense of moral duty, I give because I don’t want to go to jail. Is attempting to legislate morality in this way more likely to generate a caring society or a resentful one? Respect between classes, or class warfare?

The free market argument should not marry itself to the individual. It is true that all actions must at their core come from individual decisions, but the market only works through the relationships between individuals. Human Action is only half of the story. Social Cooperation is equally important. By obscuring this fact, defenders of markets concede too much. Emphasizing efficiency and the incredible material progress society has made since adopting a market system is fine, but we can’t ignore the moral argument. Morality can’t be imposed. It has to be a choice. And only a free society offers that choice. “Liberty is an opportunity for doing good, but this is so only when it is also an opportunity for doing wrong” (Hayek, The Constitution of Liberty, p. 142).

What’s Wrong With Modern Macro? Part 9: Carrying on the Torch of the Market Socialists

Part 9 in a series of posts on modern macroeconomics. This post lays out a more philosophical critique of common macroeconomic models by drawing a parallel between the standard neoclassical model and the idea of “market socialism” developed in the early 20th century. 


If an economist had access to all of the data in the economy, macroeconomics would be easy. Given exact knowledge of every individual’s preferences, every resource available in the economy, and every technology that could ever be invented to turn those resources into goods, the largest remaining problem for macroeconomics would be waiting for a computer fast enough to process all of that information. As Hayek pointed out so brilliantly 60 years ago, the reason we need markets at all is precisely because so much of this information is unknown.

Read any macroeconomics paper written in the last 30 years, and you’d be lucky to find any acknowledgement of this crucial problem. Take, for example, Kydland and Prescott’s 1982 paper, which is widely seen as the beginning of the DSGE framework. The headings in the model section are Technology, Preferences, Information Structure, and Equilibrium. Since then, almost every paper has followed a similar structure. Specify the exact environment that defines the market, and equilibrium prices and allocations simply pop out as a result.

What’s wrong with this method of doing economics? To understand the issue, we need to take a step back to an earlier debate.

Mises’s Critique of Socialism

In 1922, Ludwig von Mises published a book called Socialism that remains one of the most comprehensive and effective critiques of socialism ever written. In it, he developed his famous “calculation” argument. Importantly, Mises’s argument did not depend on morality, as he freely admitted that “all rights derive from violence” (42). Neither did his argument depend on the incentives of social planners. “Even angels,” claims Mises, “could not form a socialist community” (451). Instead, Mises makes a far more powerful argument: socialism is practically impossible.

I can’t explain the argument any better than Mises himself, so here is a quote that makes his main point:

Let us try to imagine the position of a socialist community. There will be hundreds and thousands of establishments in which work is going on. A minority of these will produce goods ready for use. The majority will produce capital goods and semi-manufactures. All these establishments will be closely connected. Each commodity produced will pass through a whole series of such establishments before it is ready for consumption. Yet in the incessant press of all these processes the economic administration will have no real sense of direction. It will have no means of ascertaining whether a given piece of work is really necessary, whether labour and material are not being wasted in completing it. How would it discover which of two processes was the more satisfactory? At best, it could compare the quantity of ultimate products. But only rarely could it compare the expenditure incurred in their production. It would know exactly—or it would imagine it knew—what it wanted to produce. It ought therefore to set about obtaining the desired results with the smallest possible expenditure. But to do this it would have to be able to make calculations. And such calculations must be calculations of value. They could not be merely “technical,” they could not be calculations of the objective use-value of goods and services; this is so obvious that it needs no further demonstration
Mises (1922) – Socialism, p. 120

Prices are what allow calculation in a market economy. In a socialist economy, market prices cannot exist. Socialism therefore is doomed to fail regardless of the intentions or morality of the planners. Even if they want what is best for society, they will never be able to achieve it.

The Response to Mises: Market Socialism

Although Mises’s argument was effective, the socialists weren’t prepared to give up so easily. Instead, they offered a new idea that attempted to provide a method of implementing socialist planning without falling into the calculation problems outlined by Mises. Led by Oskar Lange and Abba Lerner among others, the seemingly contradictory idea of “market socialism” was born.

Lange’s argument begins by conceding that Mises was right. Calculation is impossible without prices. However, he argues there is no reason why those prices have to come from a market. Economic theory has already demonstrated the process through which efficient markets work. In particular, prices are set equal to marginal cost. In a market, this condition arises naturally from competition. In a market socialist society, it would be imposed by a planner. In Lange’s view, not only would such a rule match a market economy in terms of efficiency, but it could even offer improvements by dealing with problems like monopoly where competition is unable to drive down prices.

We are left with one final problem, which is the pricing of higher-order capital goods. Lange admits that these goods pose a more difficult problem, but insists that it could be solved in the same way the market solves it: trial and error. In other words, just as entrepreneurs adjust prices in response to supply and demand, so could social planners. When demand is greater than supply, increase prices, and when supply is greater than demand, reduce them. At worst, Lange argues, this system is at least as good as a free market, and at best it is far better since “the Central Planning Board has a much wider knowledge of what is going on in the whole economic system than any private entrepreneur can ever have” (Lange (1936) – On the Economic Theory of Socialism, Part One, p. 67).
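In modern notation, Lange’s trial-and-error rule for these goods amounts to something like a Walrasian tâtonnement process. The linear adjustment rule and the speed parameter below are my own illustrative assumptions, not anything Lange wrote down:

$$p_{i,t+1} = p_{i,t} + \alpha_i \big[ D_i(p_t) - S_i(p_t) \big], \qquad \alpha_i > 0$$

The Central Planning Board raises the price of any good i for which demand exceeds supply and lowers it whenever supply exceeds demand, while the prices of consumer goods are pinned down by the marginal cost rule described above.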

The Market Socialist Misunderstanding: Why Do Markets Work?

Lange’s argument should be appealing to anybody who takes standard economic theory seriously. A Walrasian general equilibrium gives us specific conditions under which an economy operates most efficiently. We talk about whether decentralized competition can lead to these conditions, but why do we even need to bother? We know the solution, so why not jump there directly?

But by taking the model seriously, we lose sight of the process that it is trying to represent. For example, in the model the task of a firm is simple. Perfect competition has already driven prices down to their efficient level, and any deviation from this price will immediately fail. As Mises emphasized, however, the market forces that bring about this price have to come from the constant searching of entrepreneurs for new profit opportunities. He concedes that “it is quite easy to postulate a socialist economic order under stationary conditions” (Socialism, p. 163). In contrast, the real world is characterized by constantly shifting equilibrium conditions. The market socialist answer assumes that we know the equations that characterize a market. Mises argues that these equations can only come through the market process.

Hayek also made an essential contribution to the market socialism debate through his work on the role of the price system in coordinating market activity. For Hayek, prices are a tool for gathering pieces of knowledge dispersed among millions of individuals. An entrepreneur does not need to know the relative scarcities of various goods when choosing the most efficient production process. They only need to observe the price. In this way, Lange’s pricing strategy cannot hope to replicate the process of a dynamic economy. When prices are no longer set by market participants looking to achieve the best allocation of resources, they lose almost all of their information content.

The final piece missing from Lange’s analysis is perhaps also the most important: profit and loss. In Lange’s depiction of a market economy, he seems to imagine one that is already close to an equilibrium. In particular, he assumes that the production structures in place are already the most efficient. Without this assumption, there is no way to determine the marginal cost and no way to use trial and error pricing effectively. Mises and Hayek instead view an economy as constantly moving towards an equilibrium but never reaching it. Entrepreneurs constantly search for both new products to sell and new methods to produce existing products more efficiently. Good ideas are rewarded with profits and bad ones driven out by losses. Without this mechanism, what incentive is there for anybody to challenge the existing economic structure? Lange never provides an answer.

For a longer discussion on the socialism debate, see a paper I wrote here.

Repeating the Same Mistakes

So what does any of this have to do with modern macroeconomics? Look back at the example I gave at the beginning of this post of Kydland and Prescott’s famous paper. Like the market socialists, their paper begins from the premise that we know all of the relevant information about the economic environment. We know the technologies available, we know people’s preferences, and we assume that the agents in the model know these features as well. The parallels between the arguments of Lange and modern macroeconomics are perhaps most clear when we consider the discussion of the “social planner” in many macroeconomic papers. A common exercise performed in many of these studies is to compare the solution of a “planner’s problem” to the outcome of a decentralized competitive market. And in these setups the planner can always do at least as well as the market and usually better, so the door is opened for policy to improve market outcomes.
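For readers who haven’t seen the exercise, a “planner’s problem” in a standard neoclassical setting looks roughly like this (the notation is the textbook one, and the specific constraint is just the usual illustration):

$$\max_{\{c_t,\, l_t,\, k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t\, u(c_t, l_t) \quad \text{subject to} \quad c_t + k_{t+1} = A_t F(k_t, l_t) + (1-\delta) k_t$$

The planner is simply endowed with knowledge of the utility function u, the production function F, and the stochastic process driving productivity A_t, which is precisely the kind of knowledge Mises and Hayek argued no central authority could ever assemble.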

But because most macro models outline the economic environment so explicitly, competition in the sense described by Mises and Hayek has once again disappeared. The economy in a neoclassical model finds itself perpetually at its equilibrium state. Prices are found such that the market clears. Profits are eliminated by competition. Everybody’s plans are fulfilled (except for some exogenous shocks). No thought is given to the process that led to that state.

Is there a problem with ignoring this process? It depends on the question we want to answer. If we believe that economies are quick to adjust to a new environment, then the process of adjusting to a new equilibrium becomes trivial and we need only compare results in different equilibria. If, however, we believe that the economic environment is constantly changing, then the adjustment process becomes the primary economic problem that we want to explain. Modern macro has heavily invested in answering the former question. The latter appears to me to be far more interesting and far more relevant. The current macroeconomic toolbox offers little room to allow us to explore these dynamics.

 

 

What’s Wrong With Modern Macro? Part 8: Rational Expectations Aren’t so Rational

Part 8 in a series of posts on modern macroeconomics. This post continues the criticism of the underlying assumptions of most macroeconomic models. It focuses on the assumption of rational expectations, which was described briefly in Part 2. This post will summarize many of the points I made in a longer survey paper I wrote about expectations in macroeconomics that can be found here.


What economists imagine to be rational forecasting would be considered obviously irrational by anyone in the real world who is minimally rational
Roman Frydman (2013) – Rethinking Expectations: The Way Forward for Macroeconomics, p. 148

Why Rational Expectations?

Although expectations play a large role in many explanations of business cycles, inflation, and other macroeconomic data, modeling expectations has always been a challenge. Early models of expectations relied on backward-looking adaptive expectations. In other words, people form their expectations by looking at past trends and extrapolating them forward. Such a process might seem plausible, but there is a substantial problem with using a purely adaptive formulation of expectations in an economic model.

For example, consider a firm that needs to choose how much of a good to produce before it learns the price. If it expects the price to be high, it will want to produce a lot, and vice versa. If we assume firms expect today’s price to be the same as tomorrow’s, they will consistently be wrong. When the price is low, they expect a low price and produce a small amount. But the low supply leads to a high price in equilibrium. A smart firm would notice these errors and revise its expectations in order to profit. As Muth argued in his original defense of rational expectations, “if the prediction of the theory were substantially better than the expectations of the firms, then there would be opportunities for ‘the insider’ to profit from the knowledge.” In equilibrium, these kinds of profit opportunities would be eliminated by intelligent entrepreneurs.
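A stripped-down version of this story is the classic cobweb model. The linear supply and demand curves below are purely illustrative:

$$q^s_t = b\, p^e_t, \qquad q^d_t = a - c\, p_t, \qquad p^e_t = p_{t-1}$$

Setting quantity supplied equal to quantity demanded gives

$$p_t = \frac{a - b\, p_{t-1}}{c},$$

so a low price this period produces a high price next period and vice versa. A firm that simply extrapolates yesterday’s price is wrong in a predictable direction every single period, exactly the kind of systematic error Muth’s “insider” could exploit.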

The solution, proposed by Muth and popularized in macro by Lucas, was simply to assume that agents have the same model of the economy as the economist. Under rational expectations, an agent does not need to look at past data to make a forecast. Instead, their expectations are model-based and forward-looking. If an economist can detect consistent errors in an agent’s forecasting, rational expectations assumes that the agents themselves can also detect those errors and correct them.
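In symbols, rational expectations replaces the backward-looking rule with the requirement that subjective forecasts coincide with the mathematical expectation implied by the model itself, conditional on the information agents have:

$$p^e_{t+1} = E\big[\, p_{t+1} \mid \Omega_t \,\big]$$

where Ω_t denotes the information available at time t. Forecast errors can still occur, but they must be unpredictable given anything in that information set.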

What Does the Data Tell Us?

The intuitive defense of rational expectations is appealing and certainly rational expectations marked an improvement over previous methods. But if previous models of expectations gave agents too little cognitive ability, rational expectations models give them far too much. Rational expectations leaves little room for disagreement between agents. Each one needs to instantly jump to the “correct” model of the economy (which happens to correspond to the one created by the economist) and assume every other agent has made the exact same jump. As Thomas Sargent put it, rational expectations leads to a “communism of models. All agents inside the model, the econometrician, and God share the same model.”

The problem, as Mankiw et al. note, is that “the data easily reject this assumption. Anyone who has looked at survey data on expectations, either those of the general public or those of professional forecasters, can attest to the fact that disagreement is substantial.” Branch (2004) and Carroll (2003) offer further evidence that heterogeneous expectations play an important role in forecasting. Another possible explanation for disagreement in forecasts is that agents have access to different information. Even if each agent knew the correct model of the economy, access to private information could lead to different predictions. Mordecai Kurz has argued forcefully that disagreement does not stem from private information, but rather from different interpretations of the same information.

Experimental evidence also points to heterogeneous expectations. In a series of “learning to forecast” experiments, Cars Hommes and coauthors have shown that when agents are asked to forecast the price of an asset generated by an unknown model, they appear to resort to simple heuristics that often differ substantially from the rational expectations forecast. Surveying the empirical evidence surrounding the assumptions of macroeconomics as a whole, John Duffy concludes “the evidence to date suggests that human subject behavior is often at odds with the standard micro-assumptions of macroeconomic models.”

“As if” Isn’t Enough to Save Rational Expectations

In response to concerns about the assumptions of economics, Milton Friedman offered a powerful defense. He agreed that it was ridiculous to assume that agents make complex calculations when making economic decisions, but claimed that this is no argument against assuming that they act as if they did. Famously, he gave the analogy of an expert billiard player. Nobody would ever believe that the player planned all of his shots using mathematical equations to determine the exact placement, and yet a physicist who assumed he did make those calculations could provide an excellent model of his behavior.

The same logic applies in economics. Agents who make forecasts using incorrect models will be driven out as they are outperformed by those with better models until only the rational expectations forecast remains. Except this argument only works if rational expectations is actually the best forecast. As soon as we admit that people use various models to forecast, there is no guarantee it will be. Even if some agents know the correct model, they cannot predict the path of economic variables unless they are sure that others are also using that model. Using rational expectations might be the best strategy when others are also using it, but if others are using incorrect models, it may be optimal for me to use an incorrect model as well. In game theory terms, rational expectations is a Nash Equilibrium, but not a dominant strategy (Guesnerie 2010).

Still, Friedman’s argument implies that we shouldn’t worry too much about the assumptions underlying our model as long as it provides predictions that help us understand the world. Rational expectations models also fail in this regard. Even after including many ad hoc fixes to standard models like wage and price stickiness, investment adjustment costs, inflation indexing, and additional shocks, DSGE models with rational expectations still provide weak forecasting ability, losing out to simpler reduced form vector autoregressions (more on this in future posts).

Despite these criticisms, we still need an answer to the Lucas Critique. We don’t want agents to ignore information that could help them to improve their forecasts. Rational expectations ensures they do not, but it is far too strong. Weaker assumptions on how agents use the information to learn and improve their forecasting model over time retain many of the desirable properties of rational expectations while dropping some of its less realistic ones. I will return to these formulations in a future post (but read the paper linked in the intro – and here again – if you’re interested).

 

What Does it Mean to Be Rational?

One of the most basic assumptions that underlies economic analysis is that people are rational. Given a set of possible choices, an individual in an economic model always chooses the one they think will give them the largest net benefit. This assumption has come under attack by many within the profession as well as by outsiders.

Some of these criticisms are easy to reject. For example, it is common for non-economists to make the jump from rationality to selfishness and denounce economics for assuming all humans care only about themselves. Similar criticisms fault economics for being too materialistic when real people actually value intangible goods as well. Neither of these arguments hits the mark. When an economist talks of rationality, they mean it in the broadest possible way. If donating to charity or giving gifts or going to poor countries to build houses is important to you, economics does not say you are wrong to choose those things. Morality or duty can have just as powerful an impact on your choices as material desires. Economics makes no distinction between these categories. You choose what you prefer. The reason you prefer it is irrelevant for an economist.

More thoughtful criticisms argue that even after defining preferences correctly, people sometimes make decisions that actively work against their own interests. The study of these kinds of issues with decision-making has come to be called “behavioral economics.” One of the most prominent behavioral economists, Richard Thaler, likes to use the example of the Herzfeld Caribbean Basin Fund. Despite its NASDAQ ticker CUBA, the fund has little to do with the country Cuba. Nevertheless, when Obama announced an easing of Cuban trade restrictions in 2014, the stock price increased 70%. A more recent example occurred when investors swarmed to Nintendo stock after the release of Pokemon Go before realizing that Nintendo doesn’t even make the game.

But do these examples show that people are irrational? I don’t think so. There needs to be a distinction between being rational and being right. Making mistakes doesn’t mean that your original goal wasn’t to maximize your own utility; it just means that the path you chose was not the best way to accomplish that goal. To quote Ludwig von Mises, “human action is necessarily always rational” because “the ultimate end of action is always the satisfaction of some desires” (Human Action, p. 18). He gives the example of doctors 100 years ago trying to treat cancer. Although the methods used at that time may appear irrational by modern standards, those methods were what the doctors considered to be the best given the knowledge available at the time. Mises even applies the same logic to emotional actions. Rather than call emotionally charged actions irrational, Mises instead chooses to argue that “emotions disarrange valuations” (p. 16). Again, an economist cannot differentiate between the reasons an individual chooses to act. Anger is as justifiable as deliberate calculation.

That is not to say that the behavioralists don’t have a point. They just chose the wrong word. People are not irrational. They just happen to be wrong quite often. The confusion is understandable, however, because rationality itself is not well defined. Under my definition, rationality only requires that agents make the best decision possible given their current knowledge. Often economic models go a step further and make specific (and sometimes very strong) assumptions about what that knowledge should include. Even in models with uncertainty, people are usually expected to know the probabilities of events occurring. In reality, people often face not only unknown events, but unknowable events that could never be predicted beforehand. In such a world, there is no reason to expect that even the most intelligent rational agent could make a decision that appears rational in a framework with complete knowledge.

Roman Frydman describes this point nicely:

After uncovering massive evidence that contemporary economics’ standard of rationality fails to capture adequately how individuals actually make decisions, the only sensible conclusion to draw was that this standard was utterly wrong. Instead, behavioral economists concluded that individuals are less than fully rational or are irrational
Rethinking Expectations: The Way Forward for Macroeconomics, pp. 148-149

Don’t throw out rationality. Throw out the strong knowledge assumptions. Perhaps the worst of these is the assumption of “rational expectations” in macroeconomics. I will have a post soon arguing that they are not really rational at all.

What’s Wrong With Modern Macro? Part 7: The Illusion of Microfoundations II: The Representative Agent

Part 7 in a series of posts on modern macroeconomics. Part 6 began a criticism of the form of “microfoundations” used in DSGE models. This post continues that critique by emphasizing the flaws in using a “representative agent.” This issue has been heavily scrutinized in the past and so this post primarily offers a synthesis of material found in articles from Alan Kirman and Kevin Hoover (1 and 2). 


One of the key selling points of DSGE models is that they are supposedly derived from microfoundations. Since only individuals can act, all aggregates must necessarily be the result of the interactions of these individuals. Understanding the mechanisms that lie behind the aggregates therefore seems essential to understanding the movements in the aggregates themselves. Lucas’s attack on Keynesian economics was motivated by similar logic. We can observe relationships between aggregate variables, but if we don’t understand the individual behavior that drives these relationships, how do we know if they will hold up when the environment changes?

I agree that ignoring individual behavior is an enormous problem for macroeconomics. But DSGE models do little to solve this problem.

Ideally, microfoundations would mean modeling behavior at an individual level. Each individual would then make choices based on current economic conditions, and these choices would aggregate to macroeconomic variables. Unfortunately, including enough agents to make this exercise interesting quickly becomes unmanageable in a mathematical model. As a result, “microfoundations” in a DSGE model usually means assuming that the decisions of all individuals can be summarized by the decisions of a single “representative agent.” Coordination between agents, differences in preferences or beliefs, and even the act of trading that is the hallmark of a market economy are eliminated from the discussion entirely. Although we have some vague notion that these activities are going on in the background, the workings of the model are assumed to be represented by the actions of this single agent.

So our microfoundations actually end up looking a lot closer to an analysis of aggregates than an analysis of true individual behavior. As Kevin Hoover writes, they represent “only a simulacrum of microeconomics, since no agent in the economy really faces the decision problem they represent. Seen that way, representative-agent models are macroeconomic, not microfoundational, models, although macroeconomic models that are formulated subject to an arbitrary set of criteria.” Hoover pushes back against the defense that representative agent models are merely a step towards truly microfounded models, arguing that not only would full microfoundations be infeasible but also that many economists do take the representative agent seriously on its own terms, using it both for quantitative predictions and policy advice.

But is the use of the representative agent really a problem? Even if it fails on its promise to deliver true “microfoundations,” isn’t it still an improvement over neglecting optimizing behavior entirely? Possibly, but using a representative agent offers only a superficial manifestation of individual decision-making, opening the door for misinterpretation. Using a representative agent assumes that the decisions of one agent at a macro level would be made in the same way as the decisions of millions of agents at a micro level. The theoretical basis for this assumption is weak at best.

In a survey of the representative agent approach, Alan Kirman describes many of the problems that arise when many agents are aggregated into a single representative. First, he presents the theoretical results of Sonnenschein (1972), Debreu (1974), and Mantel (1976), which show that even with strong assumptions on individual preferences, the equilibrium that results from adding up individual behavior is not necessarily stable or unique.
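The punchline of those results, often called the Sonnenschein-Mantel-Debreu theorem, is worth stating: even if every consumer is a well-behaved utility maximizer, the aggregate excess demand function z(p) they generate is restricted only by continuity, homogeneity of degree zero in prices, and Walras’s law,

$$p \cdot z(p) = 0,$$

and can otherwise take essentially any shape. Nothing guarantees that the resulting equilibrium is unique, or that it is stable under reasonable price adjustment processes.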

The problem runs even deeper. Even if we assume aggregation results in a nice stable equilibrium, worrying results begin to arise as soon as we start to do anything with that equilibrium. One of the primary reasons for developing a model in the first place is to see how it reacts to policy changes or other shocks. Using a representative agent to conduct such an analysis implicitly assumes that the new aggregate equilibrium will still correspond to decisions of individuals. Nothing guarantees that it will. Kirman gives a simple example of a two-person economy where the representative agent’s choice makes each individual worse off. The Lucas Critique then applies here just as strongly as it does for old Keynesian models. Despite the veneer of optimization and rational choice, a representative agent model still abstracts from individual behavior in potentially harmful ways.

Of course, macroeconomists have not entirely ignored these criticisms, and models with heterogeneous agents have become increasingly popular in recent work. However, keeping track of more than one agent makes it nearly impossible to achieve useful mathematical results. The general process for using heterogeneous agents in a DSGE model, then, is to first prove that these agents can be aggregated and summarized by a set of aggregate equations. Although beginning from heterogeneity and deriving aggregation explicitly helps to ensure that the problems outlined above do not arise, it still imposes severe restrictions on the types of heterogeneity allowed. It would be an extraordinary coincidence if the restrictions that enable mathematical tractability also happened to be the ones relevant for understanding reality.

We are left with two choices. Drop microfoundations or drop DSGE. The current DSGE framework only offers an illusion of microfoundations. It introduces optimizing behavior at an aggregate level, but has difficulty capturing many of the actions essential to the workings of the market economy at a micro level. It is not a first step to discovering a way to model true microfoundations because it is not a tool well-suited to analyzing the behavior of more than one person at a time. Future posts will explore some models that are.

 

 

 

Competition and Market Power: Why a “Well-Regulated” Market is an Impossible Ideal

Standard accounts of basic economics usually begin by outlining the features of “perfect competition.” For example, Mankiw’s popular Principles of Economics defines a perfectly competitive market as one that satisfies the following properties:

  1. The goods offered for sale are all exactly the same.
  2. The buyers and sellers are so numerous that no single buyer or seller has any influence over the market price.

The implication of these two properties is that all firms in a perfectly competitive market are “price takers.” If any firm tried to set a price higher than the current market price, its sales would immediately drop to nothing as consumers shift to other firms offering the exact same good at a cheaper price. As long as firms can enter and exit a market freely, perfect competition also implies zero profits. Any market experiencing positive profits would quickly see entry as firms try to take advantage of the new opportunity. The entry of new firms increases the supply of the good, which reduces the price and therefore pushes profits down.
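In textbook notation, the price-taking and zero-profit logic runs roughly as follows (this is the standard sketch, not anything specific to Mankiw). Each firm takes the market price p as given and chooses output q to maximize profit:

$$\max_{q} \; \pi = p\, q - C(q) \quad\Rightarrow\quad p = MC(q)$$

Free entry then drives the price down to the minimum of average cost, at which point

$$\pi = \big(p - AC(q)\big)\, q = 0.$$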

After defining the perfectly competitive market, the standard account begins to extol its virtues. In particular, a formal result called the First Welfare Theorem shows that in a perfectly competitive equilibrium, the allocation of goods is Pareto efficient (which just means that no other allocation could make somebody better off without making somebody else worse off). So markets are great. Without any planner or government oversight of any kind, they arrive at an efficient outcome on their own.

But soon after developing the idea, we begin to poke holes in perfect competition. How many markets can really be said to have a completely homogeneous good? How many markets have completely free entry and no room for firms to set their own price? It’s pretty hard to answer anything other than zero. Other issues also arise when we begin to think about the way markets work in reality. The presence of externalities (costs to society that are not entirely paid by the individuals making a decision – pollution is the classic example) causes the First Welfare Theorem to break down. And so we open the door for government intervention. If the perfectly competitive market is so good, and reality differs from this ideal, doesn’t it make sense for governments to correct these market failures, to break up monopolies, to deal with externalities?

Maybe, but it’s not that simple. The perfect competition model, despite being a cool mathematical tool that is sometimes useful in deriving economic results, is an unrealistic benchmark. As Hayek points out in his essay, “The Meaning of Competition,” the concept of “perfect competition” necessarily requires that “not only will each producer by his experience learn the same facts as every other but also he will thus come to know what his fellows know and in consequence the elasticity of the demand for his own product.” When held to this standard, nobody can deny that markets constantly fail.

The power of the free market, however, has little to do with its ability to achieve the conditions of perfect competition. In fact, that model leaves out many of the factors that would be considered essential to a competitive market. Harold Demsetz points out this problem in an analysis of antitrust legislation.

[The perfect competition model] is not very useful in a debate about the efficacy of antitrust precedent. It ignores technological competition by taking technology as given. It neglects competition by size of firm by assuming that the atomistically sized firm is the efficiently sized firm. It offers no productive role for reputational competition because it assumes full knowledge of prices and goods, and it ignores competition to change demands by taking tastes as given and fully known. Its informational and homogeneity assumptions leave no room for firms to compete by being different from other firms. Within its narrow confines, the model examines the consequences of only one type of competition, price competition between known, identical goods produced with full awareness of all technologies. This is an important conceptual form of competition, and when focusing on it alone we may speak sensibly about maximizing the intensity of competition. Yet, this narrowness makes the model a poor source of standards for antitrust policy.
Demsetz (1992) – How Many Cheers For Antitrust’s 100 Years?

Although the types of competition outlined by Demsetz are a sign of market power by firms, they are not necessarily a sign that the market has failed or that governments can improve the situation. Let me tell a simple story to illustrate this point. Assume a firm develops a new technology that they are able to prevent other firms from immediately replicating (either because of a patent, secrecy, a high fixed cost of entry, etc.). This firm is now a monopoly producer of that product and can therefore set a price much higher than its cost and make a large profit. The government sees this development and orders the firm to release its plans so that others can replicate the technology and produce their own version. Prices fall as new firms enter and profits go to zero. Consumers are better off since prices are lower and they have a larger choice of products. (A similar story could also be told if the government simply mandated a lower price by monopoly firms).

But the story isn’t over. If I’m another entrepreneur (or even an existing firm) watching this sequence of events, I’m a bit worried. That new idea I was thinking about is going to cost a lot. If I had the possibility of making a large profit, maybe I would be willing to take the risk and go for it anyway. If, on the other hand, I knew for sure that even if I succeed the government will immediately reduce my profits to zero, am I still going to undertake that project? Not a chance. The potential for future profits is an incredibly important incentive for innovation.

Here’s another example from the real world that illustrates the opposite case. In the late 1990s, Microsoft tried to bundle Internet Explorer with their Windows operating system (essentially giving away Explorer for free). This move made it difficult for independent internet browsers to compete (Netscape was the market leader at the time). An antitrust lawsuit was brought against Microsoft and they were initially ordered to break up (which never actually occurred in the end as far as I know, but that doesn’t matter for the story). In the EU, they were required to provide a browser choice page when installing Windows.

In each of the two examples above, there is a clear tradeoff. In the first, consumers are better off in the short run (lower prices), but potentially worse off in the long run (less innovation). The second case is exactly the opposite. Consumers are worse off in the short run (they don’t get a browser for free) but potentially better off in the long run (more browser competition). Can we say for sure whether regulation helps or hurts in either case? Can we even say whether the regulation would push the market to be more competitive or less? I don’t see how (but which browser you are using right now despite the relatively lenient restrictions on Microsoft might give some indication).

I’m not saying regulation is never a good idea in theory. But in practice, it turns out to be really hard. Even in the cases above where it is obvious that a firm is trying to take advantage of monopoly power, it remains unclear whether a move closer to “perfect competition” will result in an increase in actual competition. You can of course pick apart the stories above and come up with some regulatory scheme that balances present and future costs and benefits. But doing so in general would require governments to have even more information than the already ridiculous knowledge assumptions implicit in the perfect competition model. It’s easy to point out imperfections in markets. It’s much harder to figure out what to do about them.

Notice that I haven’t necessarily made an argument against regulation. The takeaway from this post should not be that markets always work or that regulation always fails (I’ll leave that for future posts!). My point is simply that pointing out a flaw in the free market does not automatically imply an opportunity for a regulatory solution. The question is much more complicated than that.

But having said that let me leave you with one final thought. Markets are incredibly dynamic. Whenever the market “fails,” all it takes is one clever entrepreneur to come up with a better method and correct the failure. When government fails? Well, maybe we can come back to that in ten years when they get around to discussing it.

Kevin Malone Economics

Don’t Be Like Kevin (Image Source: Wikimedia Commons)

In one of my favorite episodes of the TV show The Office (A Benihana Christmas), dim-witted accountant Kevin Malone faces a choice between two competing office Christmas parties: the traditional Christmas party thrown by Angela, or Pam and Karen’s exciting new party. After weighing various factors (double fudge brownies…Angela), Kevin decides “I think I’ll go to Angela’s party, because that’s the party I know.” I can’t help but see parallels between Kevin’s reasoning and some of the recent discussions about the role of DSGE models in macroeconomics. It’s understandable. Many economists have poured years of their lives into this research agenda. Change is hard. But sticking with something simply because that’s the way it has always been done is not, in my opinion, a good way to approach research.

I was driven to write this post after reading a recent post by Roger Farmer, which responds to this article by Steve Keen, which is itself responding to Olivier Blanchard’s recent comments on the state of macroeconomics. Lost yet? Luckily you don’t really need to understand the entire flow of the argument to get my point. Basically, Keen argues that the assumption in economic models that the system is always in equilibrium is poorly supported. He points to physics for comparison, where Edward Lorenz showed that the dynamics underlying weather systems were not random, but chaotic – fully deterministic, yet still impossible to predict. Although his model had equilibria, they were inherently unstable, so the system continuously fluctuated between them in an unpredictable fashion.

Keen’s point is that economics could easily be a chaotic system as well. Is there any doubt that the interactions between millions of people that make up the economic system are at least as complex as the weather? Is it possible that, like the weather, the economy is never actually in equilibrium, but rather groping towards it (or, in the case of multiple equilibria, towards one of them)? Isn’t there at least some value in looking for alternatives to the DSGE paradigm?

Roger begins his reply to Keen by claiming, “we tried that 35 years ago and rejected it. Here’s why.” I was curious to read about why it was rejected, but he goes on to say that the results of these attempts to examine whether there is complexity and chaos in economics were that “we have no way of knowing given current data limitations.” Echoing this point, a survey of some of these tests by Barnett and Serletis concludes:

We do not have the slightest idea of whether or not asset prices exhibit chaotic nonlinear dynamics produced from the nonlinear structure of the economy…While there have been many published tests for chaotic nonlinear dynamics, little agreement exists among economists about the correct conclusions.
Barnett and Serletis (2000) – Martingales, nonlinearity, and chaos

That doesn’t sound like rejection to me. We have two competing theories without a good way of determining which is more correct. So why should we rigidly stick to one? Best I can tell, the only answer is Kevin Malone Economics™. DSGE models are the way it has always been done, so unless there is some spectacular new evidence that an alternative does better, let’s just stay with what we know. I don’t buy it. The more sensible way forward in my opinion is to cultivate both approaches, to build up models using DSGE as well as alternatives until we have some way of choosing which is better. Some economists have taken this path and begun to explore alternatives, but they remain on the fringe and are largely ignored (I will have a post on some of these soon, but here is a preview). I think this is a huge mistake.

Since Roger is one of my favorite economists in large part because he is not afraid to challenge the orthodoxy, I was a bit surprised to read this argument from him. His entire career has been devoted to models with multiple equilibria and sunspot shocks. As far as I know (and I don’t know much so I could very well be wrong), these features are just as difficult to confirm empirically as is the existence of chaos. By spending an entire career developing his model, he has gotten it to a place where it can shed light on real issues (see his excellent new book, Prosperity for All), but its acceptance within the profession is still low. In his analysis of alternatives to standard DSGE models, Michael De Vroey writes:

Multiple equilibria is a great idea that many would like to adopt were it not for the daunting character of its implementation. Farmer’s work attests to this. Nonetheless, it is hard to resist the judgement that, for all its panache, it is based on a few, hard to swallow, coups de force…I regard Farmer’s work as the fruit of a solo exercise attesting to both the inventiveness of its performer and the plasticity of the neoclassical toolbox. Yet in the Preface of his book [How the Economy Works], he expressed a bigger ambition, “to overturn a way of thinking that has been established among macroeconomists for twenty years.” To this end, he will need to gather a following of economists working on enriching his model
De Vroey (2016) – A History of Macroeconomics from Keynes to Lucas and Beyond

In other words, De Vroey’s assessment of Roger’s work is quite similar to Roger’s own assessment of non-DSGE models: it’s not a bad idea and there’s possibly some potential there, but it’s not quite there yet. Just as I doubt Roger is ready to tear up his work and jump back in with traditional models (and he shouldn’t), neither should those looking for alternatives to DSGE give up simply because the attempts to this point haven’t found anything revolutionary.

I’m not saying all economists should abandon DSGE models. Maybe they are a simple and yet still adequate way of looking at the world. But maybe they’re not. Maybe there are alternatives that provide more insight into the way a complex economy works. Attempts to find these alternatives should not be met with skepticism. They shouldn’t have to drastically outperform current models in order to even be considered (especially considering current models aren’t doing so hot anyway). Of course there is no way any alternative model will be able to stand up to a research program that has gone through 40 years of revision and improvement (although it’s not clear how much improvement there has really been). The only way to find out if there are alternatives worth pursuing is to welcome and encourage researchers looking to expand the techniques of macroeconomics beyond equilibrium and beyond DSGE. If even a fraction of the brainpower currently being applied to DSGE models were shifted to looking for different methods, I am confident that a new framework would flourish and possibly come to stand beside or even replace DSGE models as the primary tool of macroeconomics.

At the end of the episode mentioned above, Kevin (along with everybody else in the office) ends up going to Pam and Karen’s party, which turns out to be way better than Angela’s. I can only hope macroeconomics continues to mirror that plot.

What’s Wrong With Modern Macro? Part 6: The Illusion of Microfoundations I: The Aggregate Production Function

Part 6 in a series of posts on modern macroeconomics. Part 4 noted the issues with using TFP as a measure of technology shocks. Part 5 criticized the use of the HP filter. Although concerning, neither of these problems is necessarily insoluble. With a better measure of technology and a better filtering method, the core of the RBC model would survive. This post begins to tear down that core, starting with the aggregate production function.


In the light of the negative conclusions derived from the Cambridge debates and from the aggregation literature, one cannot help asking why [neoclassical macroeconomists] continue using aggregate production functions
Felipe and Fisher (2003) – Aggregation in Production Functions: What Applied Economists Should Know

Remember that one of the primary reasons DSGE models were able to emerge as the dominant macroeconomic framework was their supposed derivation from “microfoundations.” They aimed to explain aggregate phenomena, but stressed that these aggregates could only come from optimizing behavior at a micro level. The spirit of this idea seems like a step in the right direction.

In its most common implementations, however, “microfoundations” almost always fails to capture this spirit. The problem with trying to generate aggregate implications from individual decision-making is that keeping track of decisions from a diverse group of agents quickly becomes computationally and mathematically unmanageable. This reality led macroeconomists to make assumptions that allowed for easy aggregation. In particular, the millions of decision-makers throughout the economy were collapsed into a single representative agent that owns a single representative firm. Here I want to focus on the firm side. A future post will deal with the problems of the representative agent.

An Aggregate Production Function

In the real world, producing any good is complicated. It not only requires an entrepreneur to have an idea for the production of a new good, but also the ability to implement that idea. It requires a proper assessment of the resources needed, the organization of a firm, hiring good workers and managers, obtaining investors and capital, properly estimating consumer demand and competition from other firms and countless other factors.

In macro models, production is simple. In many models, a single firm produces a single good, using labor and capital as its only two inputs. Of course we cannot expect the model to match many of the features of reality. It is supposed to simplify. That’s what makes it a model. But we also need to remember Einstein’s famous insight that everything should be made as simple as possible, but no simpler. Unfortunately I think we’ve gone way past that point.

Can We Really Measure Capital?

In the broadest possible categorization, productive inputs are usually placed into three categories: land, labor, and capital. Although both land and labor vary widely in quality, making them difficult to aggregate, at least there are clear choices for their units. Adding up all of the usable land area and all of the hours worked at least gives us a crude measure of the quantity of land and labor inputs.

But what can we use for capital? Capital goods are far more diverse than land or labor, varying in size, mobility, durability, and thousands of other factors. There is no obvious measure that can combine buildings, machines, desks, computers, and millions of other specialized pieces of capital equipment. The easiest choice is to use the value of the goods, but what is their value? The price at which they were bought? An estimate of the value of the production they will bring about in the future? And what units will this value be? Dollars? Labor hours needed to produce the good?

None of these seem like good options. As soon as we turn to value, we are talking about units that depend on the structure of the economy itself. An hour of labor is an hour of labor no matter what the economy looks like. With capital it gets more complicated. The same machine has a completely different value in 1916 than it does in 2016, no matter which concept of value is employed. That value is probably related to its productive capacity, but the relationship is far from clear. If the value of a firm’s capital stock has increased, can it actually produce more than before?

This question was addressed in the 1950s and 60s by Joan Robinson. Here’s how she summed up the problem:

Moreover, the production function has been a powerful instrument of miseducation. The student of economic theory is taught to write O = f (L, C) where L is a quantity of labour, C a quantity of capital and O a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labour; he is told something about the index-number problem involved in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units C is measured. Before ever he does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.
Joan Robinson (1953) – The Production Function and the Theory of Capital

Sixty years later, we’ve changed the O to a Y and the C to a K, but little else has changed. People have thought hard about how to measure capital and how to deal with the issues (here is a 250-page document on the OECD’s methods, for example), but the problem remains. We still take a diverse set of capital goods and try to fit them all under a common label. The so-called “Cambridge Capital Controversy” was never truly resolved, and neoclassical economists simply pushed on undeterred. For a good summary of the debate and its (lack of) resolution, see this relatively non-technical paper.
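For reference, the modern descendant of Robinson’s equation is the aggregate production function at the heart of nearly every RBC and DSGE model, most often written in Cobb-Douglas form (the functional form here is simply the most common choice):

$$Y_t = A_t K_t^{\alpha} L_t^{1-\alpha}$$

where Y_t is aggregate output, L_t aggregate labor hours, A_t total factor productivity, α capital’s share of income, and K_t the aggregate capital stock, measured in value terms. Robinson’s question about the units of K applies to this equation unchanged.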

What Assumptions Allow for Aggregation?

Even if we assume that there is some unit that would allow capital to be aggregated, we still face problems when trying to use an aggregate production function. One of the leading researchers on the conditions that permit aggregation in production has been Franklin Fisher. Giving the capital aggregation issue a more rigorous treatment, Fisher shows that capital can be aggregated only if every firm has the same constant-returns-to-scale production function with purely capital-augmenting technical differences, so that firms may differ only by a capital-efficiency coefficient. If you’re not an economist, that condition may not mean much, but know that it is incredibly restrictive.
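
In symbols (this is my paraphrase of the result, not Fisher’s exact statement), an economy-wide capital aggregate exists only if every firm i’s output can be written as

Y_i = F(b_i · K_i, L_i)

with the same constant-returns-to-scale function F for every firm, so that firms are allowed to differ only through the capital-efficiency coefficient b_i. Real firms, with their wildly different technologies, do not come close to satisfying this.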

The problem doesn’t get much better when we move from capital to the aggregation of labor and output. Fisher shows that aggregation of these concepts is possible only when there is no specialization (everybody can do everything) and every firm is able to produce every good (so the mix of goods each firm produces can adjust). Other authors have derived different conditions for aggregation, but none of them appear to be any less restrictive.

Do any of these restrictions matter? Even if aggregation is not strictly possible, as long as the aggregate model came close enough this wouldn’t be a serious criticism. Fisher (with co-author Jesus Felipe) surveys the many arguments against aggregate production functions, addresses counterarguments of exactly this kind, and ultimately concludes

The aggregation problem and its consequences, and the impossibility of testing empirically the aggregate production function…are substantially more serious than a mere anomaly. Macroeconomists should pause before continuing to do applied work with no sound foundation and dedicate some time to studying other approaches to value, distribution, employment, growth, technical progress etc., in order to understand which questions can legitimately be posed to the empirical aggregate data.
Felipe and Fisher (2003) – Aggregation in Production Functions: What Applied Economists Should Know

As far as I know, these concerns have not been addressed in a serious way. The aggregate production function continues to be a cornerstone of macroeconomic models. If it is seriously flawed, almost all of the work done in the last forty years becomes suspect.

But don’t worry, the worst is still to come.

What’s Wrong With Modern Macro? Part 5: Filtering Away All Our Problems

Part 5 in a series of posts on modern macroeconomics. In this post, I will describe some of the issues with the Hodrick-Prescott (HP) filter, a tool used in many macro models to isolate fluctuations at business cycle frequencies.


A plot of real GDP in the United States since 1947 looks like this

Real GDP in the United States since 1947. Data from FRED

Although some recessions are easily visible (like 2009), the business cycle is not especially easy to see. The long-run growth trend dominates the higher-frequency business cycle fluctuations. The trend is important, but if we want to isolate business cycle fluctuations we need some way to remove it. The HP filter is one method of doing that. Unlike the trends in my last post, which simply fit a line to the data, the HP filter allows the trend to change over time. The exact equation is written out just below the next figure, and here’s the link to the Wikipedia page if you’d like more detail. Plotting the HP trend against actual GDP looks like this

Real GDP against HP trend (smoothing parameter = 1600)
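
For reference, given a series y_t the filter chooses the trend τ_t that minimizes

Σ_t (y_t − τ_t)² + λ · Σ_t [(τ_{t+1} − τ_t) − (τ_t − τ_{t−1})]²

The first term penalizes deviations of the data from the trend, the second penalizes changes in the trend’s slope, and λ controls the trade-off. The standard choice for quarterly data, used in the figure above, is λ = 1600: a larger λ forces the trend toward a straight line, while a smaller λ lets it follow the data more closely.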

By removing this trend, we can isolate the deviations from it. Usually we also take the log of the data first so that the deviations can be interpreted as percentages. Doing that, we end up with

Deviations from HP trend

The last picture is what the RBC model (and most other DSGE models) try to explain. By eliminating the trend, the HP filter focuses exclusively on short-term fluctuations. That may be an interesting exercise, but it throws away much of what makes business cycles important. Look at the HP trend in recent years. Although we do see a sharp drop marking the 2008-09 recession, the trend quickly adjusts, so the slow recovery doesn’t show up at all in the HP-filtered data. The Great Recession is actually one of the smaller movements by this measure. But what causes the change in the trend? The model has no answer to this question. The RBC model and other DSGE models that explain HP-filtered data cannot hope to explain long periods of slow growth, because they begin by filtering them away.
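
If you want to reproduce these pictures yourself, a minimal sketch in Python, assuming the FRED series GDPC1 for real GDP and the hpfilter function from statsmodels, looks something like this:

```python
import numpy as np
import pandas_datareader.data as web
from statsmodels.tsa.filters.hp_filter import hpfilter

# Quarterly real GDP from FRED (GDPC1 is the chained-dollar series)
gdp = web.DataReader('GDPC1', 'fred', start='1947-01-01')['GDPC1']

# Work in logs so deviations from trend read as (approximate) percentages
log_gdp = np.log(gdp)

# Standard smoothing parameter for quarterly data
cycle, trend = hpfilter(log_gdp, lamb=1600)

# 'trend' is the HP trend of log GDP; 100 * 'cycle' is the percent deviation
print((100 * cycle).tail())
```

The cycle series is the only thing the RBC model is asked to explain; everything captured by the trend series is thrown away before the model ever sees the data.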

Let’s go back one more time to these graphs

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Notice that these graphs show the fit of the model to HP filtered data. Here’s what they look like when the filter is removed

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Not so good. And it gets worse. Recall from the last post that TFP itself was highly correlated with each of these variables. If we remove the TFP trend, the picture changes to

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Yeah. So much for that great fit. Roger Farmer describes the deceit of the RBC model brilliantly in a blog post called Real Business Cycle and the High School Olympics. He argues that when early RBC modelers noticed a conflict between their model and the data, they didn’t change the model. They changed the data. They couldn’t clear the Olympic bar of explaining business cycles, so they lowered the bar and explained only the “wiggles.” But, as Farmer contends, the important task is explaining exactly the trends that the HP filter assumes away.

Maybe the most convincing sign that something is wrong with this method of measuring business cycles is that the estimated cost of business cycles under this metric is tiny. Based on a calculation by Lucas, Ayse Imrohoroglu explains that under the assumptions of the model, an individual would need to be given only $28.96 per year to be fully compensated for the risk of business cycles. In other words, completely eliminating recessions is worth about as much as a couple of decent meals. To anyone who has lived in a real economy this number is obviously ridiculous, but when business cycles are relegated to small deviations from a moving trend, there isn’t much to be gained from eliminating the wiggles.
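
The logic behind numbers this small is Lucas’s back-of-the-envelope welfare calculation (the generic version is sketched below; it is not necessarily the exact parameterization behind the $28.96 figure). With standard constant-relative-risk-aversion preferences, the fraction of consumption a household would give up to eliminate all business cycle risk is approximately

(1/2) · γ · σ_c²

where γ is the coefficient of relative risk aversion and σ_c is the standard deviation of de-trended consumption. With γ near 1 and consumption fluctuating by only a percent or two around its trend, this works out to a few hundredths of a percent of annual consumption, which in dollar terms is a handful of meals rather than thousands of dollars.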

There are more technical reasons to be wary of the HP filter that I don’t want to get into here. A recent paper called “Why You Should Never Use the Hodrick-Prescott Filter” by James Hamilton, a prominent econometrician, goes into many of the problems with the HP filter in detail. He opens by saying “The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process.” If you don’t trust me, at least trust him.

TFP doesn’t measure productivity. HP-filtered data doesn’t capture business cycles. So the RBC model is 0/2. It doesn’t get much better from here.

Was the Great Recession a Return to Trend?

As I was putting together some graphs for my post on HP filtering (coming soon), I started playing around with some real GDP data. First, as many have done before, I fit a simple linear trend to the log of real GDP data since 1947

Log Real GDP (blue) and fitted trendline (orange). Trend annualized growth rate = 3.25%. Data from FRED

Plotted this way, the Great Recession is clearly visible and appears to have caused a permanent reduction in real GDP, since the economy shows no sign of returning to its previous trend. Taking the log of GDP is convenient because it allows us to interpret changes as percentage increases, so the linear trend implies a constant percentage growth rate for the economy. But is there any reason we should expect the economy to grow at a constant rate over time? What if growth is not exponential at all, and the economy instead grows at a slowly decreasing rate? Here’s what happens when we take the square root of real GDP instead of the natural log

Square Root Real GDP (blue) and fitted trendline (orange). Data from FRED

I have no theoretical justification for taking the square root of RGDP data, but it fits a linear trend remarkably well (slightly better than the log data). And here the Great Recession is gone. Instead, we see a ten-year period above trend from 1997 to 2007 followed by a sharp correction. If real GDP follows a quadratic path rather than an exponential one, its growth rate is decreasing over time rather than remaining constant. Here’s an illustration of what the growth rates would look like in each scenario

Trend growth rates under the assumption of exponential growth (orange) and quadratic growth (blue)
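
The two curves in that figure follow from simple differentiation, since the growth rate of GDP is the time derivative of log GDP. Under the exponential assumption, Y(t) = Y₀ · e^(g·t), so d ln Y/dt = g, a constant. Under the quadratic assumption implied by fitting a straight line to the square root of GDP, Y(t) = (a + b·t)², so ln Y = 2 · ln(a + b·t) and d ln Y/dt = 2b/(a + b·t), which falls steadily toward zero as t grows (a and b are just the intercept and slope of the fitted line).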

Fitting one trend line doesn’t necessarily mean much, so to test which growth pattern is more plausible, I fit each trend using only the data through 1986 and then used those trends to predict current RGDP. Actual annualized RGDP in 2016 Q2 was $16.525 trillion. Assuming exponential growth and fitting the data through 1986 predicts a much higher value of $23.353 trillion. A quadratic trend understates actual growth but comes much closer, with a prediction of $14.670 trillion. I then did the same experiment using data through 1996 and through 2006 and plotted the results

Actual GDP (light blue) vs exponential trend through 1986 (gray), 1996 (yellow), 2006 (dark blue) and present (orange)
Actual GDP (light blue) vs quadratic trend through 1986 (gray), 1996 (yellow), 2006 (dark blue) and present (orange)

From these graphs, it appears that assuming quadratic growth, which implies a decreasing growth rate over time, would have done much better at predicting future GDP. Notice especially how close the prediction would have been from simply extrapolating the quadratic trend fit through 2006 forward by ten years. Had the economy stayed on its 2006 trend through the present, RGDP in the last quarter would have been $16.583 trillion. It was actually $16.525 trillion, a difference of about $58 billion (off by roughly a third of a percent). That is an astonishingly good prediction for a ten-year-ahead forecast. (Just for fun, extending the present trend to 2026 Q2 predicts $20.200 trillion. Place your bets now.)
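
For anyone who wants to replicate this exercise, here is a rough sketch in Python (again assuming the FRED series GDPC1; the exact dollar figures will depend on the data vintage and sample you use):

```python
import numpy as np
import pandas as pd
import pandas_datareader.data as web

# Quarterly real GDP from FRED
gdp = web.DataReader('GDPC1', 'fred', start='1947-01-01')['GDPC1']

# Time measured in quarters from the start of the sample
t = np.arange(len(gdp))

def trend_prediction(transform, inverse, cutoff):
    """Fit a straight line to transform(GDP) using data through 'cutoff',
    then extrapolate over the full sample and undo the transform."""
    mask = (gdp.index <= cutoff).values
    slope, intercept = np.polyfit(t[mask], transform(gdp[mask].values), 1)
    return pd.Series(inverse(intercept + slope * t), index=gdp.index)

# Exponential growth: linear trend in log(GDP).
# Quadratic growth: linear trend in sqrt(GDP).
exp_pred = trend_prediction(np.log, np.exp, '2006-12-31')
quad_pred = trend_prediction(np.sqrt, np.square, '2006-12-31')

print(gdp.iloc[-1], exp_pred.iloc[-1], quad_pred.iloc[-1])
```

Running the same function with cutoffs in 1986 and 1996 reproduces the comparison shown in the plots above.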

Of course, this analysis doesn’t prove anything. It could just be a coincidence that the quadratic growth theory happens to fit better than the exponential. But I think it does show that we need to be careful in interpreting the Great Recession.

Using the above pictures, I can come up with two plausible stories. In the first, the Great Recession represents a significant deviation from trend. There was no fundamental reason why we needed to leave the path we were on in 2006, and we should do everything we can to get back to that path. Whether that comes through demand-side monetary or fiscal stimulus or through supply-side reforms like cutting regulation and government intervention, I will leave aside. But we should do something.

The other story is a bit more pessimistic. In this version, economic growth had been trending downwards for decades. We made a valiant effort to prop up growth on the strength of the internet in the late 1990s and housing in the early 2000s. But these gains were illusory, generated by unsustainable bubbles waiting to be popped rather than true economic progress. The recession, far from an unnecessary catastrophe, was an essential correction required for the economy to reorganize resources and adapt to the low growth reality. Enacting policies that attempt to restore previous growth rates will only fuel new bubbles to replace the old ones, paving the way for future recessions. Doing something will only make things worse.

Which story is more plausible? I wish I knew.