Does it “Take a Model to Beat a Model”?

Imagine being a chemist in the Middle Ages. Your colleagues are all working on fanciful tasks like trying to develop a philosopher’s stone to grant immortality or turning lead into gold. You continually point out to them that all of their attempts have failed. In fact, based on your own research, you have strong reason to believe that the path they are taking heads in completely the wrong direction. They aren’t just failing because they haven’t found the right formula to transform metal into gold, but because the task they have set out to do cannot be done. You urge them to abandon their efforts and focus on other areas, but they don’t seem to listen. Instead, their response: “Well, sure, we haven’t been able to turn lead into gold yet, but can you do any better? I don’t see any gold in your hands either.”

You can probably already see where I’m going with this metaphor. Replace chemist with economist and turning lead into gold with DSGE macroeconomics and you’ll have a good sense of what criticizing macro feels like. I don’t think it’s an exaggeration to say that every critique of macro is invariably going to be met by some form of the same counterargument. Of course the model isn’t perfect, but we’re doing our best. We’re continually adding the features to the benchmark model that you claim we are missing. Heterogeneity, financial markets, even behavioral assumptions. They’re coming. And if you’re so smart, come up with something better. It takes a model to beat a model.

I won’t deny there is some truth to this argument. Just because a model is unrealistic, just because it’s missing some feature of reality, doesn’t mean it isn’t useful. It doesn’t mean it isn’t a reasonable first step on the path to something better. The financial crisis didn’t prove that the methods of macroeconomics are wrong. Claims that nobody in the macro profession is asking interesting questions or trying to implement interesting ideas are demonstrably false. We certainly don’t want criticisms of macro to lead to less study of the topic. The questions are too important. The potential gains from solving problems like business cycles or economic growth too great. And if we don’t have anything better, why not keep pushing forward? Why not take the DSGE apparatus as far as we possibly can?

But what I, and many others, have tried to show is that the methods of macroeconomics are severely constrained by assumptions with questionable theoretical or empirical backing. The foundation that modern macroeconomics is building on is too shaky to support the kinds of improvements that we hope it will eventually make. Now, you can certainly argue that I am wrong and that DSGE models are perfectly capable of answering the questions we ask. That’s quite possible. But if I am right about the flaws in the method, then we shouldn’t need a new model to justify giving up on the current one. When somebody points out a flaw in the foundation of a building, the proper response is not to keep building until they come up with a better solution. It’s to knock the building down and focus all effort on finding that solution.

I can confidently say that if a better alternative to DSGE exists, I will not be the one to develop it. I am nowhere near smart or creative enough to do that. I don’t think any one person is. What I have been trying to do is convince others that it’s worth devoting a little bit more of our research effort to exploring other methods and to challenging the fundamental assumptions that have held a near monopoly on macroeconomic research for the last 40 years. The more people who focus on finding a model to beat the current one, the better chance we have of actually finding it. As economists, we should at the very least be open to the idea that competition is good.

Where do we start? I think agent-based simulation models offer one potential path. The key benefit of moving from a purely mathematical model to a simulation is that concerns about tractability become much less pressing. Many of the most worrying assumptions of standard macro models are made because without them the model becomes unsolvable. With an agent-based model, it is much easier to incorporate features like heterogeneity and diverse behavioral assumptions and simply let the simulation sort out what happens. Equilibrium in an agent-based model is not an assumption but an emergent result. The downside is that an ABM cannot produce nice closed-form analytical solutions. But in a world as complex as ours, restricting ourselves to questions that allow for a closed-form solution is a pretty bad idea – it’s looking for your keys under the streetlight because that’s where the light is.

Maybe even more important than developing specific models to challenge the DSGE benchmark is to try to introduce a little more humility into the modeling process. As I’ve discussed before, there isn’t much of a reason to put any stock in quantitative predictions of models that we know bear little resemblance to reality. Estimating the effects of policy to a decimal point is just not something we are capable of doing right now. Let’s stop pretending we can.

What’s Wrong With Modern Macro? Part 15: Where Do We Go From Here?

I’ve spent 14 posts telling you what’s wrong with modern macro. It’s about time for something positive. Here I hope to give a brief outline of what my ideal future of macro would look like. I will look at four current areas of research in macroeconomics outside the mainstream (some more developed than others) that I think offer a better way to do research than currently accepted methods. I will expand upon each of these in later posts.


Learning and Heterogeneous Expectations

Let’s start with the smallest deviation from current research. In Part 8 I argued that assuming rational expectations, which means that agents in the model form expectations based on a correct understanding of the environment they live in, is far too strong an assumption. To deal with that criticism, we don’t even need to leave the world of DSGE. A number of macroeconomists have explored models where agents are required to learn about how important macroeconomic variables move over time.

These kinds of models generally come in two flavors. The first is econometric learning, summarized in Evans and Honkapohja’s 2001 book Learning and Expectations in Macroeconomics, which assumes that agents in the model are no smarter than the economists who create them. They must therefore use the same econometric techniques to estimate parameters that economists do. The second approach assumes even less about the intelligence of agents by only allowing them to use simple heuristics for prediction. Based on the framework of Brock and Hommes (1997), these heuristic switching models allow agents to hold heterogeneous expectations in equilibrium, an outcome that is difficult to achieve under rational expectations but prevalent in reality. A longer post will look at these types of models in more detail soon; a stylized sketch of the switching mechanism appears below.
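Here is that sketch in Python. Everything in it – the return R, the intensity of choice BETA, the extrapolation strength G, and the simplified one-period timing – is an illustrative assumption of mine rather than a published calibration.

```python
import math
import random

random.seed(0)

R = 1.05     # gross risk-free return (assumed)
BETA = 2.0   # "intensity of choice" in the logit switching rule (assumed)
G = 1.2      # trend extrapolation strength (assumed)

# Two heuristics forecast x, the price's deviation from fundamentals:
# fundamentalists expect reversion to zero, trend-followers extrapolate.
def fundamentalist(x_prev):
    return 0.0

def trend_follower(x_prev):
    return G * x_prev

heuristics = [fundamentalist, trend_follower]
fitness = [0.0, 0.0]   # most recent squared forecast error of each rule
x_prev = 0.5           # arbitrary starting deviation

for t in range(100):
    # Logit switching: rules with smaller recent errors attract larger shares.
    weights = [math.exp(-BETA * (f - min(fitness))) for f in fitness]
    shares = [w / sum(weights) for w in weights]

    # Temporary equilibrium: the price discounts the share-weighted forecast.
    forecasts = [h(x_prev) for h in heuristics]
    x = sum(s * f for s, f in zip(shares, forecasts)) / R + random.gauss(0.0, 0.05)

    # Score each rule against the realized outcome.
    fitness = [(x - f) ** 2 for f in forecasts]
    x_prev = x

print(f"final deviation {x:.3f}; shares (fundamentalist, trend): "
      f"{shares[0]:.2f}, {shares[1]:.2f}")
```

Because the share-weighted forecast feeds back into the price, which heuristic “wins” shifts over time. Agents stay heterogeneous in equilibrium instead of collapsing onto a single rational forecast.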

Experiments

Most macroeconomic research is based on the same set of historical economic variables. There are probably more papers about the history of US macroeconomics than there are data points. Even if we include all of the countries that provide reliable economic data, that doesn’t leave us with a lot of variation to exploit. In physics or chemistry, an experiment can be run hundreds or thousands of times. In economics, we can only observe one run.

One possible solution is to design controlled experiments aimed at answering macroeconomic questions. The obvious objection to such an idea is that a lab with a few dozen people interacting can never hope to capture the complexities of a real economy. That criticism makes sense until you consider that many accepted models have only one agent. Realism has never been the strong point of macroeconomics. Experiments of course won’t be perfect, but are they worse than what we have now? John Duffy gives a nice survey of some of the recent advances in experimental macroeconomics here, which I will discuss in a future post as well.

Agent-Based Models

Perhaps the most promising alternative to DSGE macro models, an agent-based model (ABM) attempts to simulate an economy from the ground up inside a computer. In particular, an ABM begins with a group of agents that generally follow a set of simple rules. The computer then simulates the economy by letting these agents interact according to the provided rules. Macroeconomic results are obtained by simply aggregating the outcomes of individuals.
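As a bare-bones illustration of that recipe, here is a toy market in Python. It is my own invented example, not any published ABM: firms follow two rules (post a price, then nudge it in response to sales) and households follow one (buy from the cheapest of a few randomly sampled firms).

```python
import random

random.seed(1)

N_FIRMS, N_HOUSEHOLDS, T = 20, 100, 300
BUDGET = 1.0   # nominal spending per household per period (assumed)

firms = [{"price": random.uniform(0.5, 1.5), "sales": 0.0} for _ in range(N_FIRMS)]
gdp_path = []

for t in range(T):
    for firm in firms:
        firm["sales"] = 0.0

    # Household rule: sample three firms at random and spend the whole
    # budget at the cheapest one (limited search, no optimization).
    for _ in range(N_HOUSEHOLDS):
        visible = random.sample(firms, 3)
        shop = min(visible, key=lambda f: f["price"])
        shop["sales"] += BUDGET / shop["price"]   # units bought

    # Firm rule: raise the price after an above-average period, cut it
    # after a below-average one.
    mean_sales = sum(f["sales"] for f in firms) / N_FIRMS
    for firm in firms:
        firm["price"] *= 1.02 if firm["sales"] > mean_sales else 0.98

    # The macro outcome is literally the sum of individual outcomes.
    gdp_path.append(sum(f["sales"] for f in firms))

print(f"aggregate output: t=0 {gdp_path[0]:.1f} -> t={T - 1} {gdp_path[-1]:.1f}")
```

Nothing here imposes market clearing or a representative agent; whatever price dispersion and output path we observe emerge entirely from the interaction of these rules.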

I will give examples of more ABMs in future posts, but one I really like is a 2000 paper by Peter Howitt and Robert Clower. In their paper they begin with a decentralized economy that consists of shops that only trade two commodities each. Under a wide range of assumptions, they show that in most simulations of the economy, one of the commodities comes to be traded at nearly every shop. In other words, one commodity becomes money. Even more interesting, agents in the model coordinate to exploit gains from trade without needing the assumption of a Walrasian Auctioneer to clear the market. Their simple framework has since been expanded into a full-fledged model of the economy.

Empirical Macroeconomics

If you are familiar with macroeconomic research, it might seem odd that I put empirical macroeconomics as an alternative path forward. It is almost essential for every macroeconomic paper today to have some kind of empirical component. However, the kinds of empirical exercises performed in most macroeconomic papers don’t seem very useful to me. They focus on estimating parameters in order to force models that look nothing like reality to nevertheless match key moments in real data. In Part 10 I explained why that approach doesn’t make sense to me.

In 1991, Larry Summers wrote a paper called “The Scientific Illusion in Empirical Macroeconomics” where he distinguishes between formal econometric testing of models and more practical econometric work. He argues that economic work like Friedman and Schwartz’s A Monetary History of the United States, despite eschewing formal modeling and using a narrative approach, contributed much more to our understanding of the effects of monetary policy than any theoretical study. Again, I will save a longer discussion for a future post, but I agree that macroeconomic research should embrace practical empirical work rather than its current focus on theory.


The future of macro should be grounded in diversity. DSGE has had a good run. It has captivated a generation of economists with its simple but flexible setup and ability to provide answers to a great variety of economic questions. Perhaps it should remain a prominent pillar in the foundation of macroeconomic research. But it shouldn’t be the only pillar. Questioning the assumptions that lie at the heart of current models – rational expectations, TFP shocks, Walrasian general equilibrium – should be encouraged. Alternative techniques like agent-based modeling should not be pushed to the fringes, but welcomed to the forefront of the research frontier.

Macroeconomics is too important to ignore. What causes business cycles? How can we sustain strong economic growth? Why do we see periods of persistent unemployment, or high inflation? Which government or central bank policies will lead to optimal outcomes? I study macroeconomics because I want to help answer these questions. Much of modern macroeconomics seems to find its motivation instead in writing fancy mathematical models. There are other approaches – let’s set them free.

What’s Wrong With Modern Macro? Part 8: Rational Expectations Aren’t So Rational

Part 8 in a series of posts on modern macroeconomics. This post continues the criticism of the underlying assumptions of most macroeconomic models. It focuses on the assumption of rational expectations, which was described briefly in Part 2. This post will summarize many of the points I made in a longer survey paper I wrote about expectations in macroeconomics that can be found here.


What economists imagine to be rational forecasting would be considered obviously irrational by anyone in the real world who is minimally rational
Roman Frydman (2013) – Rethinking Expectations: The Way Forward for Macroeconomics, p. 148

Why Rational Expectations?

Although expectations play a large role in many explanations of business cycles, inflation, and other macroeconomic phenomena, modeling expectations has always been a challenge. Early models of expectations relied on backward-looking adaptive expectations. In other words, people form their expectations by looking at past trends and extrapolating them forward. Such a process might seem plausible, but a purely adaptive formulation of expectations creates a substantial problem in an economic model.

For example, consider a firm that needs to choose how much of a good to produce before it learns the price. If it expects the price to be high, it will want to produce a lot, and vice versa. If we assume firms expect tomorrow’s price to be the same as today’s, they will be consistently wrong. When the price is low, they expect a low price and produce a small amount. But the low supply leads to a high price in equilibrium. A smart firm would see these errors and revise its expectations in order to profit. As Muth argued in his original defense of rational expectations, “if the prediction of the theory were substantially better than the expectations of the firms, then there would be opportunities for ‘the insider’ to profit from the knowledge.” In equilibrium, these kinds of profit opportunities would be eliminated by intelligent entrepreneurs.
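A toy simulation makes the problem visible. Assume, purely for illustration, linear demand q = A − B·p and supply committed one period in advance, q = C + D·pᵉ (all parameter values below are made up):

```python
# Linear cobweb market with made-up parameters.
# Demand: q = A - B*p.  Supply, chosen before the price is known: q = C + D*p_expected.
A, B, C, D = 10.0, 1.0, 2.0, 0.8

p_star = (A - C) / (B + D)   # rational expectations forecast: the fixed point (~4.44)
p_expected = 2.0             # naive firms' initial guess

for t in range(8):
    supply = C + D * p_expected    # production is committed at the expected price
    p_actual = (A - supply) / B    # the market then clears: A - B*p = supply
    print(f"t={t}: expected {p_expected:5.2f}  actual {p_actual:5.2f}  "
          f"error {p_actual - p_expected:+6.2f}")
    p_expected = p_actual          # naive updating: tomorrow's forecast = today's price
```

The errors alternate in sign every single period, a perfectly predictable pattern. A firm that instead forecast the fixed point p_star would be exactly right, which is Muth’s point.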

The solution, proposed by Muth and popularized in macro by Lucas, was to simply assume that agents have the same model of the economy as the economist. Under rational expectations, an agent does not need to look at past data to make a forecast. Instead, their expectations are model-based and forward-looking. If an economist can detect consistent errors in an agent’s forecasting, rational expectations assumes that the agents themselves can also detect and correct those errors.
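In standard notation, rational expectations imposes that the subjective forecast coincides with the model’s true conditional expectation, which makes forecast errors unpredictable from anything the agent knows:

$$p^{e}_{t} \;=\; \mathbb{E}\left[\,p_{t} \mid \mathcal{I}_{t-1}\right] \quad\Longrightarrow\quad \mathbb{E}\left[\,p_{t} - p^{e}_{t} \mid \mathcal{I}_{t-1}\right] = 0,$$

where $\mathcal{I}_{t-1}$ is the information available when the forecast is made. Any systematic pattern in the errors, like the alternating signs above, would violate the condition on the left.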

What Does the Data Tell Us?

The intuitive defense of rational expectations is appealing and certainly rational expectations marked an improvement over previous methods. But if previous models of expectations gave agents too little cognitive ability, rational expectations models give them far too much. Rational expectations leaves little room for disagreement between agents. Each one needs to instantly jump to the “correct” model of the economy (which happens to correspond to the one created by the economist) and assume every other agent has made the exact same jump. As Thomas Sargent put it, rational expectations leads to a “communism of models. All agents inside the model, the econometrician, and God share the same model.”

The problem, as Mankiw et al. note, is that “the data easily reject this assumption. Anyone who has looked at survey data on expectations, either those of the general public or those of professional forecasters, can attest to the fact that disagreement is substantial.” Branch (2004) and Carroll (2003) offer further evidence that heterogeneous expectations play an important role in forecasting. Another possible explanation for disagreement in forecasts is that agents have access to different information. Even if each agent knew the correct model of the economy, access to private information could lead to different predictions. Mordecai Kurz has argued forcefully that disagreement stems not from private information but from different interpretations of the same information.

Experimental evidence also points to heterogeneous expectations. In a series of “learning to forecast” experiments, Cars Hommes and coauthors have shown that when agents are asked to forecast the price of an asset generated by an unknown model, they appear to resort to simple heuristics that often differ substantially from the rational expectations forecast. Surveying the empirical evidence surrounding the assumptions of macroeconomics as a whole, John Duffy concludes “the evidence to date suggests that human subject behavior is often at odds with the standard micro-assumptions of macroeconomic models.”

“As if” Isn’t Enough to Save Rational Expectations

In response to concerns about the assumptions of economics, Milton Friedman offered a powerful defense. He agreed that it was ridiculous to assume that agents literally perform complex calculations when making economic decisions, but argued that this is no objection to assuming they behave as if they did. Famously, he gave the analogy of an expert billiard player. Nobody would believe that the player plans his shots using mathematical equations to determine their exact placement, and yet a physicist who assumed he made those calculations could provide an excellent model of his behavior.

The same logic applies in economics. Agents who make forecasts using incorrect models should be driven out as they are outperformed by those with better models, until only the rational expectations forecast remains. Except this argument only works if rational expectations actually is the best forecast. As soon as we admit that people use various models to forecast, there is no guarantee it will be. Even if some agents know the correct model, they cannot predict the path of economic variables unless they are sure that others are also using that model. Using rational expectations might be the best strategy when others are also using it, but if others are using incorrect models, it may be optimal for me to use an incorrect model as well. In game theory terms, rational expectations is a Nash equilibrium, but not a dominant strategy (Guesnerie 2010). The sketch below illustrates the point.
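Continuing the toy cobweb market from above (same made-up parameters), suppose 90% of firms stick with the naive rule while the remaining 10% can pick any forecast they like. Their best response is not the rational expectations fixed point but the price implied by everyone else’s naive behavior:

```python
# Same illustrative cobweb market as before; now 90% of firms forecast naively.
A, B, C, D = 10.0, 1.0, 2.0, 0.8
p_star = (A - C) / (B + D)   # the rational expectations fixed point

p_prev = 2.0
for t in range(6):
    # If the sophisticated 10% forecast F, the market clears at
    #   p = (A - C - D*(0.9*p_prev + 0.1*F)) / B,
    # and their error-free best response solves F = p:
    f_best = (A - C - 0.9 * D * p_prev) / (B + 0.1 * D)
    p_actual = (A - C - D * (0.9 * p_prev + 0.1 * f_best)) / B
    print(f"t={t}: fixed-point forecast error {p_actual - p_star:+.2f}, "
          f"best-response error {p_actual - f_best:+.2f}")
    p_prev = p_actual
```

As long as the naive majority keeps the price oscillating, forecasting p_star produces persistent errors while the best response tracks the actual price exactly. Forecasting “rationally” only becomes optimal once everyone else does it too.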

Still, Friedman’s argument implies that we shouldn’t worry too much about the assumptions underlying our model as long as it provides predictions that help us understand the world. Rational expectations models also fail in this regard. Even after including many ad hoc fixes to standard models – wage and price stickiness, investment adjustment costs, inflation indexing, additional shocks – DSGE models with rational expectations still forecast poorly, losing out to simpler reduced-form vector autoregressions (more on this in future posts).

Despite these criticisms, we still need an answer to the Lucas Critique. We don’t want agents to ignore information that could help them to improve their forecasts. Rational expectations ensures they do not, but it is far too strong. Weaker assumptions on how agents use the information to learn and improve their forecasting model over time retain many of the desirable properties of rational expectations while dropping some of its less realistic ones. I will return to these formulations in a future post (but read the paper linked in the intro – and here again – if you’re interested).

 

What Does It Mean to Be Rational?

One of the most basic assumptions that underlies economic analysis is that people are rational. Given a set of possible choices, an individual in an economic model always chooses the one they think will give them the largest net benefit. This assumption has come under attack by many within the profession as well as by outsiders.

Some of these criticisms are easy to reject. For example, it is common for non-economists to make the jump from rationality to selfishness and denounce economics for assuming all humans only care about themselves. Similar criticisms fault economics for being too materialistic when real people value intangible goods as well. Neither of these arguments hits the mark. When an economist talks of rationality, they mean it in the broadest possible way. If donating to charity, giving gifts, or going to poor countries to build houses matters to you, economics does not say you are wrong to choose those things. Morality or duty can have just as powerful an impact on your choices as material desires. Economics makes no distinction between these categories. You choose what you prefer. The reason you prefer it is irrelevant to an economist.

More thoughtful criticisms argue that even after defining preferences correctly, people sometimes make decisions that actively work against their own interests. The study of these kinds of issues with decision-making has come to be called “behavioral economics.” One of the most prominent behavioral economists, Richard Thaler, likes to use the example of the Herzfeld Caribbean Basin Fund. Despite its NASDAQ ticker CUBA, the fund has little to do with the country Cuba. Nevertheless, when Obama announced an easing of Cuban trade restrictions in 2014, the stock price increased 70%. A more recent example occurred when investors swarmed to Nintendo stock after the release of Pokemon Go before realizing that Nintendo doesn’t even make the game.

But do these examples show that people are irrational? I don’t think so. There needs to be a distinction between being rational and being right. Making mistakes doesn’t mean that your original goal wasn’t to maximize your own utility; it just means that the path you chose was not the best way to accomplish that goal. To quote Ludwig von Mises, “human action is necessarily always rational” because “the ultimate end of action is always the satisfaction of some desires” (Human Action, p. 18). He gives the example of doctors 100 years ago trying to treat cancer. Although their methods may appear irrational by modern standards, those methods were what the doctors considered best given the knowledge available at the time. Mises even applies the same logic to emotional actions. Rather than call emotionally charged actions irrational, Mises argues that “emotions disarrange valuations” (p. 16). Again, an economist cannot differentiate between the reasons an individual chooses to act. Anger is as justifiable as deliberate calculation.

That is not to say that the behavioralists don’t have a point. They just chose the wrong word. People are not irrational. They just happen to be wrong quite often. The confusion is understandable, however, because rationality itself is not well defined. Under my definition, rationality only requires that agents make the best decision possible given their current knowledge. Often economic models go a step further and make specific (and sometimes very strong) assumptions about what that knowledge should include. Even in models with uncertainty, people are usually expected to know the probabilities of events occurring. In reality, people often face not only unknown events, but unknowable events that could never be predicted beforehand. In such a world, there is no reason to expect that even the most intelligent rational agent could make a decision that appears rational in a framework with complete knowledge.

Roman Frydman describes this point nicely:

After uncovering massive evidence that contemporary economics’ standard of rationality fails to capture adequately how individuals actually make decisions, the only sensible conclusion to draw was that this standard was utterly wrong. Instead, behavioral economists concluded that individuals are less than fully rational or are irrational
Rethinking Expectations: The Way Forward For Macroeconomics, p. 148-149

Don’t throw out rationality. Throw out the strong knowledge assumptions. Perhaps the worst of these is the assumption of “rational expectations” in macroeconomics. I will have a post soon arguing that they are not really rational at all.

What’s Wrong With Modern Macro? Part 2: The Death of Keynesian Economics – The Lucas Critique, Microfoundations, and Rational Expectations

Part 2 in a series of posts on modern macroeconomics. Part 1 covered Keynesian economics, which dominated macroeconomic thinking for around thirty years following World War II. This post will deal with the reasons for the demise of the Keynesian consensus and introduce some of the key components of modern macro.


The Death of Keynesian Economics

Milton Friedman – Wikimedia Commons

Although Keynes had contemporary critics (most notably Hayek – if you haven’t seen the Keynes vs Hayek rap videos, stop reading now and go watch them here and here), these criticisms generally remained outside of the mainstream. However, a more powerful challenger to Keynesian economics arose in the 1950s and 60s: Milton Friedman. Keynesian theory offered policymakers a set of tools that they could use to reduce unemployment. A key empirical and theoretical result was the existence of a “Phillips Curve,” which posited a tradeoff between unemployment and inflation. Keeping unemployment under control simply meant accepting slightly higher inflation.

Friedman (along with Edmund Phelps) argued that this tradeoff was an illusion. Instead, he developed the Natural Rate Hypothesis. In his own words, the natural rate of unemployment is

the level that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labour and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies, and labor availabilities, the costs of mobility, and so on.
Milton Friedman, “The Role of Monetary Policy,” 1968

If you don’t speak economics, the above essentially means that the rate of unemployment is determined by fundamental economic factors. Governments and central banks cannot do anything to influence this natural rate. Trying to exploit the Phillips Curve relationship by increasing inflation would fail because eventually workers would simply factor the inflation into their decisions, and unemployment would return to its previous rate at a higher level of prices. In the long run, printing money could do nothing to improve unemployment.
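In the textbook notation that later formalized this argument (not Friedman’s own), the expectations-augmented Phillips curve is

$$\pi_t = \pi^{e}_{t} - \beta\,(u_t - u_n), \qquad \beta > 0,$$

where $\pi_t$ is inflation, $\pi^{e}_{t}$ expected inflation, $u_t$ unemployment, and $u_n$ the natural rate. Holding $\pi^{e}_{t}$ fixed, surprise inflation buys unemployment below $u_n$; once expectations catch up so that $\pi^{e}_{t} = \pi_t$, the equation forces $u_t = u_n$ at any inflation rate, and the long-run Phillips curve is vertical.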

Friedman’s theories couldn’t have come at a better time. In the 1970s, the United States experienced stagflation – high levels of both inflation and unemployment – an impossibility in the Keynesian models but easily explained by Friedman’s theory. The first blow to Keynesian economics had been dealt. It would not survive the fight that was still to come.

The Lucas Critique

While Friedman may have provided the first blow, Robert Lucas landed the knockout punch on Keynesian economics. In a 1976 article, Lucas noted that standard economic models of the time assumed people were excessively naive. The rules that were supposed to describe their behavior were invariant to policy changes. Even when they knew about a policy in advance, they were unable to use the information to improve their forecasts, ensuring that those forecasts would be wrong. In Lucas’s words, they made “correctibly incorrect” forecasts.

The Lucas Critique was devastating for the Keynesian modeling paradigm of the time. Because these models estimated relationships between aggregate variables from past data, they could not hope to capture the changes in individual behavior that occur when policy changes. In reality, people form their own theories about the economy (however simple they may be) and use those theories to form expectations about the future. Keynesian models could not allow for this possibility.

Microfoundations

At the heart of the problem with Keynesian models prior to the Lucas critique was their failure to incorporate individual decisions. In microeconomics, economic models almost always begin with a utility maximization problem for an individual or a profit maximization problem for a firm. Keynesian economics attempted to skip these features and jump straight to explaining output, inflation, and other aggregate features of the economy. The Lucas critique demonstrated that this strategy produced some dubious results.

The obvious answer to the Lucas critique was to explicitly build up macroeconomic models from microeconomic fundamentals. Individual consumers and firms returned once again to macroeconomics. Part 3 will explore some specific microfounded models in a bit more detail. But first, I need to explain one other idea that is essential to Lucas’s analysis and to almost all of modern macroeconomics: rational expectations.

Rational Expectations

Think about a very simple economic story where a firm has to decide how much of a good to produce before they learn the price (such a model is called a “cobweb model”). Clearly, in order to make this decision a firm needs to form expectations about what the price will be (if they expect a higher price they will want to produce more). In most models before the 1960s, firms were assumed to form these expectations by looking at the past (called adaptive expectations). The problem with this assumption is that it creates predictable forecast errors.

For example, assume that all firms simply believe that today’s price will be the same as yesterday’s. When yesterday’s price was high, firms produce a lot, pushing down the price today. Tomorrow, firms expect a low price and so don’t produce very much, which pushes the price back up. A smart businessman would quickly realize that these forecasts are always wrong and try to find a better way to predict future prices.

A similar analysis led John Muth to introduce the idea of rational expectations in a 1961 paper. Essentially, Muth argued that if economic theory can predict a systematic gap between expectations and outcomes, the people inside the model should be able to predict it too. As he describes, “if the prediction of the theory were substantially better than the expectations of the firms, then there would be opportunities for ‘the insider’ to profit from the knowledge.” Since only the best firms would survive, in the end expectations will be “essentially the same as the predictions of the relevant economic theory.”

Lucas took Muth’s idea and applied it to microfounded macroeconomic models. Agents in these new models were aware of the structure of the economy and therefore could not be fooled by policy changes. When a new policy is announced, these agents can anticipate the economic changes that will occur and adjust their own behavior. Microfoundations and rational expectations formed the foundation of a new kind of macroeconomics. Part 3 will discuss the development of Dynamic Stochastic General Equilibrium (DSGE) models, which have dominated macroeconomics for the last 30 years (and then in Part 4 I promise I will actually get to what’s wrong with modern macro).