A Different Kind of Economic Modeling

In macroeconomics, research almost always follows a similar pattern. First, the economist comes up with a question. Maybe they look at data and generate some stylized facts about some aspect of the economy. Then they set out to “explain” these facts using a structural economic model (I put explain in quotes because this step usually involves stripping away everything that made the question interesting in the first place). Using their model, they can then make some predictions or do some policy analysis. Finally, they write a paper describing their model and its implications.

There is nothing inherently wrong with this approach to research. But there are some issues. The first is that every paper looks exactly the same. Every paper needs a model. Sometimes papers adapt existing models, but they need to differ enough to count as contributions in their own right (while not differing so much that they leave the narrow consensus of modern macroeconomic methodology). Rarely, if ever, is there any attempt to compare models, to evaluate their failures and successes. It’s always: previous papers missed this or that feature, while mine includes it.

This kind of iterative modeling can give the illusion of progress, but it really just represents sideways movement. The questions of macroeconomics haven’t changed much in the last 100 years. What we have done is develop more and more answers to those questions without making any progress on figuring out which of those answers is actually correct. Having thousands of answers to a question is in many ways no better than having none at all.

I don’t think it’s too much of a mystery why macroeconomics looks this way. Everybody already knows how an evaluation of our current answers to macroeconomic questions would go. The findings: we don’t know anything and all our models stink. I’d be surprised if even 10% of economists would honestly advise a policymaker to carry out the policy that their own papers suggest.

Academics still need to publish, of course, so they change the criteria that define good macroeconomic research. Rarely is a paper evaluated on how well it answers an economic question. Instead, what matters is the tool used to answer it. An empirical contribution without a model will get yawns in a macro seminar. A new mathematical contribution that uses a differential equation borrowed from the physics of heat diffusion? Mouths will be watering.

The claim is that these tools can then be used by other researchers as we get closer and closer to the truth. The reality is that other researchers do use them, but only to develop a slightly “better” tool for a paper of their own. In other words, the primary consumers of economics papers are economists who want to write papers. Widely cited papers are seen as better. Why? Because they helped a bunch of other people write their own papers? When does any of this research start to actually be helpful to people who aren’t responsible for creating it? Should we measure the quality of beef by how many cows it can feed?

Again, there is an easy explanation for why economists are the only ones who read economics papers: they would be completely unintelligible to anybody else. Reading and understanding the mechanism behind a macroeconomic paper is often a Herculean task even for a trained economist. A non-expert has no chance. There could be good reasons for this complexity. I don’t expect to be able to open a journal on quantum mechanics and get anything out of it. But there is one enormous difference between physics models and economics models. The physics ones actually work.

Economics didn’t always look this way. Read a paper by Milton Friedman or Armen Alchian. Almost no equations, much less the giant dynamic systems in models today. Does anybody think modern economic analysis is better than the kind done by those two?

The criticism of doing economics in words rather than math is that it is harder to be internally consistent. An equation has fewer interpretations than a sentence. I’m sympathetic to that argument. But I think there are better ways to add transparency to economics than by writing everything out in math that requires 20 years of school to understand. There are ways to formalize arguments other than systems of equations, ways to explain the mechanism that generates the data other than structural DSGE models.

The problem with purely verbal arguments is that you can easily lose your train of logic. Each sentence can make sense on its own but completely contradict another piece of the argument. A simultaneous system of equations can prevent this kind of mistake, but it is only one way of doing so. Computer simulations can provide the same discipline. Let’s say I have some theory about the way the world works. If I can design a computer simulation that replicates the kind of behavior I described in words, doesn’t that prove that my argument is logically consistent? It of course doesn’t mean I am right, but neither does a mathematical model. Each provides a complete framework that an outside observer can evaluate before deciding whether its assumptions provide a useful view of the world.
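To make that concrete, here is a deliberately tiny sketch of the kind of consistency check I have in mind. The verbal story, the linear demand and supply curves, and the adjustment speed are all illustrative assumptions made up for this post, not anyone’s actual model:

```python
# Verbal theory: "sellers raise prices when buyers want more than is for
# sale, and cut them when goods go unsold, so the price settles where
# supply meets demand." Encoding it forces every step to be explicit.

def demand(p):
    return max(0.0, 100.0 - 2.0 * p)   # assumed linear demand curve

def supply(p):
    return 3.0 * p                      # assumed linear supply curve

price = 5.0
for step in range(300):
    excess = demand(price) - supply(price)
    price += 0.01 * excess              # adjust toward shortage or glut

print(f"price settles at {price:.2f}")  # near 20.00, where demand equals supply
# Had the loop diverged or cycled forever, the verbal story would have
# flunked its own consistency check; that is the discipline on offer.
```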

The rest of the profession doesn’t seem to agree with me that a computer simulation and a system of equations serve the same purpose. I’m not exactly sure why. One potential worry is that it’s harder to figure out what’s actually happening in a computer simulation. With a system of equations, I can see exactly which variables affect others and the precise channel of each kind of change. In a simulation, outcomes are emergent. Maybe I develop a simulation where an increase in taxes causes output to fall. Simply looking at the rules I have given each agent for how to act might not tell me why that fall occurred. It might be some complex interaction between agents that generates the result.

That argument makes sense, but I think it only justifies keeping mathematical models rather than throwing out computer models. They serve different purposes. And computer models have their own advantages. One, which has yet to be explored in any serious way, is the potential for visual results. Imagine that the final result of an economic paper were not a long list of Greek symbols and equals signs, but rather a full moving mini economic world. Agents move around and trade with each other. Firms set prices, open, and close. Output and unemployment rise and fall. A simple version of such a model is the “sugarscape” model of Axtell and Epstein, which creates a simple world where agents search for and trade sugar in order to survive.
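To give a flavor of how little machinery such a world requires, here is a heavily stripped-down sketch in the spirit of sugarscape. The grid size, sugar levels, vision, and metabolism rates are all numbers I invented for illustration, not the parameters of the actual model:

```python
import random

# A minimal sugarscape-flavored world: agents roam a grid, harvest sugar,
# burn some each period, and die if they run out. All parameters invented.
SIZE, STEPS = 20, 50
random.seed(0)
sugar = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]
agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
           "wealth": 5, "metabolism": random.randint(1, 3)}
          for _ in range(30)]

for t in range(STEPS):
    for a in agents:
        # Look at the four neighboring cells and move to the sugariest spot.
        options = [(a["x"], a["y"])] + [
            ((a["x"] + dx) % SIZE, (a["y"] + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        a["x"], a["y"] = max(options, key=lambda c: sugar[c[0]][c[1]])
        a["wealth"] += sugar[a["x"]][a["y"]] - a["metabolism"]  # harvest, then eat
        sugar[a["x"]][a["y"]] = 0
    for r in range(SIZE):                     # sugar slowly grows back
        for c in range(SIZE):
            sugar[r][c] = min(4, sugar[r][c] + 1)
    agents = [a for a in agents if a["wealth"] > 0]  # starvation removes agents

print(f"survivors: {len(agents)}, total wealth: {sum(a['wealth'] for a in agents)}")
```

Watching the survivor count and the wealth distribution evolve on screen is exactly the kind of visual result I have in mind.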

Now imagine a much more complicated version that looked a lot more like a real economy. Rather than “seeing” the relationships between variables in an equation, I could literally watch how agents act and interact. My ideal world of economic research would involve not writing papers, but creating apps. I want to download your model on my computer and play with it: change the parameter values, apply different shocks, change the number and types of agents, and then observe what happens. Will this actually tell us anything useful about the economy? I’m not sure. But I think it’s worth a try. (I’m currently trying to do it myself; hopefully I can post a version of it here soon.)

What’s Wrong With Modern Macro? Part 15: Where Do We Go From Here?

I’ve spent 14 posts telling you what’s wrong with modern macro. It’s about time for something positive. Here I hope to give a brief outline of what my ideal future of macro would look like. I will look at four current areas of research in macroeconomics outside the mainstream (some more developed than others) that I think offer a better way to do research than currently accepted methods. I will expand upon each of these in later posts.


Learning and Heterogeneous Expectations

Let’s start with the smallest deviation from current research. In Part 8 I argued that assuming rational expectations, which means that agents in the model form expectations based on a correct understanding of the environment they live in, is far too strong an assumption. To deal with that criticism, we don’t even need to leave the world of DSGE. A number of macroeconomists have explored models where agents must learn over time how important macroeconomic variables move.

These kinds of models generally come in two flavors. The first is the econometric learning approach summarized in Evans and Honkapohja’s 2001 book, Learning and Expectations in Macroeconomics, which assumes that agents in the model are no smarter than the economists who create them. They must therefore use the same econometric techniques to estimate parameters that economists do. The second approach assumes even less about the intelligence of agents by allowing them only simple heuristics for prediction. Based on the framework of Brock and Hommes (1997), these heuristic switching models allow agents to hold heterogeneous expectations in equilibrium, an outcome that is difficult to achieve with rational expectations but prevalent in reality. A longer post will look at these types of models in more detail soon.
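As a taste of the second flavor, here is a minimal sketch in the spirit of Brock and Hommes: two forecasting rules compete, and agents migrate toward whichever has been predicting better through a logit rule. The law of motion for the variable being forecast and every parameter value are assumptions I picked just to keep the example short:

```python
import math, random

# Heuristic switching in miniature: a "naive" rule and a "trend" rule
# compete to forecast x, and agents choose between them based on past
# squared forecast errors via a discrete-choice (logit) rule.
random.seed(1)
beta = 2.0                      # intensity of choice between heuristics
b = 0.9                         # feedback from average beliefs to the outcome
x = 1.0                         # the variable agents are trying to forecast
forecasts = {"naive": 1.0, "trend": 1.0}
fitness = {"naive": 0.0, "trend": 0.0}

for t in range(100):
    # Logit shares: better recent performance attracts more agents.
    weights = {h: math.exp(beta * fitness[h]) for h in forecasts}
    total = sum(weights.values())
    shares = {h: w / total for h, w in weights.items()}

    # The realized outcome depends on the belief-weighted average forecast.
    avg_forecast = sum(shares[h] * forecasts[h] for h in forecasts)
    x_new = b * avg_forecast + random.gauss(0.0, 0.1)

    # Score each heuristic by squared error, then form next-period forecasts.
    for h in forecasts:
        fitness[h] = -(forecasts[h] - x_new) ** 2
    forecasts["trend"] = x_new + (x_new - x)   # extrapolate the latest change
    forecasts["naive"] = x_new                 # "tomorrow looks like today"
    x = x_new

print({h: round(s, 2) for h, s in shares.items()})
```

In runs of this toy, neither rule drives the other out: the shares keep both heuristics alive, which is exactly the equilibrium heterogeneity that rational expectations struggles to deliver.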

Experiments

Most macroeconomic research is based on the same set of historical economic variables. There are probably more papers about the history of US macroeconomics than there are data points. Even if we include all of the countries that provide reliable economic data, that doesn’t leave us with a lot of variation to exploit. In physics or chemistry, an experiment can be run hundreds or thousands of times. In economics, we can only observe one run.

One possible solution is to design controlled experiments aimed at answering macroeconomic questions. The obvious objection to such an idea is that a lab with a few dozen people interacting can never hope to capture the complexities of a real economy. That criticism makes sense until you consider that many accepted models only have one agent. Realism has never been the strong point of macroeconomics. Experiments of course won’t be perfect, but are they worse than what we have now? John Duffy gives a nice survey of some of the recent advances in experimental macroeconomics here, which I will discuss in a future post as well.

Agent-Based Models

Perhaps the most promising alternative to DSGE macro models, an agent-based model (ABM) attempts to simulate an economy from the ground up inside a computer. In particular, an ABM begins with a group of agents that generally follow a set of simple rules. The computer then simulates the economy by letting these agents interact according to the provided rules. Macroeconomic results are obtained by simply aggregating the outcomes of the individual agents.
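A bare-bones skeleton of the idea, with every number invented for illustration: each agent follows a single rule (spend a fraction of your cash at a randomly met partner), and the macro result is read off by adding up individual outcomes:

```python
import random

# Minimal ABM skeleton: simple individual rules, aggregate results by summing.
random.seed(42)
N = 50
cash = [100.0] * N                                  # everyone starts out equal
propensity = [random.uniform(0.5, 0.9) for _ in range(N)]  # fixed spending rules

for t in range(5000):
    i, j = random.sample(range(N), 2)               # two agents meet at random
    spending = 0.1 * propensity[i] * cash[i]
    cash[i] -= spending                             # my spending...
    cash[j] += spending                             # ...is your income

top_share = sum(sorted(cash)[-5:]) / sum(cash)      # add up individual outcomes
print(f"top 10% of agents hold {top_share:.0%} of all cash")
```

Even with a rule this trivial, something emerges that no single agent was programmed to produce: a skewed distribution of cash, despite everyone starting out identical.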

I will give more examples of ABMs in future posts, but one I really like is a 2000 paper by Peter Howitt and Robert Clower. In their paper they begin with a decentralized economy that consists of shops that each trade only two commodities. Under a wide range of assumptions, they show that in most simulations of the economy, one of the commodities comes to be traded at nearly every shop. In other words, one commodity becomes money. Even more interesting, agents in the model coordinate to exploit gains from trade without needing the assumption of a Walrasian Auctioneer to clear the market. Their simple framework has since been expanded into a full-fledged model of the economy.
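This is emphatically not Howitt and Clower’s setup, but the flavor of the positive feedback they uncover fits in a few lines: agents imitate whatever trading medium they see others use successfully, and the population tends to lock in on a single commodity. The population size, the number of goods, and the imitation rule are all my own assumptions:

```python
import random
from collections import Counter

# A toy of money emergence through positive feedback: the more widely a
# commodity is used in trade, the more agents adopt it as their medium.
random.seed(7)
N_GOODS, N_AGENTS = 5, 500
medium = [random.randrange(N_GOODS) for _ in range(N_AGENTS)]  # preferred medium

for t in range(200_000):
    i, j, k = random.sample(range(N_AGENTS), 3)
    # Agent i watches j and k trade; if both use the same medium and it
    # differs from i's own, i imitates them (successful use spreads).
    if medium[j] == medium[k] and medium[i] != medium[j]:
        medium[i] = medium[j]

print(Counter(medium).most_common())  # one good typically pulls far ahead
```

Their actual model is far richer than this, of course; the point is only the shape of the feedback loop.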

Empirical Macroeconomics

If you are familiar with macroeconomic research, it might seem odd that I list empirical macroeconomics as an alternative path forward. It is almost essential for every macroeconomic paper today to have some kind of empirical component. However, the kind of empirical exercises performed in most macroeconomic papers don’t seem very useful to me. They focus on estimating parameters in order to force models that look nothing like reality to nevertheless match key moments in real data. In Part 10 I explained why that approach doesn’t make sense to me.

In 1991, Larry Summers wrote a paper called “The Scientific Illusion in Empirical Macroeconomics” in which he distinguishes between formal econometric testing of models and more practical econometric work. He argues that work like Friedman and Schwartz’s A Monetary History of the United States, despite eschewing formal modeling in favor of a narrative approach, contributed much more to our understanding of the effects of monetary policy than any theoretical study. Again, I will save a longer discussion for a future post, but I agree that macroeconomic research should embrace practical empirical work rather than keep its current focus on theory.


The future of macro should be grounded in diversity. DSGE has had a good run. It has captivated a generation of economists with its simple but flexible setup and ability to provide answers to a great variety of economic questions. Perhaps it should remain a prominent pillar in the foundation of macroeconomic research. But it shouldn’t be the only pillar. Questioning the assumptions that lie at the heart of current models – rational expectations, TFP shocks, Walrasian general equilibrium – should be encouraged. Alternative modeling techniques like agent-based modeling should not be pushed to the fringes, but welcomed to the forefront of the research frontier.

Macroeconomics is too important to ignore. What causes business cycles? How can we sustain strong economic growth? Why do we see periods of persistent unemployment, or high inflation? Which government or central bank policies will lead to optimal outcomes? I study macroeconomics because I want to help answer these questions. Much of modern macroeconomics seems to find its motivation instead in writing fancy mathematical models. There are other approaches – let’s set them free.