*Part 12 in a series of posts on modern macroeconomics. This post draws inspiration from Axel Leijonhufvud’s distinction between models and theories in order to argue that the current procedure of performing macroeconomic research is flawed.*

In a short 1997 article, Axel Leijonhufvud gives an interesting argument that economists too often fail to differentiate between models and theories. In his words,

For many years now, ‘model’ and ‘theory’ have been widely used as interchangeable terms in the profession. I want to plead the case that there is a useful distinction to be made. I propose to conceive of economic ‘theories’ as sets of beliefs about the economy and how it functions. They refer to the ‘real world’ – that curious expression that economists use when it occurs to them that there is one. ‘Models’ are formal but partial representations of theories. A model never encompasses the entire theory to which it refers.

Leijonhufvud (1997) – Models and Theories

The whole article is worth reading and it highlights a major problem with macroeconomic research: we don’t have a good way to determine when a theory is wrong. Standard economic theory has placed a large emphasis on models. Focusing on mathematical models ensures that assumptions are laid out clearly and that the implications of those assumptions are internally consistent. However, this discipline also constrains models to be relatively simple. I might have some grand theory of how the economy works, but even if my theory is entirely accurate, there is little chance that it can be converted into a tractable set of mathematical equations.

This discrepancy between model and theory makes it very difficult to judge the success or failure of a theory based on the success or failure of a model. As discussed in an earlier post, we begin from the premise that all models are wrong, so it shouldn’t be at all surprising when a model fails to match some feature of reality. When a model is disproven along some margin, there are two paths a researcher can take: reject the theory or modify the assumptions of the model to better fit the theory. Again quoting Leijonhufvud,

When a model fails, the conclusion nearest to hand is that some simplifying assumption or choice of empirical proxy-variable is to blame, not that one’s beliefs about the way the world works are wrong. So one looks around for a modified model that will be a better vehicle for the same theory.

Leijonhufvud (1997) – Models and Theories

A good example of this phenomenon comes from looking at the history of Keynesian theory over the last 50 years. When the Lucas critique essentially dismantled the Keynesian research program in the 1980s, economists who believed in Keynesian insights could have rejected that theory of the world. Instead, they adopted neoclassical *models* but tweaked them in ways that still produced results consistent with Keynesian *theory*. New Keynesian models today include rational expectations and microfoundations. Many hours have been spent converting Keynes’s theory into fancy mathematical language, adding (sometimes questionable) frictions into the neoclassical framework to coax it into giving Keynesian results. But how does that help us understand the world? Have any of these advances increased our understanding of economics beyond what Keynes already knew 80 years ago? I have my doubts.

Macroeconomic research today works on the assumption that any good theory can be written as a mathematical model. Within that class of theories, the gold standard is the subset tractable enough to yield analytical solutions. Restricting the set of acceptable models in this way probably helps throw out a lot of bad theories. What I worry is that it also precludes many good ones. By imposing such tight constraints on what a good macroeconomic theory is supposed to look like, we also severely constrain our vision of a theoretical economy. Almost every macroeconomic paper draws from a tiny selection of functional forms for preferences and production because any deviation quickly adds too much complexity and leads to systems of equations that are impossible to solve.

But besides restricting the set of theories, speaking in models also often makes it difficult to understand the broader logic that drove the creation of the model. As Leijonhufvud puts it, “formalism in economics makes it possible to know precisely what someone is saying. But it often leaves us in doubt about what exactly he or she is talking about.” Macroeconomics has become very good at laying out a set of assumptions and deriving the implications of those assumptions. Where it often fails is in describing how we can compare that simplified world to the complex one we live in. What is the theory that underlies the assumptions that create an economic model? Where do the model and the theory differ and how might that affect the results? These questions seem to me to be incredibly important for conducting sound economic research. They are almost never asked.