*Part 10 in a series of posts on modern macroeconomics. This post deals with a common defense of macroeconomic models that “all models are wrong.” In particular, I argue that attempts to match the model to the data contradict this principle.*

In my discussion of rational expectations I talked about Milton Friedman’s defense of some of the more dubious assumptions of economic models. His argument was that even though real people probably don’t think the way the agents in our economic models do, that doesn’t mean the model is a bad explanation of their behavior.

Friedman’s argument ties into a broader point about the realism of an economic model. We don’t necessarily want our model to perfectly match every feature that we see in reality. Modeling the complexity we see in real economies is a titanic task. In order to understand, we need to simplify.

Early DSGE models embraced this sentiment. Kydland and Prescott give a summary of their preferred procedure for doing macroeconomic research in a 1991 paper, “The Econometrics of the General Equilibrium Approach to Business Cycles.” They outline five steps. First, a clearly defined research question must be chosen. Given this question, the economist chooses a model economy that is well suited to answering it. Next, the parameters of the model are calibrated based on some criteria (more on this below). Using the calibrated model, the researcher conducts quantitative experiments. Finally, results are reported and discussed.

Throughout their discussion of this procedure, Kydland and Prescott are careful to emphasize that the true model will always remain out of reach, that “all model economies are abstractions and are by definition false.” And if all models are wrong, there is no reason we would expect any model to be able to match all features of the data. An implication of this fact is that we shouldn’t necessarily choose parameters of the model that give us the closest match between model and data. The alternative, which they offered alongside their introduction to the RBC framework, is calibration.

What is calibration? Unfortunately a precise definition doesn’t appear to exist, and its use in academic papers has become somewhat slippery (many times it ends up meaning “I chose these parameters because another paper used the same ones”). But in reading Kydland and Prescott’s explanation, the essential idea is to find values for your parameters from data not directly related to the question at hand. In their words, “searching within some parametric class of economies for the one that best fits some set of aggregate time series makes little sense.” Instead, they attempt to choose parameters based on other data. They offer the example of the elasticity of substitution between labor and capital, or between labor and leisure. Both of these parameters can be obtained from studies of human behavior. By measuring how individuals and firms respond to changes in real wages, we can estimate these values before ever looking at the results of the model.
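
To make the idea concrete, here is a minimal sketch of a calibration step. The functional form and the 0.64 labor-share figure are stylized illustrations of my own, not numbers from Kydland and Prescott:

```python
# A stylized calibration step: with Cobb-Douglas production
#   Y = K**alpha * L**(1 - alpha),
# labor's share of income equals (1 - alpha) no matter what prices do.
# So instead of searching for the alpha that best fits aggregate output
# data, we read alpha off an outside measurement of the labor share.
# The 0.64 figure is illustrative, not taken from the post.

average_labor_share = 0.64  # stylized long-run average from national accounts

alpha = 1.0 - average_labor_share  # implied capital share parameter
print(f"calibrated capital share: alpha = {alpha:.2f}")
```

The point is that the number comes from a measurement made outside the model, before the model’s own time-series fit is ever examined.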

The spirit of the calibration exercise is, I think, generally correct. I agree that economic models can never hope to capture the intricacies of a real economy, and therefore if we don’t see some discrepancy between model and data we should be highly suspicious. Unfortunately, I also don’t see how calibration helps us to solve this problem. Two prominent econometricians, Lars Hansen and James Heckman, take issue with calibration. They argue:

> It is only under very special circumstances that a micro parameter such as the intertemporal elasticity of substitution or even a marginal propensity to consume out of income can be “plugged into” a representative consumer model to produce an empirically concordant aggregate model…microeconomic technologies can look quite different from their aggregate counterparts.
>
> — Hansen and Heckman (1996), “The Empirical Foundations of Calibration”

Although Kydland and Prescott and other proponents of the calibration approach wish to draw parameters from sources outside the model, it is impossible to entirely divorce those values from the model itself. Estimates of the elasticity of substitution, labor’s share of income, or any other parameter depend in large part on the lens through which they are viewed, and each model provides a different lens. To think that we can take an estimate from one environment and plug it into another appears to be too strong an assumption.

There is also another, more fundamental problem with calibration. Even if for some reason we believe we have the “correct” parameterization of our model, how can we judge its success? Thomas Sargent, in a 2005 interview, says that “after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models.” Most people, when confronted with the realization that their model doesn’t match reality, would conclude that their model was wrong. But of course we already knew that. Calibration was supposed to be the solution, but what does it solve? If we don’t have a good way to test our theories, how can we separate the good from the bad? How can we ever trust the quantitative results of a model whose parameters are likely poorly estimated and which offers no criterion by which it can be rejected? Calibration seems only to claim to be an acceptable response to the idea that “all models are wrong” without actually providing a consistent framework for quantification.
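
The complaint Lucas and Prescott were making is easy to reproduce in a few lines. This sketch is my own illustration, not theirs: simulate data whose mean differs from a “good” model’s prediction by a trivial amount, and with enough observations a likelihood-ratio test still rejects the model decisively:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20_000
# The "good" model predicts a mean of zero; the true data-generating
# process has mean 0.05 -- a tiny, economically negligible discrepancy.
data = 0.05 + rng.standard_normal(T)

# Likelihood-ratio statistic for H0: mean = 0 with known unit variance
# reduces to T * xbar**2, which is chi-square(1) under H0.
# The 5% critical value is about 3.84.
lr = T * data.mean() ** 2
print(f"LR statistic = {lr:.1f}, reject at 5%? {lr > 3.84}")
```

With enough data, any fixed discrepancy, however small, produces a rejection, which is exactly why formal tests reject “too many good models.”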

But what is the alternative? Many macroeconomic papers now employ a mix of calibrated and estimated parameters. A common strategy is to use econometric methods to search over possible parameter values for those that are most likely given the data. But to attach any significance to these parameters, we need to take our model as the truth, or at least close to it. If we estimate the parameters to match the data, we implicitly reject the “all models are wrong” philosophy, which I’m not sure is the right way forward.
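
A stripped-down version of that estimation strategy, using a simple AR(1) process in place of a full model (my own toy example, not from any of the papers discussed):

```python
import numpy as np

# Simulate data from an AR(1) process with known persistence, then
# search over candidate values for the one most likely given the data.
# Real DSGE estimation does this over many parameters at once, often
# with Bayesian methods, but the logic is the same.
rng = np.random.default_rng(0)
true_rho, sigma, T = 0.9, 1.0, 5000
x = np.zeros(T)
for t in range(1, T):
    x[t] = true_rho * x[t - 1] + sigma * rng.standard_normal()

def log_likelihood(rho):
    # Gaussian conditional log-likelihood of x[t] given x[t-1],
    # dropping constants that don't affect the maximizer
    resid = x[1:] - rho * x[:-1]
    return -0.5 * np.sum(resid**2) / sigma**2

grid = np.linspace(0.0, 0.99, 100)
rho_hat = grid[int(np.argmax([log_likelihood(r) for r in grid]))]
print(f"estimated persistence: rho = {rho_hat:.2f}")
```

Notice the implicit assumption: the likelihood is only meaningful if the AR(1) specification is (close to) the true data-generating process, which is precisely the tension with “all models are wrong.”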

And it gets worse. Since estimating a highly nonlinear macroeconomic system is almost always impossible given current computational constraints, most macroeconomic papers instead linearize the model around a steady state. So even if the model is perfectly specified, we need the additional assumption that the economy always stays close enough to this steady state for the linear approximation to provide a decent representation of the true model. That assumption often fails. Christopher Carroll shows that a log-linearized version of a simple model gives a misleading picture of consumer behavior when compared to the true nonlinear model, and that even a second-order approximation offers little improvement. This issue is especially pressing in a world where we know a clear nonlinearity is present in the form of the zero lower bound on nominal interest rates, and once again we get “unpleasant properties” when we apply linear methods.
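
The zero-lower-bound case makes the failure easy to see even without a full model. In this toy sketch (my own illustration, not Carroll’s example), the true policy rate is the shadow rate truncated at zero, while a linear approximation taken around a steady state with positive rates simply follows the shadow rate below zero:

```python
# A kink that linearization cannot capture: the actual policy rate is
# bounded below by zero, but a first-order approximation around a
# steady state where the bound never binds ignores the bound entirely.
def true_rate(shadow_rate):
    return max(0.0, shadow_rate)

def linearized_rate(shadow_rate):
    # the linearized rate moves one-for-one with the shadow rate,
    # even into negative territory the true rate can never reach
    return shadow_rate

for s in (0.04, 0.01, -0.02):
    print(f"shadow={s:+.2f}  true={true_rate(s):+.2f}  linear={linearized_rate(s):+.2f}")
```

Away from the bound the two coincide exactly; once the shadow rate turns negative, the linear model is not just imprecise but qualitatively wrong.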

Anyone trying to do quantitative macroeconomics faces a rather unappealing choice. In order to make any statement about the magnitudes of economic shocks or policies, we need a method to discipline the numerical behavior of the model. If our ultimate goal is to explain the real world, it makes sense to want to use data to provide this discipline. And yet we know our model is wrong. We know that the data we see was *not* generated by anything close to the simple models that enable analytic (or numeric) tractability. We want to take the model seriously, but not *too* seriously. We want to use the data to discipline our model, but we don’t want to reject a model because it doesn’t fit *all* features of the data. How do we determine the right balance? I’m not sure there is a good answer.