## What’s Wrong With Modern Macro? Part 10 All Models are Wrong, Except When We Pretend They Are Right

Part 10 in a series of posts on modern macroeconomics. This post deals with a common defense of macroeconomic models that “all models are wrong.” In particular, I argue that attempts to match the model to the data contradict this principle.

In my discussion of rational expectations I talked about Milton Friedman’s defense of some of the more dubious assumptions of economic models. His argument was that even though real people probably don’t really think the way the agents in our economic models do, that doesn’t mean the model is a bad explanation of their behavior.

Friedman’s argument ties into a broader point about the realism of an economic model. We don’t necessarily want our model to perfectly match every feature that we see in reality. Modeling the complexity we see in real economies is a titanic task. In order to understand, we need to simplify.

Early DSGE models embraced this sentiment. Kydland and Prescott summarize their preferred procedure for macroeconomic research in a 1991 paper, “The Econometrics of the General Equilibrium Approach to Business Cycles.” They outline five steps. First, a clearly defined research question must be chosen. Given this question, the economist chooses a model economy well suited to answering it. Next, the parameters of the model are calibrated according to some criteria (more on this below). Using the calibrated model, the researcher conducts quantitative experiments. Finally, results are reported and discussed.

Throughout their discussion of this procedure, Kydland and Prescott are careful to emphasize that the true model will always remain out of reach, that “all model economies are abstractions and are by definition false.” And if all models are wrong, there is no reason we would expect any model to be able to match all features of the data. An implication of this fact is that we shouldn’t necessarily choose parameters of the model that give us the closest match between model and data. The alternative, which they offered alongside their introduction to the RBC framework, is calibration.

What is calibration? Unfortunately a precise definition doesn’t appear to exist and its use in academic papers has become somewhat slippery (many times it ends up meaning “I chose these parameters because another paper used the same ones”). But in reading Kydland and Prescott’s explanation, the essential idea is to find values for your parameters from data that is not directly related to the question at hand. In their words, “searching within some parametric class of economies for the one that best fits some set of aggregate time series makes little sense.” Instead, they choose parameters based on other data. They offer the example of the elasticity of substitution between labor and capital, or between labor and leisure. Both of these parameters can be obtained from microeconomic studies of human behavior. By measuring how individuals and firms respond to changes in real wages, we can get estimates for these values before ever looking at the results of the model.
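To make the procedure concrete, here is a toy calibration in the Kydland-Prescott spirit, sketched in Python. Every number is an illustrative stand-in for a long-run fact one might take from outside data, not an actual estimate:

```python
# Toy calibration of a Cobb-Douglas production block.  Each parameter is
# pinned down by a long-run fact taken from outside the model, never by
# fitting the model to the business-cycle data it is meant to explain.
# All three "facts" below are illustrative stand-ins.

labor_share = 0.64        # average labor share of income
capital_output = 3.0      # annual capital-output ratio
depreciation = 0.10       # annual depreciation rate

# With Cobb-Douglas production, capital's exponent equals capital's
# income share:
alpha = 1.0 - labor_share

# The steady state of the household's Euler equation then pins down the
# discount factor: 1/beta = 1 + MPK - depreciation.
mpk = alpha / capital_output            # marginal product of capital, alpha * Y/K
beta = 1.0 / (1.0 + mpk - depreciation)

print(f"alpha = {alpha:.2f}, beta = {beta:.4f}")
```

Note that the business-cycle properties of the resulting model are left completely free: nothing in this exercise tunes the parameters to match the fluctuations the model is supposed to explain.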

The spirit of the calibration exercise is, I think, generally correct. I agree that economic models can never hope to capture the intricacies of a real economy and therefore if we don’t see some discrepancy between model and data we should be highly suspicious. Unfortunately, I also don’t see how calibration helps us to solve this problem. Two prominent econometricians, Lars Hansen and James Heckman, take issue with calibration. They argue

It is only under very special circumstances that a micro parameter such as the intertemporal elasticity of substitution or even a marginal propensity to consume out of income can be “plugged into” a representative consumer model to produce an empirically concordant aggregate model…microeconomic technologies can look quite different from their aggregate counterparts.
Hansen and Heckman (1996) – The Empirical Foundations of Calibration

Although Kydland and Prescott and other proponents of the calibration approach wish to draw parameters from sources outside the model, it is impossible to entirely divorce the values from the model itself. The estimation of elasticity of substitution or labor’s share of income or any other parameter of the model is going to depend in large part on the lens through which these parameters are viewed and each model provides a different lens. To think that we can take an estimate from one environment and plug it into another would appear to be too strong an assumption.
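A toy example makes Hansen and Heckman’s point vivid. Suppose two households have fixed, perfectly measured micro marginal propensities to consume. The “aggregate MPC” a representative-agent model would plug in is not a structural parameter at all; it shifts whenever the income distribution shifts, even though no individual changed their behavior (all numbers invented for illustration):

```python
# A toy version of Hansen and Heckman's warning.  Two households have
# fixed micro marginal propensities to consume (MPCs).  The "aggregate
# MPC" a representative-agent model would plug in is not structural:
# it moves whenever the income distribution moves, even though neither
# household changed its behavior.  All numbers are invented.

mpc = {"low_income": 0.9, "high_income": 0.5}   # fixed micro parameters

def aggregate_mpc(incomes):
    """Aggregate consumption response implied by summing micro behavior."""
    total_c = sum(mpc[h] * y for h, y in incomes.items())
    return total_c / sum(incomes.values())

equal = {"low_income": 50.0, "high_income": 50.0}
unequal = {"low_income": 20.0, "high_income": 80.0}   # same total income

print(aggregate_mpc(equal))    # 0.7
print(aggregate_mpc(unequal))  # 0.58
```

Plugging either number into a representative consumer and treating it as a deep parameter gets the policy experiments wrong as soon as the distribution moves.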

There is also another, more fundamental problem with calibration. Even if for some reason we believe we have the “correct” parameterization of our model, how can we judge its success? Thomas Sargent, in a 2005 interview, says that “after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models.” Most people, when confronted with the realization that their model doesn’t match reality, would conclude that their model was wrong. But of course we already knew that. Calibration was the solution, but what does it solve? If we don’t have a good way to test our theories, how can we separate the good from the bad? How can we ever trust the quantitative results of a model whose parameters are likely poorly estimated and which offers no criterion by which it can be rejected? Calibration seems only to offer an acceptable response to the idea that “all models are wrong” without actually providing a consistent framework for quantification.

But what is the alternative? Many macroeconomic papers now employ a mix of calibrated and estimated parameters. A common strategy is to use econometric methods to search over all possible parameter values in order to find those that are most likely given the data. But in order to attach any significance to these estimates, we need to take our model as the truth, or at least as close to it. If we estimate the parameters to match the data, we implicitly reject the “all models are wrong” philosophy, which I’m not sure is the right way forward.
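For a sense of what this estimation strategy looks like in its simplest possible form, here is a sketch of a likelihood search for the single parameter of an AR(1) process. The point to notice is that interpreting the estimate requires treating the AR(1) as the true data-generating process, exactly the tension described above:

```python
import numpy as np

# A minimal sketch of "searching over parameter values for those most
# likely given the data": grid-search maximum likelihood for an AR(1),
# y_t = rho * y_{t-1} + eps_t.  The estimate is only meaningful if we
# treat the AR(1) as the true data-generating process.

rng = np.random.default_rng(0)
rho_true, n = 0.8, 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()

def cond_loglik(rho, y):
    """Gaussian log-likelihood conditional on y[0], unit innovation variance."""
    resid = y[1:] - rho * y[:-1]
    return -0.5 * np.sum(resid**2)

grid = np.linspace(-0.99, 0.99, 199)
rho_hat = grid[np.argmax([cond_loglik(r, y) for r in grid])]
print(f"estimated rho = {rho_hat:.2f}")   # close to the true 0.8
```

Here the procedure works because the data really were generated by the assumed model. For an actual economy, that is precisely what we cannot claim.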

And it gets worse. Since estimating a highly nonlinear macroeconomic system is almost always infeasible given current computational constraints, most macroeconomic papers instead linearize the model around a steady state. So even if the model is perfectly specified, we also need the additional assumption that the economy always stays close enough to this steady state that the linear approximation provides a decent representation of the true model. That assumption often fails. Christopher Carroll shows that a log-linearized version of a simple model gives a misleading picture of consumer behavior when compared to the true nonlinear model, and that even a second-order approximation offers little improvement. This issue is especially pressing in a world where we know a clear nonlinearity is present in the form of the zero lower bound on nominal interest rates, and once again we get “unpleasant properties” when we apply linear methods.
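Carroll’s demonstration concerns a consumption model, but the basic mechanism is easy to illustrate with something much simpler. The sketch below (a Solow-style capital transition with made-up parameters, not Carroll’s model) compares the true nonlinear dynamics with the linearization around the steady state, starting far from that steady state:

```python
# Illustration of how a linearization around the steady state degrades
# when the economy starts far from it.  This is a Solow-style capital
# transition with invented parameters, used only to show the mechanism.

s, alpha, delta = 0.2, 0.36, 0.1           # illustrative parameters

k_star = (s / delta) ** (1 / (1 - alpha))  # steady state: s*k^a = delta*k
slope = alpha * s * k_star ** (alpha - 1) + 1 - delta  # derivative at k_star

def exact(k):   return s * k**alpha + (1 - delta) * k
def linear(k):  return k_star + slope * (k - k_star)

k_e = k_l = 0.1 * k_star    # start far below the steady state
errors = []
for _ in range(50):
    k_e, k_l = exact(k_e), linear(k_l)
    errors.append(abs(k_l - k_e) / k_e)

print(f"max relative error along the path: {max(errors):.1%}")
```

Near the steady state the two paths agree almost perfectly; along the transition from far away, the linear path overshoots the true one substantially, which is the same qualitative failure Carroll documents for consumption.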

Anyone trying to do quantitative macroeconomics faces a rather unappealing choice. In order to make any statement about the magnitudes of economic shocks or policies, we need a method to discipline the numerical behavior of the model. If our ultimate goal is to explain the real world, it makes sense to want to use data to provide this discipline. And yet we know our model is wrong. We know that the data we see was not generated by anything close to the simple models that enable analytic (or numeric) tractability. We want to take the model seriously, but not too seriously. We want to use the data to discipline our model, but we don’t want to reject a model because it doesn’t fit all features of the data. How do we determine the right balance? I’m not sure there is a good answer.

## What’s Wrong With Modern Macro? Part 8 Rational Expectations Aren't so Rational

Part 8 in a series of posts on modern macroeconomics. This post continues the criticism of the underlying assumptions of most macroeconomic models. It focuses on the assumption of rational expectations, which was described briefly in Part 2. This post will summarize many of the points I made in a longer survey paper I wrote about expectations in macroeconomics that can be found here.

What economists imagine to be rational forecasting would be considered obviously irrational by anyone in the real world who is minimally rational
Roman Frydman (2013) – Rethinking Economics p. 148

### Why Rational Expectations?

Although expectations play a large role in many explanations of business cycles, inflation, and other macroeconomic data, modeling expectations has always been a challenge. Early models of expectations relied on backward looking adaptive expectations. In other words, people form their expectations by looking at past trends and extrapolating them forward. Such a process might seem plausible, but there is a substantial problem with using a purely adaptive formulation of expectations in an economic model.

For example, consider a firm that needs to choose how much of a good to produce before it learns the price. If it expects the price to be high, it will want to produce a lot, and vice versa. If we assume firms expect today’s price to be the same as tomorrow’s, they will consistently be wrong. When the price is low, they expect a low price and produce a small amount. But the low supply leads to a high price in equilibrium. A smart firm would see their errors and revise their expectations in order to profit. As Muth argued in his original defense of rational expectations, “if the prediction of the theory were substantially better than the expectations of the firms, then there would be opportunities for ‘the insider’ to profit from the knowledge.” In equilibrium, these kinds of profit opportunities would be eliminated by intelligent entrepreneurs.
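The firm’s problem described above is the classic cobweb model, and a few lines of code show just how systematically wrong naive expectations are in it (parameter values are illustrative):

```python
# A minimal cobweb sketch of the naive-expectations firm described above.
# Supply is chosen before the price is known, based on the forecast
# p_e = last period's price; demand then clears the market.  Numbers
# are illustrative.

def simulate(p0=4.0, periods=8, b=0.5, d_intercept=10.0):
    """Supply q = b * p_e; inverse demand p = d_intercept - q."""
    prices, errors = [p0], []
    for _ in range(periods):
        p_e = prices[-1]                 # naive forecast
        q = b * p_e                      # production decision
        p = d_intercept - q              # market-clearing price
        errors.append(p - p_e)           # forecast error
        prices.append(p)
    return prices, errors

prices, errors = simulate()
print([round(e, 2) for e in errors])    # errors alternate in sign every period
```

The forecast errors flip sign period after period: a low expected price produces low supply and hence a high realized price, and vice versa. Any firm that noticed this pattern could profit from it, which is exactly Muth’s argument.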

The solution, proposed by Muth and popularized in macro by Lucas, was to simply assume that agents have the same model of the economy as the economist. Under rational expectations, an agent does not need to look at past data to make a forecast. Instead, their expectations are model-based and forward-looking. If an economist can detect consistent errors in an agent’s forecasting, rational expectations assumes that the agents themselves can also detect and correct these errors.

### What Does the Data Tell Us?

The intuitive defense of rational expectations is appealing and certainly rational expectations marked an improvement over previous methods. But if previous models of expectations gave agents too little cognitive ability, rational expectations models give them far too much. Rational expectations leaves little room for disagreement between agents. Each one needs to instantly jump to the “correct” model of the economy (which happens to correspond to the one created by the economist) and assume every other agent has made the exact same jump. As Thomas Sargent put it, rational expectations leads to a “communism of models. All agents inside the model, the econometrician, and God share the same model.”

The problem, as Mankiw et al note, is that “the data easily reject this assumption. Anyone who has looked at survey data on expectations, either those of the general public or those of professional forecasters, can attest to the fact that disagreement is substantial.” Branch (2004) and Carroll (2003) offer further evidence that heterogeneous expectations play an important role in forecasting. Another possible explanation for disagreement in forecasts is that agents have access to different information. Even if each agent knew the correct model of the economy, access to private information could lead to different predictions. Mordecai Kurz has argued forcefully that disagreement stems not from private information, but from different interpretations of the same information.

Experimental evidence also points to heterogeneous expectations. In a series of “learning to forecast” experiments, Cars Hommes and coauthors have shown that when agents are asked to forecast the price of an asset generated by an unknown model, they appear to resort to simple heuristics that often differ substantially from the rational expectations forecast. Surveying the empirical evidence surrounding the assumptions of macroeconomics as a whole, John Duffy concludes “the evidence to date suggests that human subject behavior is often at odds with the standard micro-assumptions of macroeconomic models.”

### “As if” Isn’t Enough to Save Rational Expectations

In response to concerns about the assumptions of economics, Milton Friedman offered a powerful defense. He agreed that it was ridiculous to assume that agents make complex calculations when making economic decisions, but claimed that this is no argument against assuming they act as if they did. Famously, he gave the analogy of an expert billiard player. Nobody would believe that the player plans his shots using mathematical equations to determine their exact placement, and yet a physicist who assumed he made those calculations could provide an excellent model of his behavior.

The same logic applies in economics. Agents who make forecasts using incorrect models will be driven out as they are outperformed by those with better models until only the rational expectations forecast remains. Except this argument only works if rational expectations is actually the best forecast. As soon as we admit that people use various models to forecast, there is no guarantee it will be. Even if some agents know the correct model, they cannot predict the path of economic variables unless they are sure that others are also using that model. Using rational expectations might be the best strategy when others are also using it, but if others are using incorrect models, it may be optimal for me to use an incorrect model as well. In game theory terms, rational expectations is a Nash Equilibrium, but not a dominant strategy (Guesnerie 2010).
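A toy expectational-feedback game illustrates Guesnerie’s point. Suppose the realized price depends on the average forecast, say price = 3 + 0.7 × (average forecast), with all numbers invented. The rational-expectations forecast is the fixed point of that map, and it is a best response only when everyone else uses it:

```python
# Expectational feedback: the realized price depends on what everyone
# forecasts.  Illustrative numbers; the unique rational-expectations
# (RE) fixed point is p* = 3 / (1 - 0.7) = 10.

def realized_price(others_forecast):
    return 3.0 + 0.7 * others_forecast

def loss(my_forecast, others_forecast):
    """Squared forecast error, taking others' average forecast as given."""
    return (my_forecast - realized_price(others_forecast)) ** 2

p_re = 3.0 / (1.0 - 0.7)

# If everyone else forecasts the RE value, so should I:
print(round(loss(p_re, p_re), 2))                  # 0.0

# But if others naively forecast 4, the price lands at 5.8, and the
# "rational" forecast of 10 does badly, while forecasting the price
# the (wrong) crowd actually generates is exactly right:
print(round(loss(p_re, 4.0), 2))                   # 17.64
print(round(loss(realized_price(4.0), 4.0), 2))    # 0.0
```

Forecasting the RE value is a best response to itself, but against naive forecasters the best response is to track the crowd, which is the sense in which rational expectations is a Nash equilibrium rather than a dominant strategy.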

Still, Friedman’s argument implies that we shouldn’t worry too much about the assumptions underlying our model as long as it provides predictions that help us understand the world. Rational expectations models also fail in this regard. Even after including many ad hoc fixes to standard models like wage and price stickiness, investment adjustment costs, inflation indexing, and additional shocks, DSGE models with rational expectations still provide weak forecasting ability, losing out to simpler reduced form vector autoregressions (more on this in future posts).

Despite these criticisms, we still need an answer to the Lucas Critique. We don’t want agents to ignore information that could help them to improve their forecasts. Rational expectations ensures they do not, but it is far too strong. Weaker assumptions on how agents use the information to learn and improve their forecasting model over time retain many of the desirable properties of rational expectations while dropping some of its less realistic ones. I will return to these formulations in a future post (but read the paper linked in the intro – and here again – if you’re interested).

## What’s Wrong With Modern Macro? Part 7 The Illusion of Microfoundations II: The Representative Agent

Part 7 in a series of posts on modern macroeconomics. Part 6 began a criticism of the form of “microfoundations” used in DSGE models. This post continues that critique by emphasizing the flaws in using a “representative agent.” This issue has been heavily scrutinized in the past and so this post primarily offers a synthesis of material found in articles from Alan Kirman and Kevin Hoover (1 and 2).

One of the key selling points of DSGE models is that they are supposedly derived from microfoundations. Since only individuals can act, all aggregates must necessarily be the result of the interactions of these individuals. Understanding the mechanisms that lie behind the aggregates therefore seems essential to understanding the movements in the aggregates themselves. Lucas’s attack on Keynesian economics was motivated by similar logic. We can observe relationships between aggregate variables, but if we don’t understand the individual behavior that drives these relationships, how do we know if they will hold up when the environment changes?

I agree that ignoring individual behavior is an enormous problem for macroeconomics. But DSGE models do little to solve this problem.

Ideally, microfoundations would mean modeling behavior at an individual level. Each individual would then make choices based on current economic conditions, and these choices would aggregate to macroeconomic variables. Unfortunately, including enough agents to make this exercise interesting quickly becomes unmanageable in a mathematical model. As a result, “microfoundations” in a DSGE model usually means assuming that the decisions of all individuals can be summarized by the decisions of a single “representative agent.” Coordination between agents, differences in preferences or beliefs, and even the act of trading that is the hallmark of a market economy are eliminated from the discussion entirely. Although we have some vague notion that these activities are going on in the background, the workings of the model are assumed to be represented by the actions of this single agent.

So our microfoundations actually end up looking a lot closer to an analysis of aggregates than an analysis of true individual behavior. As Kevin Hoover writes, they represent “only a simulacrum of microeconomics, since no agent in the economy really faces the decision problem they represent. Seen that way, representative-agent models are macroeconomic, not microfoundational, models, although macroeconomic models that are formulated subject to an arbitrary set of criteria.” Hoover pushes back against the defense that representative agent models are merely a step towards truly microfounded models, arguing that not only would full microfoundations be infeasible but also that many economists do take the representative agent seriously on its own terms, using it both for quantitative predictions and policy advice.

But is the use of the representative agent really a problem? Even if it fails on its promise to deliver true “microfoundations,” isn’t it still an improvement over neglecting optimizing behavior entirely? Possibly, but using a representative agent offers only a superficial manifestation of individual decision-making, opening the door for misinterpretation. Using a representative agent assumes that the decisions of one agent at a macro level would be made in the same way as the decisions of millions of agents at a micro level. The theoretical basis for this assumption is weak at best.

In a survey of the representative agent approach, Alan Kirman describes many of the problems that arise when many agents are aggregated into a single representative. First, he presents the theoretical results from Sonnenschein (1972), Debreu (1974), and Mantel (1976), which show that even with strong assumptions on the behavior of individual preferences, the equilibrium that results by adding up individual behavior is not necessarily stable or unique.

The problem runs even deeper. Even if we assume aggregation results in a nice stable equilibrium, worrying results begin to arise as soon as we start to do anything with that equilibrium. One of the primary reasons for developing a model in the first place is to see how it reacts to policy changes or other shocks. Using a representative agent to conduct such an analysis implicitly assumes that the new aggregate equilibrium will still correspond to decisions of individuals. Nothing guarantees that it will. Kirman gives a simple example of a two-person economy where the representative agent’s choice makes each individual worse off. The Lucas Critique then applies here just as strongly as it does for old Keynesian models. Despite the veneer of optimization and rational choice, a representative agent model still abstracts from individual behavior in potentially harmful ways.

Of course, macroeconomists have not entirely ignored these criticisms, and models with heterogeneous agents have become increasingly popular in recent work. However, keeping track of more than one agent makes it nearly impossible to achieve useful mathematical results. The general process for using heterogeneous agents in a DSGE model, then, is to first prove that these agents can be aggregated and summarized by a set of aggregate equations. Although beginning from heterogeneity and deriving aggregation explicitly helps to ensure that the problems outlined above do not arise, it still imposes severe restrictions on the types of heterogeneity allowed. It would be an extraordinary coincidence if the restrictions that enable mathematical tractability also happened to be the ones relevant for understanding reality.

We are left with two choices. Drop microfoundations or drop DSGE. The current DSGE framework only offers an illusion of microfoundations. It introduces optimizing behavior at an aggregate level, but has difficulty capturing many of the actions essential to the workings of the market economy at a micro level. It is not a first step to discovering a way to model true microfoundations because it is not a tool well-suited to analyzing the behavior of more than one person at a time. Future posts will explore some models that are.

## Kevin Malone Economics

In one of my favorite episodes of the TV show The Office (A Benihana Christmas), dim-witted accountant Kevin Malone faces a choice between two competing office Christmas parties: the traditional Christmas party thrown by Angela, or Pam and Karen’s exciting new party. After weighing various factors (double fudge brownies…Angela), Kevin decides “I think I’ll go to Angela’s party, because that’s the party I know.” I can’t help but see parallels between Kevin’s reasoning and some of the recent discussions about the role of DSGE models in macroeconomics. It’s understandable. Many economists have poured years of their lives into this research agenda. Change is hard. But sticking with something simply because that’s the way it has always been done is not, in my opinion, a good way to approach research.

I was driven to write this post after reading a recent post by Roger Farmer, which responds to this article by Steve Keen, which is itself responding to Olivier Blanchard’s recent comments on the state of macroeconomics. Lost yet? Luckily you don’t really need to understand the entire flow of the argument to get my point. Basically, Keen argues that the assumption of economic models that the system is always in equilibrium is poorly supported. He points to physics for comparison, where Edward Lorenz showed that the dynamics underlying weather systems were not random, but chaotic – fully deterministic, but still impossible to predict. Although his model had equilibria, they were inherently unstable, so the system continuously fluctuated between them in an unpredictable fashion.

Keen’s point is that economics could easily be a chaotic system as well. Is there any doubt that the interactions between millions of people that make up the economic system are at least as complex as the weather? Is it possible that, like the weather, the economy is never actually in equilibrium, but rather groping towards it (or, in the case of multiple equilibria, towards one of them)? Isn’t there at least some value in looking for alternatives to the DSGE paradigm?
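Whether or not the economy is actually chaotic, the phenomenon Keen points to is easy to demonstrate. The logistic map (the textbook example of chaos, not an economic model) is fully deterministic, yet two trajectories starting a millionth apart soon bear no resemblance to each other:

```python
# Lorenz's insight in one line of arithmetic: the logistic map
# x' = 4x(1-x) is fully deterministic, yet trajectories that start
# almost identically soon diverge completely, making long-run
# prediction hopeless even with a perfect model.

def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-6     # nearly identical starting points
gaps = []
for _ in range(60):
    a, b = logistic(a), logistic(b)
    gaps.append(abs(a - b))

print(f"initial gap: 1e-06, largest gap reached: {max(gaps):.3f}")
```

If the economy behaves anything like this, then even a correctly specified model would be useless for prediction unless we knew the state of the system to impossible precision.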

Roger begins his reply to Keen by claiming, “we tried that 35 years ago and rejected it. Here’s why.” I was curious to read about why it was rejected, but he goes on to say that the results of these attempts to examine whether there is complexity and chaos in economics were that “we have no way of knowing given current data limitations.” Echoing this point, a survey of some of these tests by Barnett and Serletis concludes

We do not have the slightest idea of whether or not asset prices exhibit chaotic nonlinear dynamics produced from the nonlinear structure of the economy…While there have been many published tests for chaotic nonlinear dynamics, little agreement exists among economists about the correct conclusions.
Barnett and Serletis (2000) – Martingales, nonlinearity, and chaos

That doesn’t sound like rejection to me. We have two competing theories without a good way of determining which is more correct. So why should we rigidly stick to one? Best I can tell, the only answer is Kevin Malone Economics™. DSGE models are the way it has always been done, so unless there is some spectacular new evidence that an alternative does better, let’s just stay with what we know. I don’t buy it. The more sensible way forward in my opinion is to cultivate both approaches, to build up models using DSGE as well as alternatives until we have some way of choosing which is better. Some economists have taken this path and begun to explore alternatives, but they remain on the fringe and are largely ignored (I will have a post on some of these soon, but here is a preview). I think this is a huge mistake.

Since Roger is one of my favorite economists in large part because he is not afraid to challenge the orthodoxy, I was a bit surprised to read this argument from him. His entire career has been devoted to models with multiple equilibria and sunspot shocks. As far as I know (and I don’t know much so I could very well be wrong), these features are just as difficult to confirm empirically as is the existence of chaos. By spending an entire career developing his model, he has gotten it to a place where it can shed light on real issues (see his excellent new book, Prosperity for All), but its acceptance within the profession is still low. In his analysis of alternatives to standard DSGE models, Michael De Vroey writes:

Multiple equilibria is a great idea that many would like to adopt were it not for the daunting character of its implementation. Farmer’s work attests to this. Nonetheless, it is hard to resist the judgement that, for all its panache, it is based on a few, hard to swallow, coups de force…I regard Farmer’s work as the fruit of a solo exercise attesting to both the inventiveness of its performer and the plasticity of the neoclassical toolbox. Yet in the Preface of his book [How the Economy Works], he expressed a bigger ambition, “to overturn a way of thinking that has been established among macroeconomists for twenty years.” To this end, he will need to gather a following of economists working on enriching his model
De Vroey (2016) – A History of Macroeconomics from Keynes to Lucas and Beyond

In other words, De Vroey’s assessment of Roger’s work is quite similar to Roger’s own assessment of non-DSGE models: it’s not a bad idea and there’s possibly some potential there, but it’s not quite there yet. Just as I doubt Roger is ready to tear up his work and jump back in with traditional models (and he shouldn’t), neither should those looking for alternatives to DSGE give up simply because the attempts to this point haven’t found anything revolutionary.

I’m not saying all economists should abandon DSGE models. Maybe they are a simple and yet still adequate way of looking at the world. But maybe they’re not. Maybe there are alternatives that provide more insight into the way a complex economy works. Attempts to find these alternatives should not be met with skepticism. They shouldn’t have to drastically outperform current models in order to even be considered (especially considering current models aren’t doing so hot anyway). Of course there is no way any alternative model will be able to stand up to a research program that has gone through 40 years of revision and improvement (although it’s not clear how much improvement there has really been). The only way to find out if there are alternatives worth pursuing is to welcome and encourage researchers looking to expand the techniques of macroeconomics beyond equilibrium and beyond DSGE. If even a fraction of the brainpower currently being applied to DSGE models were shifted to looking for different methods, I am confident that a new framework would flourish and possibly come to stand beside or even replace DSGE models as the primary tool of macroeconomics.

At the end of the episode mentioned above, Kevin (along with everybody else in the office) ends up going to Pam and Karen’s party, which turns out to be way better than Angela’s. I can only hope macroeconomics continues to mirror that plot.

## What’s Wrong With Modern Macro? Part 6 The Illusion of Microfoundations I: The Aggregate Production Function

Part 6 in a series of posts on modern macroeconomics. Part 4 noted the issues with using TFP as a measure of technology shocks. Part 5 criticized the use of the HP filter. Although concerning, neither of these problems is necessarily insoluble. With a better measure of technology and a better filtering method, the core of the RBC model would survive. This post begins to tear down that core, starting with the aggregate production function.

In the light of the negative conclusions derived from the Cambridge debates and from the aggregation literature, one cannot help asking why [neoclassical macroeconomists] continue using aggregate production functions
Felipe and Fisher (2003) – Aggregation in Production Functions: What Applied Economists Should Know

Remember that one of the primary reasons DSGE models were able to emerge as the dominant macroeconomic framework was their supposed derivation from “microfoundations.” They aimed to explain aggregate phenomena, but stressed that these aggregates could only come from optimizing behavior at a micro level. The spirit of this idea seems like a step in the right direction.

In its most common implementations, however, “microfoundations” almost always fails to capture this spirit. The problem with trying to generate aggregate implications from individual decision-making is that keeping track of decisions from a diverse group of agents quickly becomes computationally and mathematically unmanageable. This reality led macroeconomists to make assumptions that allowed for easy aggregation. In particular, the millions of decision-makers throughout the economy were collapsed into a single representative agent that owns a single representative firm. Here I want to focus on the firm side. A future post will deal with the problems of the representative agent.

### An Aggregate Production Function

In the real world, producing any good is complicated. It not only requires an entrepreneur to have an idea for the production of a new good, but also the ability to implement that idea. It requires a proper assessment of the resources needed, the organization of a firm, hiring good workers and managers, obtaining investors and capital, properly estimating consumer demand and competition from other firms and countless other factors.

In macro models, production is simple. In many models, a single firm produces a single good, using labor and capital as its only two inputs. Of course we cannot expect the model to match many of the features of reality. It is supposed to simplify. That’s what makes it a model. But we also need to remember Einstein’s famous insight that everything should be made as simple as possible, but no simpler. Unfortunately I think we’ve gone way past that point.

### Can We Really Measure Capital?

In the broadest possible categorization, productive inputs are usually placed into three categories: land, labor, and capital. Although both land and labor vary widely in quality, making them difficult to aggregate, at least there are clear choices for their units. Adding up all of the usable land area and all of the hours worked at least gives us a crude measure of the quantity of land and labor inputs.

But what can we use for capital? Capital goods are far more diverse than land or labor, varying in size, mobility, durability, and thousands of other dimensions. There is no obvious measure that can combine buildings, machines, desks, computers, and millions of other specialized pieces of capital equipment. The easiest choice is to use the value of the goods, but which value? The price at which they were bought? An estimate of the value of the production they will generate in the future? And in what units should that value be expressed? Dollars? The labor hours needed to produce the good?

None of these seem like good options. As soon as we go to value, we are talking about units that depend on the structure of the economy itself. An hour of labor is an hour of labor no matter what the economy looks like. With capital it gets more complicated. The same computer has a completely different value in 1916 than it does in 2016 no matter which concept of value is employed. That value is probably related to its productive capacity, but the relationship is far from clear. If the value of a firm’s capital stock has increased, can it actually produce more than before?

This question was addressed in the 1950s and 60s by Joan Robinson. Here’s how she summed up the problem:

Moreover, the production function has been a powerful instrument of miseducation. The student of economic theory is taught to write O = f (L, C) where L is a quantity of labour, C a quantity of capital and O a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labour; he is told something about the index-number problem involved in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units C is measured. Before ever he does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.
Joan Robinson (1953) – The Production Function and the Theory of Capital

Sixty years later, we’ve changed the O to a Y and the C to a K, but little else. People have thought hard about how to measure capital and deal with these issues (here is a 250-page document on the OECD’s methods, for example), but the problem remains. We still take a diverse set of capital goods and try to fit them all under a common label. The so-called “Cambridge Capital Controversy” was never truly resolved; neoclassical economists simply pushed on undeterred. For a good summary of the debate and its (lack of) resolution, see this relatively non-technical paper.

### What Assumptions Allow for Aggregation?

Even if we assume that there is some unit that would allow capital to be aggregated, we still face problems when trying to use an aggregate production function. One of the leading researchers on the conditions that permit aggregation in production has been Franklin Fisher. Giving a more rigorous treatment to the capital aggregation issue, Fisher shows that capital can be aggregated only if all firms share the same constant-returns-to-scale production function with capital-augmenting technical change, differing at most by a capital-efficiency coefficient. If you’re not an economist, that condition may not mean much, but know that it is incredibly restrictive.

The problem doesn’t get much better when we move away from capital and try to aggregate labor and output. Fisher shows that aggregating these is possible only when there is no specialization in labor (everybody can do everything) and every firm is able to produce every good (so the mix of outputs can adjust freely). Other authors have derived different conditions for aggregation, but none of them appear to be any less restrictive.

Do any of these restrictions matter? Even if aggregation is not strictly possible, the criticism would have little force as long as the model came close enough. Fisher (with co-author Jesus Felipe) surveys the many arguments against aggregate production functions and addresses many of these counterarguments, ultimately concluding

The aggregation problem and its consequences, and the impossibility of testing empirically the aggregate production function…are substantially more serious than a mere anomaly. Macroeconomists should pause before continuing to do applied work with no sound foundation and dedicate some time to studying other approaches to value, distribution, employment, growth, technical progress etc., in order to understand which questions can legitimately be posed to the empirical aggregate data.
Felipe and Fisher (2003) – Aggregation in Production Functions: What Applied Economists Should Know

As far as I know, these concerns have not been addressed in a serious way. The aggregate production function continues to be a cornerstone of macroeconomic models. If it is seriously flawed, almost all of the work done in the last forty years becomes suspect.

But don’t worry, the worst is still to come.

## What’s Wrong With Modern Macro? Part 5 Filtering Away All Our Problems

Part 5 in a series of posts on modern macroeconomics. In this post, I will describe some of the issues with the Hodrick-Prescott (HP) filter, a tool used by many macro models to isolate fluctuations at business cycle frequencies.

A plot of real GDP in the United States since 1947 looks like this

Although some recessions are easily visible (like 2009), the business cycle is not especially easy to see. The long-run growth trend dominates the higher-frequency business cycle fluctuations. The trend matters, but if we want to isolate fluctuations at business cycle frequencies, we need some way to remove it. The HP filter is one method of doing that. Unlike the trends in my last post, which simply fit a line to the data, the HP filter allows the trend to change over time. The exact equation is a bit complex, but here’s the link to the Wikipedia page if you’d like to see it. Plotting the HP trend against actual GDP looks like this

By removing this trend, we can isolate deviations from trend. Usually we also want to take a log of the data in order to be able to interpret deviations in percentages. Doing that, we end up with

The last picture is what the RBC model (and most other DSGE models) try to explain. By eliminating the trend, the HP filter focuses exclusively on short-term fluctuations. This shift in focus may be an interesting exercise, but it eliminates much of what makes business cycles important and interesting. Look at the HP trend in recent years. Although we do see a sharp drop marking the 2008-09 recession, the trend quickly adjusts, so the slow recovery does not appear at all in the HP filtered data. By this measure, the Great Recession is actually one of the smaller movements. But what causes the change in the trend? The model has no answer. The RBC model and other DSGE models that explain HP filtered data cannot hope to explain long periods of slow growth because they begin by filtering them away.
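The mechanics of the filter are simple enough to sketch directly. Below is a minimal implementation in Python with NumPy: the function is my own (not a library call) and solves the filter's penalized least-squares problem in closed form, applied to a made-up "log GDP" series with a bending trend and a cyclical wiggle.

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Split a series into trend and cycle with the Hodrick-Prescott filter.

    The trend tau minimizes sum((y - tau)^2) + lamb * sum((second
    differences of tau)^2), which has the closed-form solution
    tau = (I + lamb * D'D)^{-1} y, where D is the second-difference matrix.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Build the (n-2) x n second-difference matrix D.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)
    return trend, y - trend

# Stylized quarterly "log GDP": slow growth plus a business-cycle wiggle
# (a sine with a period of about six years).
t = np.arange(200)
log_gdp = 0.008 * t + 0.02 * np.sin(t / 4.0)
trend, cycle = hp_filter(log_gdp)
```

With λ = 1600, Hodrick and Prescott's standard choice for quarterly data, most of the six-year wiggle lands in `cycle` while `trend` tracks the slow component; raising λ makes the trend stiffer, lowering it lets the trend chase the data.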

Let’s go back one more time to these graphs

Notice that these graphs show the fit of the model to HP filtered data. Here’s what they look like when the filter is removed

Not so good. And it gets worse. Remember we have already seen in the last post that TFP itself was highly correlated with each of these variables. If we remove the TFP trend, the picture changes to

Yeah. So much for that great fit. Roger Farmer describes the deceit of the RBC model brilliantly in a blog post called Real Business Cycle and the High School Olympics. He argues that when early RBC modelers noticed a conflict between their model and the data, they didn’t change the model. They changed the data. They couldn’t clear the Olympic bar of explaining business cycles, so they lowered the bar and explained only the “wiggles.” But, as Farmer contends, the important question lies in the trends that the HP filter assumes away.

Maybe the most convincing sign that something is wrong with this method of measuring business cycles is that estimates of the cost of business cycles based on it are tiny. Based on a calculation by Lucas, Ayse Imrohoroglu explains that under the assumptions of the model, an individual would need to be given only $28.96 per year to be fully compensated for the risk of business cycles. In other words, completely eliminating recessions is worth about as much as a couple of decent meals. To anyone who has lived in a real economy this number is obviously ridiculous, but when business cycles are relegated to small deviations from a moving trend, there isn’t much to be gained from eliminating the wiggles.

There are more technical reasons to be wary of the HP filter that I don’t want to get into here. A recent paper called “Why You Should Never Use the Hodrick-Prescott Filter” by James Hamilton, a prominent econometrician, goes into many of the problems with the HP filter in detail. He opens by saying “The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process.” If you don’t trust me, at least trust him.

TFP doesn’t measure productivity. HP filtered data doesn’t capture business cycles. So the RBC model is 0/2. It doesn’t get much better from here.

## Was the Great Recession a Return to Trend?

As I was putting together some graphs for my post on HP filtering (coming soon), I started playing around with some real GDP data. First, as many have done before, I fit a simple linear trend to the log of real GDP data since 1947

Plotting the data in this way, the Great Recession is clearly visible and seems to have caused a permanent reduction in real GDP, as the economy does not appear to be returning to its previous trend. Taking the log of GDP is convenient because it allows us to interpret changes as percentage increases. The linear trend therefore implies a constant percentage growth rate for the economy. But is there any reason we should expect the economy to grow at a constant rate over time? What if growth is not exponential at all, and instead proceeds at a slowly decreasing rate? Here’s what happens when we take a square root of real GDP instead of a natural log

I have no theoretical justification for taking the square root of RGDP data, but it fits a linear trend remarkably well (slightly better than log data). And here the Great Recession is gone. Instead, we see a ten-year period above trend from 1997-2007 followed by a sharp correction. If economic growth follows a quadratic pattern rather than an exponential one, the growth rate is decreasing over time rather than remaining constant. Here’s an illustration of what growth rates would look like in each scenario

Fitting one trend line doesn’t necessarily mean anything, so to better test which growth pattern seemed more plausible, I decided to fit each trend through 1986 and then use those trends to predict current RGDP. Actual annualized RGDP in 2016 Q2 was $16.525 trillion. Assuming exponential growth and fitting the data through 1986 predicts a much higher value of $23.353 trillion. A quadratic trend understates actual growth, but comes much closer, with a prediction of $14.670 trillion. I then did the same experiment using data through 1996 and through 2006 and plotted the results

From these graphs, it appears that assuming quadratic growth, which implies a decreasing growth rate over time, would have done much better in predicting future GDP. Notice especially how close a GDP prediction would have been by simply extrapolating the quadratic trend forward by ten years in 2006. Had the economy stayed on its 2006 trend through the present, RGDP in the last quarter would have been $16.583 trillion. It was actually $16.525 trillion, for a difference of about $58 billion (off by three tenths of a percent). That is an astonishingly good prediction for a ten-year-ahead forecast. (Just for fun, extending the present trend to 2026 Q2 predicts $20.200 trillion. Place your bets now.)
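The experiment is easy to reproduce on synthetic data. The sketch below is my own construction, not the actual RGDP series: it generates a series that grows exactly quadratically (so its percentage growth rate declines over time), fits a linear trend to both its log and its square root on an early subsample, and extrapolates each to the final quarter.

```python
import numpy as np

# Hypothetical stand-in for real GDP: exactly quadratic in time, so the
# percentage growth rate slowly falls.
t = np.arange(280)                  # quarters
gdp = (2.0 + 0.05 * t) ** 2

# Fit linear trends to log(GDP) (exponential-growth hypothesis) and to
# sqrt(GDP) (quadratic-growth hypothesis) using only the first 160 quarters.
train = slice(0, 160)
log_coef = np.polyfit(t[train], np.log(gdp[train]), 1)
sqrt_coef = np.polyfit(t[train], np.sqrt(gdp[train]), 1)

# Extrapolate each fitted trend to the last quarter.
log_pred = np.exp(np.polyval(log_coef, t[-1]))
sqrt_pred = np.polyval(sqrt_coef, t[-1]) ** 2
```

Because the underlying series is quadratic by construction, the square-root trend extrapolates almost perfectly while the log trend badly overshoots, the same qualitative pattern as in the graphs above: an extrapolated exponential overstates future GDP whenever the true growth rate is falling.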

Of course, this analysis doesn’t prove anything. It could just be a coincidence that the quadratic growth theory happens to fit better than the exponential. But I think it does show that we need to be careful in interpreting the Great Recession.

Using the above pictures, I can come up with two plausible stories. In the first, the Great Recession represents a significant deviation from trend. There was no fundamental reason why we needed to leave the path we were on in 2006, and we should do everything we can to get back to that path. Whether that means demand-side monetary or fiscal stimulus or supply-side reforms like cutting regulation and government intervention, I will leave aside. But we should do something.

The other story is a bit more pessimistic. In this version, economic growth had been trending downwards for decades. We made a valiant effort to prop up growth on the strength of the internet in the late 1990s and housing in the early 2000s. But these gains were illusory, generated by unsustainable bubbles waiting to be popped rather than true economic progress. The recession, far from an unnecessary catastrophe, was an essential correction required for the economy to reorganize resources and adapt to the low growth reality. Enacting policies that attempt to restore previous growth rates will only fuel new bubbles to replace the old ones, paving the way for future recessions. Doing something will only make things worse.

Which story is more plausible? I wish I knew.

## What’s Wrong With Modern Macro? Part 4 How Did a "Measure of our Ignorance" Become the Cause of Business Cycles?

Part 4 in a series of posts on modern macroeconomics. Parts 1, 2, and 3 documented the revolution that transformed Keynesian economics into DSGE economics. This post begins my critique of that revolution, beginning with a discussion of its reliance on total factor productivity (TFP) shocks.

In my last post I mentioned that the real business cycle model implies technology shocks are the primary driver of business cycles. I didn’t, however, describe what these technology shocks actually are. To do that, I need to bring in the work of another Nobel Prize winning economist: Robert Solow.

### The Solow Residual and Total Factor Productivity

Previous posts in this series have focused on business cycles, which make up one half of macroeconomic theory. The other half studies economic growth over longer horizons. In a pair of papers written in 1956 and 1957, Solow revolutionized growth theory. His work attempted to quantitatively decompose the sources of growth. Challenging the beliefs of many economists at the time, he concluded that changes in capital and labor were relatively unimportant. The remainder, which has since been called the “Solow residual,” was attributed to “total factor productivity” (TFP) and interpreted as changes in technology. Concurrent research by Moses Abramovitz confirmed Solow’s findings, giving TFP 90% of the credit for economic growth in the US from 1870 to 1950. In the RBC model, the size of technology shocks is found the same way: subtract the contributions of labor and capital from total output, and what remains is TFP.

### “A Measure of Our Ignorance”

Because TFP is nothing more than a residual, it’s a stretch to call it technical change. As Abramovitz put it, it is really just “a measure of our ignorance,” capturing whatever the assumed production function fails to explain. He sums up the problem in a later paper reviewing his research

Standard growth accounting is based on the notion that the several proximate sources of growth that it identifies operate independently of one another. The implication of this assumption is that the contributions attributable to each can be added up. And if the contribution of every substantial source other than technological progress has been estimated, whatever of growth is left over – that is, not accounted for by the sum of the measured sources – is the presumptive contribution of technological progress
Moses Abramovitz (1993) – The Search for the Sources of Growth: Areas of Ignorance, Old and New p. 220

In other words, only if we have correctly specified the production function to include all of the factors that determine total output can we define what is left as technological progress. So what is this comprehensive production function that is supposed to encompass all of these factors? Often, it is simply
$Y=AK^\alpha L^{1-\alpha}$

Y represents total output in the economy. All labor hours, regardless of the skill of the worker or the type of work done, are stuffed into L. Similarly, K covers all types of capital, treating diverse capital goods as a single homogeneous blob. Everything that doesn’t fit into either of these becomes part of A. This simple function, known as the Cobb-Douglas function, appears in a wide range of economic applications. Empirically, it matches some important features of the data, notably the roughly constant shares of income accruing to capital and labor. Unfortunately, it also appears to fit any data with this feature, as Anwar Shaikh humorously demonstrated by fitting it to fake economic data constructed to spell out the word HUMBUG. Shaikh concludes that the fit of the Cobb-Douglas function is a mathematical trick rather than a proper description of the fundamentals of production. Is it close enough to consider its residual an accurate measure of technical change? I have some doubts.
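To make the accounting concrete, here is the residual calculation in a few lines of Python. The numbers are made up for illustration; only the formula comes from the production function above.

```python
import numpy as np

# Hypothetical aggregates (arbitrary units) and a capital share of 0.33,
# roughly the value implied by U.S. income shares.
Y, K, L = 100.0, 300.0, 150.0
alpha = 0.33

# From Y = A * K**alpha * L**(1 - alpha): whatever output the measured
# inputs cannot explain is swept into A, the Solow residual.
A = Y / (K ** alpha * L ** (1 - alpha))

# The same residual in growth-accounting (log) form.
log_A = np.log(Y) - alpha * np.log(K) - (1 - alpha) * np.log(L)
```

Everything mismeasured about K and L, and everything the Cobb-Douglas form leaves out, ends up in A, which is exactly why Abramovitz called it a measure of our ignorance.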

There are also more fundamental problems with aggregate production functions that will need to wait for a later post.

### Does TFP Measure Technological Progress or Something Else?

The technical-change interpretation of the Solow residual runs into serious trouble if there are variables correlated with it that are not directly related to productivity but that do affect output. Robert Hall tests three variables that plausibly fit this criterion (military spending, oil prices, and the political party of the president) and finds that all three affect the residual, casting doubt on the technology interpretation.

Hall cites five possible reasons for the discrepancy between Solow’s interpretation and reality, but they are somewhat technical. If you are interested, take a look at the paper linked above. The main takeaway should be that the simple idealized production function is a bit (or a lot) too simple. It cuts out too many of the features of reality that are essential to the workings of a real economy.

### TFP is Nothing More Than a Noisy Measure of GDP

The criticisms of the production function above are concerning, but subject to debate. We cannot say for sure whether the Cobb-Douglas, constant returns to scale formulation is close enough to reality to be useful. But there is a more powerful reason to doubt the TFP series. In my last post, I put up these graphs that appear to show a close relationship between the RBC model and the data.

Here’s another graph from the same paper

This one is not coming from a model at all. It simply plots TFP as measured as the residual described above against output. And yet the fit is still pretty good. In fact, looking at correlations with all of the variables shows that TFP alone “explains” the data about as well as the full model

What’s going on here? Basically, the table above shows that the fit of the RBC model only works so well because TFP is already so close to the data. For all its talk of microfoundations and the importance of including the optimizing behavior of agents in a model, the simple RBC framework does little more than attempt to explain changes in GDP using a noisy measure of GDP itself.

Kevin Hoover and Kevin Salyer make this point in a revealing paper where they claim that TFP is nothing more than “colored noise.” To defend this claim, they construct a fake “Solow residual” by creating a data series that shares the statistical properties of the true series but whose coefficients come from a random number generator rather than a production function. Constructed in this way, the new residual certainly has nothing to do with technology shocks, yet feeding it into the RBC model still provides an excellent fit to the data. Hoover and Salyer conclude

The relationship of actual output and model output cannot indicate that the model has captured a deep economic relationship; for there is no such relationship to capture. Rather, it shows that we are seeing a complicated version of a regression fallacy: output is regressed on a noisy version of itself, so it is no wonder that a significant relationship is found. That the model can establish such a relationship on simulated data demonstrates that it can do so with any data that are similar in the relevant dimensions. That it has done so for actual data hardly seems subject to further doubt.
Hoover and Salyer (1998) – Technology Shocks or Coloured Noise? Why real-business-cycle models cannot explain actual business cycles, p.316
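The spirit of their construction is easy to mimic. The sketch below is my own toy version, not their exact procedure: it builds a "residual" with the high persistence typically estimated for TFP, driven by nothing but a random number generator.

```python
import numpy as np

rng = np.random.default_rng(42)

# An AR(1) series with persistence rho = 0.95 and innovation s.d. 0.007,
# numbers in the neighborhood of standard RBC calibrations of the TFP
# process. Despite looking like measured TFP, it is pure colored noise.
rho, sigma, n = 0.95, 0.007, 400
z = np.zeros(n)
for i in range(1, n):
    z[i] = rho * z[i - 1] + sigma * rng.standard_normal()

# The series is strongly autocorrelated even though it contains no
# information about technology whatsoever.
autocorr = np.corrcoef(z[:-1], z[1:])[0, 1]
```

Feed a series like this into an RBC model in place of measured TFP and, as Hoover and Salyer show, the model's "fit" survives, because that fit never depended on the residual measuring technology in the first place.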

Already the RBC framework appears to be on shaky ground, but I’m just getting started (my plan for this series seems to be constantly expanding – there’s even more wrong with macro than I originally thought). My next post will be a brief discussion of the filtering method used in many DSGE applications. I will follow that with an argument that the theoretical justification for using an aggregate production function (Cobb-Douglas or otherwise) is extremely weak. At some point I will also address rational expectations, the representative agent assumption, and why the newer DSGE models that attempt to fix some of the problems of the RBC model also fail.

## What’s Wrong with Modern Macro? Part 3 Real Business Cycle and the Birth of DSGE Models

Part 3 in a series of posts on modern macroeconomics. Part 1 looked at Keynesian economics and part 2 described the reasons for its death. In this post I will explain dynamic stochastic general equilibrium (DSGE) models, which began with the real business cycle (RBC) model introduced by Kydland and Prescott and have since become the dominant framework of modern macroeconomics.

“What I am going to describe for you is a revolution in macroeconomics, a transformation in methodology that has reshaped how we conduct our science.” That’s how Ed Prescott began his Nobel Prize lecture after being awarded the prize in 2004. While he could probably benefit from some of Hayek’s humility, it’s hard to deny the truth in the statement. Lucas and Friedman may have demonstrated the failures of Keynesian models, but it wasn’t until Kydland and Prescott that a viable alternative emerged. Their 1982 paper, “Time to Build and Aggregate Fluctuations,” took the ideas of microfoundations and rational expectations and applied them to a flexible model that allowed for quantitative assessment. In the years that followed, their work formed the foundation for almost all macroeconomic research.

### Real Business Cycle

The basic setup of a real business cycle (RBC) model is surprisingly simple. There is one firm that produces one good for consumption by one consumer. Production depends on two inputs, labor and capital, as well as the level of technology. The consumer chooses how much to work, how much to consume, and how much to save based on their preferences, the current wage, and interest rates. These savings are added to the capital stock, which, combined with the choice of labor, determines how much the firm is able to produce.

There is no money, no government, no entrepreneurs. There is no unemployment (only optimal reductions in hours worked), no inflation (because there is no money), and no stock market (the one consumer owns the one firm). There are essentially none of the features that most economists before 1980 as well as non-economists today would consider critically important for the study of macroeconomics. So how are business cycles generated in an RBC model? Exclusively through shocks to the level of technology (if that seems strange it’s probably even worse than you expect – stay tuned for part 4). When consumers and firms see changes in the level of technology, their optimal choices change which then causes total output, the number of hours worked, and the level of consumption and investment to fluctuate as well. Somewhat shockingly, when the parameters are calibrated to match the data, this simple model does a good job capturing many of the features of measured business cycles. The following graphs (from Uhlig 2003) demonstrate a big reason for the influence of the RBC model.

Looking at those graphs, you might wonder why there is anything left for macroeconomists to do. Business cycles have been solved! However, as I will argue in part 4, the perceived closeness of model and data is largely an illusion. There are, in my opinion, fundamental issues with the RBC framework that render it essentially meaningless in terms of furthering our understanding of real business cycles.

### The Birth of Dynamic Stochastic General Equilibrium Models

Although many economists would point to the contribution of the RBC model in explaining business cycles on its own, most would agree that its greater significance came from the research agenda it inspired. Kydland and Prescott’s article was one of the first of what would come to be called Dynamic Stochastic General Equilibrium (DSGE) models. They are dynamic because they study how a system changes over time and stochastic because they introduce random shocks. General equilibrium refers to the fact that the agents in the model are constantly maximizing (consumers maximizing utility and firms maximizing profits) and markets always clear (prices are set such that supply and demand are equal in each market in all time periods).

Due in part to the criticisms I will outline in part 4, DSGE models have evolved from the simple RBC framework to include many of the features that were lost in the transition from Keynes to Lucas and Prescott. Much of the research agenda in the last 30 years has aimed to resurrect Keynes’s main insights in microfounded models using modern mathematical language. As a result, they have come to be known as “New Keynesian” models. Thanks to the flexibility of the DSGE setup, adding additional frictions like sticky prices and wages, government spending, and monetary policy was relatively simple and has enabled DSGE models to become sufficiently close to reality to be used as guides for policymakers. I will argue in future posts that despite this progress, even the most advanced NK models fall short both empirically and theoretically.

## Why Nominal GDP Targeting?

Probably the best example that blogging can have a significant influence on economic thinking has been Scott Sumner’s advocacy of nominal GDP (NGDP) targeting at his blog The Money Illusion (and more recently EconLog). Scott has almost singlehandedly transformed NGDP targeting from an obscure idea to one that is widely known and increasingly supported. Before getting to Scott’s proposals, let me give a very brief history of monetary policy in the United States over the last 30 years.

From 1979 to 1982, the influence of Milton Friedman led the Federal Reserve to attempt a policy of constant monetary growth. In other words, rather than react to economic fluctuations, the central bank would simply increase the money supply by the same rate each year. Friedman’s so-called “k-percent” rule failed to promote economic stability, but the principle behind it continued to influence policy. In particular, it ushered in an era of rules rather than discretion. Under Alan Greenspan, the Fed adjusted policy to changes in inflation and output in a formulaic fashion. Studying these policies, John Taylor developed a simple equation (now called the “Taylor Rule”) that accurately predicted Fed behavior as a function of macro variables. The stability of the economy from 1982 to 2007 led to the period being called “the Great Moderation” and to Greenspan being dubbed “the Maestro.” Monetary policy appeared to have been solved. Of course, everybody knows what happened next.
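Taylor's original 1993 rule is simple enough to state in a few lines. This is the textbook version, with equal weights of 0.5 on the inflation and output gaps and a 2 percent equilibrium real rate; the Fed never followed it mechanically, and later variants use different weights.

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Taylor (1993): recommended nominal policy rate, in percent.

    i = r* + pi + 0.5 * (pi - pi*) + 0.5 * output_gap
    """
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# At target inflation (2%) with a zero output gap, the rule gives the
# "neutral" 4% nominal rate; higher inflation or a boom calls for tightening.
baseline = taylor_rule(2.0, 0.0)    # 4.0
overheated = taylor_rule(4.0, 1.0)  # 7.5
```

Because the coefficient on inflation exceeds one in total (1 + 0.5), the rule raises the nominal rate more than one-for-one with inflation, which is what makes it stabilizing.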

On Milton Friedman’s 90th birthday, former Federal Reserve chairman Ben Bernanke gave a speech where, on the topic of the Great Depression, he said to Friedman, “You’re right, we did it. We’re very sorry. But thanks to you, we won’t do it again.” According to Scott Sumner, however, that’s exactly what happened. Many explanations have been given for the Great Recession, but Scott believes most of the blame can be placed on the Fed for failing to keep nominal GDP on trend.

To defend his NGDP targeting proposal, Scott outlines a simple “musical chairs” model of unemployment. He describes that model here and here. Essentially, the idea is that the total number of jobs in the economy can be approximated by dividing NGDP by the average wage (this would hold exactly if we replaced NGDP with total labor income, but the two are highly correlated). If NGDP falls and wages are slow to adjust, the total number of jobs must decrease, leaving some workers unemployed (like taking away chairs in musical chairs). Roger Farmer has demonstrated a close connection between NGDP/W and unemployment in the data, which appears to support this intuition
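The musical-chairs arithmetic can be illustrated in a few lines. The numbers are hypothetical and the model is deliberately crude: jobs are approximated as NGDP divided by the average wage.

```python
ngdp = 1.0e13     # hypothetical nominal GDP: $10 trillion
avg_wage = 5.0e4  # hypothetical average annual wage per job: $50,000

jobs_before = ngdp / avg_wage            # 200 million "chairs"
jobs_after = (0.95 * ngdp) / avg_wage    # NGDP falls 5%, wages stay put
jobs_lost = jobs_before - jobs_after     # 10 million workers left standing
```

If wages adjusted instantly (avg_wage falling 5% alongside NGDP), jobs_after would equal jobs_before; the unemployment here comes entirely from combining a nominal shock with sticky wages, which is exactly why the story puts the blame on the central bank that let NGDP fall.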

We can also compare a central bank’s response to shocks under inflation targeting and NGDP targeting. Imagine a negative supply shock (for example, an oil shock) hits the economy. In a simple AS-AD model, this shock would be expected to raise prices and reduce output. Under an inflation targeting regime, the central bank would see only the higher prices and tighten monetary policy, deepening the drop in output. However, since output and inflation move in opposite directions, the change in NGDP (which is just the price level times real output) would be much smaller, so an NGDP-targeting central bank would respond less to a supply shock. A negative demand shock, by contrast, causes both prices and output to fall, leading to a loosening of policy under either target. In other words, an NGDP-targeting central bank would respond strongly to demand shocks while allowing the market to deal with the effects of supply shocks. Since a supply shock actually reduces the productive capacity of an economy, we should not expect a central bank to be able to combat supply shocks effectively. An NGDP target ensures that it will not try.
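A back-of-the-envelope comparison shows the asymmetry. All numbers below are hypothetical, chosen only to make the two shocks comparable in size.

```python
# Baseline: price level 1.00, real output 100, so NGDP = P * Y = 100.
base_ngdp = 1.00 * 100.0

# Negative supply shock: prices rise, output falls.
supply_P, supply_Y = 1.04, 97.0
# Negative demand shock: prices and output both fall.
demand_P, demand_Y = 0.98, 97.0

# An inflation targeter reads +4% inflation after the supply shock
# (tighten!) and -2% after the demand shock (loosen).
supply_inflation = supply_P - 1.00
demand_inflation = demand_P - 1.00

# An NGDP targeter sees almost no gap after the supply shock but a large
# shortfall after the demand shock, so it loosens only in the second case.
supply_gap = supply_P * supply_Y - base_ngdp   # roughly +0.9
demand_gap = demand_P * demand_Y - base_ngdp   # roughly -4.9
```

Because price and output movements offset each other inside P × Y after a supply shock, the NGDP gap stays near zero there while the demand shock opens a large one, which is the whole argument in two numbers.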

Nick Rowe offers another simple defense of NGDP targeting. He shows that Canadian inflation data gives no indication of any changes occurring in 2008. Following a 2% inflation target, it looks like the central bank did everything it should to keep the economy on track

Looking at a graph of NGDP makes the recession much more obvious. Had the central bank instead been following a policy of targeting NGDP, they would have needed to be much more active to stay on target. Nick therefore calls inflation “the dog that didn’t bark.” Since inflation failed to warn us a recession was imminent, perhaps NGDP is a better indicator of monetary tightness.

Of course, we also need to think about whether a central bank would even be able to hit its NGDP target if it decided to adopt one. Scott has a very interesting idea on the best way to stay on target. I will soon have another post detailing that proposal.