What’s Wrong With Modern Macro? Part 15 Where Do We Go From Here?

I’ve spent 14 posts telling you what’s wrong with modern macro. It’s about time for something positive. Here I hope to give a brief outline of what my ideal future of macro would look like. I will look at four current areas of research in macroeconomics outside the mainstream (some more developed than others) that I think offer a better way to do research than currently accepted methods. I will expand upon each of these in later posts.


Learning and Heterogeneous Expectations

Let’s start with the smallest deviation from current research. In Part 8 I argued that assuming rational expectations, which means that agents in the model form expectations based on a correct understanding of the environment they live in, is far too strong an assumption. To deal with that criticism, we don’t even need to leave the world of DSGE. A number of macroeconomists have explored models where agents are required to learn about how important macroeconomic variables move over time.

These kinds of models generally come in two flavors. The first is the econometric learning approach summarized in Evans and Honkapohja’s 2001 book, Learning and Expectations in Macroeconomics, which assumes that agents in the model are no smarter than the economists who create them. They must therefore use the same econometric techniques to estimate parameters that economists do. The second approach assumes even less about the intelligence of agents by only allowing them to use simple heuristics for prediction. Based on the framework of Brock and Hommes (1997), these heuristic switching models allow agents to hold heterogeneous expectations in equilibrium, an outcome that is difficult to achieve with rational expectations but prevalent in reality. A longer post will look at these types of models in more detail soon.
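To make the heuristic switching idea concrete, here is a minimal sketch in the spirit of Brock and Hommes (1997). It is my own illustration rather than a replication of any particular paper: a naive rule and a trend-extrapolating rule compete to forecast a price, agents lean toward whichever rule made the smaller error last period, and the realized price feeds back from the average forecast. Every parameter value is made up purely for illustration.

```python
import numpy as np

# Minimal heuristic-switching sketch in the spirit of Brock and Hommes (1997).
# Two rules forecast the next price; agents lean toward whichever rule made the
# smaller error last period. All parameter values are purely illustrative.

np.random.seed(0)
T = 200
beta = 2.0                      # intensity of choice: how strongly agents chase performance
lam = 0.9                       # feedback from the average forecast to the realized price
p = np.zeros(T); p[0], p[1] = 1.0, 1.05
f_naive = np.zeros(T)           # forecasts made by the naive rule
f_trend = np.zeros(T)           # forecasts made by the trend-extrapolation rule
share_naive = np.full(T, 0.5)   # fraction of agents using the naive rule

for t in range(2, T):
    # Each rule forecasts p[t] using information available through t-1
    f_naive[t] = p[t-1]                              # naive: tomorrow = today
    f_trend[t] = p[t-1] + 0.8 * (p[t-1] - p[t-2])    # extrapolate the last change

    # Discrete-choice switching based on last period's squared forecast errors
    err_naive = (f_naive[t-1] - p[t-1]) ** 2
    err_trend = (f_trend[t-1] - p[t-1]) ** 2
    w_naive = np.exp(-beta * err_naive)
    w_trend = np.exp(-beta * err_trend)
    share_naive[t] = w_naive / (w_naive + w_trend)

    # The realized price depends on the population-average forecast plus noise
    avg_forecast = share_naive[t] * f_naive[t] + (1 - share_naive[t]) * f_trend[t]
    p[t] = lam * avg_forecast + 0.01 * np.random.randn()

print("average share using the naive rule:", round(share_naive[2:].mean(), 3))
```

The key ingredient is the discrete-choice switching weight governed by the intensity-of-choice parameter beta: rules that forecast badly lose followers, but no single forecast ever has to take over, so heterogeneous expectations can survive in equilibrium.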

Experiments

Most macroeconomic research is based on the same set of historical economic variables. There are probably more papers about the history of US macroeconomics than there are data points. Even if we include all of the countries that provide reliable economic data, that doesn’t leave us with a lot of variation to exploit. In physics or chemistry, an experiment can be run hundreds or thousands of times. In economics, we can only observe one run.

One possible solution is to design controlled experiments aimed at answering macroeconomic questions. The obvious objection to such an idea is that a lab with a few dozen people interacting can never hope to capture the complexities of a real economy. That criticism makes sense until you consider that many accepted models only have one agent. Realism has never been the strong point of macroeconomics. Experiments of course won’t be perfect, but are they worse than what we have now? John Duffy gives a nice survey of some of the recent advances in experimental macroeconomics here, which I will discuss in a future post as well.

Agent-Based Models

Perhaps the most promising alternative to DSGE macro models, an agent-based model (ABM) attempts to simulate an economy from the ground up inside a computer. In particular, an ABM begins with a group of agents that generally follow a set of simple rules. The computer then simulates the economy by letting these agents interact according to the provided rules. Macroeconomic results are obtained by simply adding up the outcomes of the individual agents.
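As a picture of that recipe (simple individual rules, repeated interaction, aggregation by summation), here is a toy sketch of my own. It is not any published ABM, and the rule of thumb and all numbers are invented purely for illustration:

```python
import numpy as np

# Toy agent-based sketch (not a published model): each household follows a
# simple rule of thumb, and macro aggregates are computed by summing over agents.

np.random.seed(1)
N, T = 1000, 100
wealth = np.ones(N)                 # each agent starts with one unit of wealth
agg_consumption = np.zeros(T)

for t in range(T):
    income = np.random.lognormal(mean=0.0, sigma=0.3, size=N)   # idiosyncratic income draws
    # Rule of thumb: consume 80% of income plus 10% of current wealth
    consumption = 0.8 * income + 0.1 * wealth
    consumption = np.minimum(consumption, wealth + income)       # can't spend more than you have
    wealth = wealth + income - consumption                        # individual budget constraint
    agg_consumption[t] = consumption.sum()                        # aggregation = simple summation

print("aggregate consumption, last 5 periods:", np.round(agg_consumption[-5:], 1))
```

Real ABMs add matching between buyers and sellers, firms, and explicit markets, but the basic logic of building aggregates by summing over individual agents is the same.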

I will give examples of more ABMs in future posts, but one I really like is a 2000 paper by Peter Howitt and Robert Clower. In their paper they begin with a decentralized economy that consists of shops that each trade only two commodities. Under a wide range of assumptions, they show that in most simulations of the economy, one of the commodities ends up being traded at nearly every shop. In other words, one commodity becomes money. Even more interesting, agents in the model coordinate to exploit gains from trade without needing the assumption of a Walrasian Auctioneer to clear the market. Their simple framework has since been expanded into a full-fledged model of the economy.

Empirical Macroeconomics

If you are familiar with macroeconomic research, it might seem odd that I put empirical macroeconomics forward as an alternative path. It is almost essential for every macroeconomic paper today to have some kind of empirical component. However, the kinds of empirical exercises performed in most macroeconomic papers don’t seem very useful to me. They focus on estimating parameters in order to force models that look nothing like reality to nevertheless match key moments in real data. In Part 10 I explained why that approach doesn’t make sense to me.

In 1991, Larry Summers wrote a paper called “The Scientific Illusion in Empirical Macroeconomics” where he distinguishes between formal econometric testing of models and more practical econometric work. He argues that economic work like Friedman and Schwartz’s A Monetary History of the United States, despite eschewing formal modeling and using a narrative approach, contributed much more to our understanding of the effects of monetary policy than any theoretical study. Again, I will save a longer discussion for a future post, but I agree that macroeconomic research should embrace practical empirical work rather than maintain its current focus on theory.


The future of macro should be grounded in diversity. DSGE has had a good run. It has captivated a generation of economists with its simple but flexible setup and ability to provide answers to a great variety of economic questions. Perhaps it should remain a prominent pillar in the foundation of macroeconomic research. But it shouldn’t be the only pillar. Questioning the assumptions that lie at the heart of current models – rational expectations, TFP shocks, Walrasian general equilibrium – should be encouraged. Alternative modeling techniques like agent-based modeling should not be pushed to the fringes, but welcomed to the forefront of the research frontier.

Macroeconomics is too important to ignore. What causes business cycles? How can we sustain strong economic growth? Why do we see periods of persistent unemployment, or high inflation? Which government or central bank policies will lead to optimal outcomes? I study macroeconomics because I want to help answer these questions. Much of modern macroeconomics seems to find its motivation instead in writing fancy mathematical models. There are other approaches – let’s set them free.

What’s Wrong With Modern Macro? Part 14 A Pretense of Knowledge in Macroeconomics

Part 14 in a series of posts on modern macroeconomics. This post concludes my criticisms of the current state of macroeconomic research by tying all of my previous posts to Hayek’s warning about the pretense of knowledge in economics.


Russ Roberts, host of the excellent EconTalk podcast, likes to say that you know macroeconomists have a sense of humor because they use decimal points. His implication is that for an economist to believe that they can predict growth or inflation or any other variable down to a tenth of a percentage point is absolutely ridiculous. And yet economists do it all the time. Whether it’s the CBO analysis of a policy change or the counterfactuals of an academic macroeconomics paper, quantitative analysis has become an almost necessary component of economic research. There’s nothing inherently wrong with this philosophy. Making accurate quantitative predictions would be an incredibly useful role for economics.

Except for one little problem. We’re absolutely terrible at it. Prediction in economics is really hard, and we are nowhere near the level of understanding that would allow our models to deliver accurate quantitative results. Everybody in the profession seems to realize this, which means nobody believes many of the results their theories produce.

Let me give two examples. The first, which I have already mentioned in previous posts, is Lucas’s result that the cost of business cycles in theoretical models is tiny – around one tenth of one percent of welfare loss compared to an economy with no fluctuations. Another is the well-known puzzle in international economics that trade models have great difficulty producing large gains from trade. A seminal paper by Arkolakis, Costinot, and Rodriguez-Clare evaluates how much our understanding of the gains from trade has improved from the theoretical advances of the last 10 years. Their conclusion: “so far, not much.”
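For a sense of where numbers like that come from, here is a back-of-envelope version of Lucas’s calculation. This is the standard textbook approximation rather than his exact exercise, and the volatility number is a rough illustrative value:

```python
# Back-of-envelope version of Lucas's cost-of-business-cycles calculation.
# With CRRA risk aversion gamma and i.i.d. log consumption fluctuations of
# standard deviation sigma, the share of consumption an agent would give up
# to remove all fluctuations is approximately 0.5 * gamma * sigma**2.
gamma = 1.0     # log utility
sigma = 0.032   # rough std of detrended log US consumption (illustrative value)
cost = 0.5 * gamma * sigma ** 2
print(f"welfare cost of fluctuations: {100 * cost:.3f}% of consumption")  # about 0.05%
```

Even cranking risk aversion up by an order of magnitude keeps the number below one percent, which is exactly why the result is so jarring.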

How large are the costs of business cycles? Essentially zero. How big are the gains from trade? Too small to matter much. That’s what our theories tell us. Anybody who has lived in this world the last 30 years should be able to immediately take away some useful information from these results: the theories stink. We just lived through the devastation of a large recession. We’ve seen globalization lift millions out of poverty. Nobody believes for a second that business cycles and trade don’t matter. In fact, I’m not sure anybody takes the quantitative results of any macro paper seriously. For some reason we keep writing them anyway.

In the speech that provided the inspiration for the title of this blog, Hayek warned economists of the dangers of the pretense of knowledge. Unlike the physical sciences, where controlled laboratory experiments are possible and therefore quantitative predictions might be feasible, economics is a complex science. He argues,

The difficulties which we encounter in [complex sciences] are not, as one might at first suspect, difficulties about formulating theories for the explanation of the observed events – although they cause also special difficulties about testing proposed explanations and therefore about eliminating bad theories. They are due to the chief problem which arises when we apply our theories to any particular situation in the real world. A theory of essentially complex phenomena must refer to a large number of particular facts; and to derive a prediction from it, or to test it, we have to ascertain all these particular facts. Once we succeeded in this there should be no particular difficulty about deriving testable predictions – with the help of modern computers it should be easy enough to insert these data into the appropriate blanks of the theoretical formulae and to derive a prediction. The real difficulty, to the solution of which science has little to contribute, and which is sometimes indeed insoluble, consists in the ascertainment of the particular facts
Hayek (1989) – The Pretence of Knowledge

In other words, it’s not that developing models to explain economic phenomena is especially challenging, but rather that there is no way to collect sufficient information to apply any theory quantitatively. Preferences, expectations, technology. Any good macroeconomic theory would need to include each of these features, but each is almost impossible to measure.

Instead of admitting ignorance, we make assumptions. Preferences are all the same. Expectations are all rational. Production technologies take only a few inputs and outputs. Fluctuations are driven by a single abstract technology shock. Everyone recognizes that any realistic representation of these features would require knowledge far beyond what is available to any single mind. Hayek saw that this difficulty placed clear restrictions on what economists could do. We can admit that every model needs simplification while also remembering that those simplifications constrain the ability of the model to connect to reality. The default position of modern macroeconomics instead seems to be to pretend the constraints don’t exist.

Many economists would probably agree with many of the points I have made in this series, but it seems that most believe the issues have already been solved. There are a lot of models out there and some of them do attempt to deal directly with some of the problems I have identified. There are models that try to reduce the importance of TFP as a driver of business cycles. There are models that don’t use the HP-Filter, models that have heterogeneous agents, models that introduce financial frictions and other realistic features absent from the baseline models. For any flaw in one model, there is almost certainly another model that attempts to solve it. But in solving that single problem they likely introduce about ten more. Other papers will deal with those problems, but maybe they forget about the original problem. For each problem that arises, we just introduce a new model. And then we treat those issues as solved even though they are solved by a set of models that potentially produce conflicting results, with no real way to tell which is more useful.

Almost every criticism I have written about in the last 13 posts of this series can be traced back to the same source. Macroeconomists try to do too much. They haven’t heeded Hayek’s plea for humility. Despite incredible simplifying assumptions, they take their models to real data and attempt to make predictions about a world that bears only a superficial resemblance to the model used to represent it. Trying to answer the big questions about macroeconomics with such a limited toolset is like trying to build a skyscraper with only a hammer.

What’s Wrong With Modern Macro? Part 13 No Other Game in Town

Part 13 in a series of posts on modern macroeconomics. Previous posts in this series have pointed out many problems with DSGE models. This post aims to show that these problems have either been (in my opinion wrongly) dismissed or ignored by most of the profession. 


“If you have an interesting and coherent story to tell, you can tell it in a DSGE model. If you cannot, your story is incoherent.”

The above quote comes from a testimony given by prominent Minnesota macroeconomist V.V. Chari to the House of Representatives in 2010 as they attempted to determine how economists could have missed an event as large as the Great Recession. Chari argues that although macroeconomics does have room to improve, it has made substantial progress in the last 30 years and there is nothing fundamentally wrong with its current path.

Of course, not everybody has been so kind to macroeconomics. Even before the crisis, prominent macroeconomists had begun voicing some concerns about the current state of macroeconomic research. Here’s Robert Solow in 2003:

The original impulse to look for better or more explicit micro foundations was probably reasonable. It overlooked the fact that macroeconomics as practiced by Keynes and Pigou was full of informal microfoundations. (I mention Pigou to disabuse everyone of the notion that this is some specifically Keynesian thing.) Generalizations about aggregative consumption-saving patterns, investment patterns, money-holding patterns were always rationalized by plausible statements about individual–and, to some extent, market–behavior. But some formalization of the connection was a good idea. What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages.

How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up?
Solow (2003) – Dumb and Dumber in Macroeconomics

Olivier Blanchard in 2008:

There is, however, such a thing as too much convergence. To caricature, but only slightly: A macroeconomic article today often follows strict, haiku-like, rules: It starts from a general equilibrium structure, in which individuals maximize the expected present value of utility, firms maximize their value, and markets clear. Then, it introduces a twist, be it an imperfection or the closing of a particular set of markets, and works out the general equilibrium implications. It then performs a numerical simulation, based on calibration, showing that the model performs well. It ends with a welfare assessment.

Such articles can be great, and the best ones indeed are. But, more often than not, they suffer from some of the flaws I just discussed in the context of DSGEs: Introduction of an additional ingredient in a benchmark model already loaded with questionable assumptions. And little or no independent validation for the added ingredient.
Olivier Blanchard (2008) – The State of Macro

And more recently, the vitriolic critique of Paul Romer:

Would you want your child to be treated by a doctor who is more committed to his friend the anti-vaxer and his other friend the homeopath than to medical science? If not, why should you expect that people who want answers will keep paying attention to economists after they learn that we are more committed to friends than facts.
Romer (2016) – The Trouble with Macroeconomics

These aren’t empty critiques by no-name graduate student bloggers. Robert Solow and Paul Romer are two of the biggest names in growth theory in the last 50 years. Olivier Blanchard was the chief economist at the IMF. One would expect that these criticisms would cause the profession to at least strongly consider a re-evaluation of its methods. Looking at the recent trends in the literature as well as my own experience doing macroeconomic research, it hasn’t. Not even close. Instead, it seems to have doubled down on the DSGE paradigm, falling much closer to Chari’s point of view that “there is no other game in town.”

But it’s even worse than that. Taken at its broadest, sticking to DSGE models is not too restrictive. Dynamic simply means that the models have a forward-looking component, which is obviously an important feature to include in a macroeconomic model. Stochastic means there should be some randomness, which again is probably a useful feature (although I do think deterministic models can be helpful as well – more on this later). General equilibrium is a little harder to swallow, but it still provides a good measure of flexibility.

Even within the DSGE framework, however, straying too far from the accepted doctrines of macroeconomics is out of the question. Want to write a paper that deviates from rational expectations? You’d better have a really good reason. Never mind that survey and experimental evidence shows large differences in how people form expectations, or the questionable assumption that everybody in the model knows the model and takes it as truth, or that the founder of rational expectations, John Muth, later said:

It is a little surprising that serious alternatives to rational expectations have never been proposed. My original paper was largely a reaction against very naive expectations hypotheses juxtaposed with highly rational decision-making behavior and seems to have been rather widely misinterpreted.
Quoted in Hoover (2013) – Rational Expectations: Retrospect and Prospect

Using an incredibly strong assumption like rational expectations is accepted without question. Any deviation requires explanation. Why? As far as I can tell, it’s just Kevin Malone economics: that’s the way it has always been done and nobody has any incentive to change it. And so researchers all get funneled into making tiny changes to existing frameworks – anything truly new is actively discouraged.

Now, of course, there are reasons to be wary of change. It would be a waste to completely ignore the path that brought us to this point and, despite their flaws, I’m sure there have been some important insights generated by the DSGE research program (although to be honest I’m having trouble thinking of any). Maybe it really is the best we can do. But wouldn’t it at least be worth devoting some time to alternatives? I would estimate that 99.99% of theoretical business cycle papers published in top 5 journals are based around DSGE models. Chari wasn’t exaggerating when he said there is no other game in town.

Wouldn’t it be nice if there were? Is there any reason other than tradition that every single macroeconomics paper needs to follow the exact same structure, to draw from the same restrictive set of assumptions? If even 10% of the time and resources devoted to tweaking existing models were instead spent on coming up with entirely new ways of doing macroeconomic research, I have no doubt that a viable alternative could be found. And there already are some (again, more on this later), but they exist only on the fringes of the profession. I think it’s about time for that to change.

What’s Wrong With Modern Macro? Part 11 Building on a Broken Foundation

Part 11 in a series of posts on modern macroeconomics. The discussion to this point has focused on the early contributions to DSGE modeling by Kydland and Prescott. Obviously, in the last 30 years, economic research has advanced beyond their work. In this post I will argue that it has advanced in the wrong direction.


In Part 3 of this series I outlined the Real Business Cycle (RBC) model that began macroeconomics on its current path. The rest of the posts in this series have offered a variety of reasons why the RBC framework fails to provide an adequate model of the economy. An easy response to this criticism is that it was never intended to be a final answer to macroeconomic modeling. Even if Kydland and Prescott’s model misses important features of a real economy, doesn’t it still provide a foundation to build upon? Since the 1980s, the primary agenda of macroeconomic research has been to build on this base, to relax some of the more unrealistic assumptions that drove the original RBC model and to add new assumptions to better match economic patterns seen in the data.

New Keynesian Models

Part of the appeal of the RBC model was its simplicity. Driving business cycles was a single shock: exogenous changes in TFP attributed to changes in technology. However, the real world is anything but simple. In many ways, the advances in the methods used by macroeconomists in the 1980s came only at the expense of a rich understanding of the complex interactions of a market economy. Compared to Keynes’s analysis 50 years earlier, the DSGE models of the 1980s seem a bit limited.

What Keynes lacked was clarity. Eighty years after the publication of the General Theory, economists still debate “what Keynes really meant.” For all of their faults, DSGE models at the very least guarantee clear exposition (assuming you understand the math) and internal consistency. New Keynesian (NK) models attempt to combine these advantages with some of the insights of Keynes.

The simplest distinction between an RBC model and an NK model lies in the assumptions made about the flexibility of prices. In early DSGE models, prices (including wages) were assumed to be fully flexible, constantly adjusting in order to ensure that all markets cleared. The problem (or benefit, depending on your point of view) with perfectly flexible prices is that it makes it difficult for policy to have any impact. Combining rational expectations with flexible prices means monetary shocks cannot have any real effects. Prices and wages simply adjust immediately to return to the same allocation of real output.

To reintroduce a role for policy, New Keynesians instead assume that prices are sticky. Now, when a monetary shock hits the economy, firms are unable to change their prices, so they respond to the higher demand by increasing real output. As research in the New Keynesian program developed, more and more was added to the simple RBC base. The first RBC model was entirely frictionless and included only one shock (to TFP). It could be summarized in just a few equations. A more recent macro model, the widely cited Smets and Wouters (2007), adds frictions like investment adjustment costs, variable capital utilization, habit formation, and inflation indexing, in addition to seven structural shocks, leading to dozens of equilibrium equations.

Building on a Broken Foundation

New Keynesian economists have their heart in the right place. It’s absolutely true that the RBC model produces results at odds with much of reality. Attempting to move the model closer to the data is certainly an admirable goal. But, as I have argued in the first 10 posts of this series, the RBC model fails not so much because it abstracts from the features we see in real economies, but because the assumptions it makes are not well suited to explain economic interactions in the first place.

New Keynesian models do almost nothing to address the issues I have raised. They still rely on aggregating firms and consumers into representative entities that offer only the illusion of microfoundations rather than truly deriving aggregate quantities from realistic human behavior. They still assume rational expectations, meaning every agent believes that the New Keynesian model holds and knows that every other agent also believes it holds when they make their decisions. And they still make quantitative policy proposals by assuming a planner that knows the entire economic environment.

One potential improvement of NK models over their RBC counterparts is less reliance on (potentially misspecified) technology shocks as the primary driver of business cycle fluctuations. The “improvement,” however, comes only by introducing other equally suspect shocks. A paper by V.V. Chari, Patrick Kehoe, and Ellen McGrattan argues that four of the “structural shocks” in the Smets-Wouters setup do not have a clear interpretation. In other words, although the shocks can help the model match the data, the reason why they are able to do so is not entirely clear. For example, one of the shocks in the model is a wage markup, a gap between the wage and the household’s marginal rate of substitution between consumption and leisure. The authors argue that movements in this markup could be caused either by increased bargaining power (through unions, etc.), which would call for government intervention to break up unions, or by an increased preference for leisure, which is an efficient outcome and requires no policy. The Smets-Wouters model can say nothing about which interpretation is more accurate and is therefore unhelpful in prescribing policy to improve the economy.

Questionable Microfoundations

I mentioned above that the main goal of the New Keynesian model was to revisit the features Keynes talked about decades ago using modern methods. In today’s macro landscape, microfoundations have become essential. Keynes spoke in terms of broad aggregates, but without understanding how those aggregates are derived from optimizing behavior at a micro level, they are vulnerable to the Lucas critique. But although the NK model attempts to put in more realistic microfoundations, many of its assumptions seem at odds with the ways in which individuals actually act.

Take the assumption of sticky prices. It’s probably a valid assumption to include in an economic model. With no frictions, an optimizing firm would want to change its price every time new economic data became available. Obviously that doesn’t happen, as many goods stay at a constant price for months or years. So of course if we want to match the data we are going to need some reason for firms to hold prices constant. The most common way to achieve this result is called “Calvo pricing” in honor of a paper by Guillermo Calvo. And how do we ensure that some prices in the economy remain sticky? We simply assign each firm some probability that it will be unable to change its price. In other words, we assume exactly the result we want to get.
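To see how literal that assumption is, here is a bare-bones sketch of my own (with made-up numbers, not any published calibration): a lottery decides each period which firms may reset their price, so the average duration of prices is pinned down entirely by the assumed probability.

```python
import numpy as np

# Bare-bones illustration of Calvo pricing: each period a lottery decides which
# firms may reset their price. Stickiness comes straight from the assumed
# probability theta, not from any decision the firms make. Numbers are made up.

np.random.seed(2)
theta = 0.75                 # probability a firm is stuck with last period's price
T, n_firms = 400, 5000
desired = np.cumsum(0.01 + 0.02 * np.random.randn(T))   # common desired (log) price path
posted = np.zeros(n_firms)   # each firm's posted (log) price
age = np.zeros(n_firms)      # periods since the firm last reset its price
spells = []                  # completed spell lengths between price changes

for t in range(T):
    can_adjust = np.random.rand(n_firms) > theta   # the Calvo lottery
    spells.extend((age[can_adjust] + 1).tolist())  # record how long the old price lasted
    posted[can_adjust] = desired[t]                # adjusters jump to the desired price
    age[can_adjust] = 0
    age[~can_adjust] += 1

print("average price duration (periods):", round(np.mean(spells), 2))  # close to 1/(1-theta) = 4
```

With theta = 0.75, prices last about four periods on average, not because any firm weighed costs and benefits, but because we wrote 0.75 into the lottery.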

Of course, economists have recognized that Calvo pricing isn’t really a satisfactory final answer, so they have worked to microfound Calvo’s idea through an optimization problem. The most common method, proposed by Rotemberg in 1982, is called the menu cost model. In this model of price setting, a firm must pay some cost every time it changes its prices. This cost ensures that a firm will only change its price when it is especially profitable to do so. Small price changes every day no longer make sense. The term menu cost comes from the example of a firm needing to print out new menus every time it changes one of its prices, but it can be applied more broadly to encompass every cost a firm might incur by changing a price. More specifically, price changes could have an adverse effect on consumers, causing some to leave the store. They could require more advanced computer systems, more complex interactions with suppliers, or new competitive games between firms.
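Here is a toy version of that idea (again my own illustration with arbitrary numbers, not any published model): the firm tolerates a gap between its posted price and its profit-maximizing price until the loss from the gap outweighs the cost of changing it. For simplicity the sketch compares a one-period loss to the cost rather than solving the full dynamic problem.

```python
import numpy as np

# Menu-cost sketch: a firm changes its (log) price only when the profit loss
# from being away from its target price outweighs a fixed adjustment cost.
# This produces an Ss-style band of inaction. Numbers are arbitrary.

np.random.seed(3)
T = 300
menu_cost = 0.02
loss_curvature = 5.0                             # per-period loss = curvature * gap**2
target = np.cumsum(0.01 * np.random.randn(T))    # drifting profit-maximizing (log) price
price = 0.0
n_changes = 0

for t in range(T):
    gap = target[t] - price
    # Adjust only if the flow loss from the gap exceeds the menu cost
    if loss_curvature * gap ** 2 > menu_cost:
        price = target[t]
        n_changes += 1

print(f"price changed in {n_changes} of {T} periods")
```

The result is the familiar band of inaction: many periods of no change punctuated by occasional lumpy adjustments.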

But by wrapping all of these into one “menu cost,” am I really describing a firm’s problem? A “true” microfoundation would explicitly model each of these features at a firm level, with different speeds of price adjustment and different reasons causing firms to hold prices steady. What do we gain by adding optimization to our models if that optimization has little to do with the problem a real firm would face? Are we any better off with “fake” microfoundations than we were with statistical aggregates? I can’t help but think that the “microfoundations” in modern macro models are simply fancy ways of finagling optimization problems until we get the result we want. Interesting mathematical exercise? Maybe. Improving our understanding of the economy? Unclear.

Other additions to the NK model that purport to move the model closer to reality are also suspect. As Olivier Blanchard describes in his 2008 essay, “The State of Macro:”

Reconciling the theory with the data has led to a lot of unconvincing reverse engineering. External habit formation—that is a specification of utility where utility depends not on consumption, but on consumption relative to lagged aggregate consumption—has been introduced to explain the slow adjustment of consumption. Convex costs of changing investment, rather than the more standard and more plausible convex costs of investment, have been introduced to explain the rich dynamics of investment. Backward indexation of prices, an assumption which, as far as I know, is simply factually wrong, has been introduced to explain the dynamics of inflation. And, because, once they are introduced, these assumptions can then be blamed on others, they have often become standard, passed on from model to model with little discussion.

In other words, we want to move the model closer to the data, but we do so by offering features that bear little resemblance to actual human behavior. And if the models we write do not actually describe individual behavior at a micro level, can we really still call them “microfounded”?

And They Still Don’t Match the Data

In Part 10 of this series, I discussed the idea that we might not want our models to match the data. That view is not shared by most of the rest of the profession, however, so let’s judge these models on their own terms. Maybe the microfoundations are unrealistic and the shocks misspecified, but, as Friedman argued, who cares about assumptions as long as we get a decent result? New Keynesian models also fail in this regard.

Forecasting in general is very difficult for any economic model, but one would expect that after 40 years of toying with these models, getting closer and closer to an accurate explanation of macroeconomic phenomena, we would be able to make somewhat decent predictions. What good is explaining the past if it can’t help inform our understanding of the future? Here’s what the forecasts of GDP growth look like using a fully featured Smets-Wouters NK model:

[Figure: DSGE forecasts of GDP growth plotted against realized data at quarterly forecast horizons. Source: Gurkaynak et al. (2013)]

Forecast horizons are quarterly and we want to focus on the DSGE prediction (light blue) against the data (red). One quarter ahead, the model actually does a decent job (probably because of the persistence of growth over time), but its predictive power is quickly lost as we expand the horizon. The table below shows how poor this fit actually is.

[Table: forecast accuracy of the DSGE model compared with alternative forecasting methods. Source: Edge and Gurkaynak (2010)]

The R^2 values, which represent the amount of variation in the data accounted for by the model, are tiny (a value of 1 would be a perfect match to the data). Even more concerning, the authors of the paper compare the DSGE forecast to other forecasting methods and find it cannot improve on a reduced-form VAR forecast and is barely an improvement over a random walk forecast that assumes all fluctuations are completely unpredictable. Forty years of progress in the DSGE research program and the best we can do is add unrealistic frictions and make poor predictions. In the essay linked above, Blanchard concludes that “the state of macro is good.” I can’t imagine what makes him believe that.

What’s Wrong With Modern Macro? Part 10 All Models are Wrong, Except When We Pretend They Are Right

Part 10 in a series of posts on modern macroeconomics. This post deals with a common defense of macroeconomic models: that “all models are wrong.” In particular, I argue that attempts to match the model to the data contradict this principle.


In my discussion of rational expectations I talked about Milton Friedman’s defense of some of the more dubious assumptions of economic models. His argument was that even though real people probably don’t really think the way the agents in our economic models do, that doesn’t mean the model is a bad explanation of their behavior.

Friedman’s argument ties into a broader point about the realism of an economic model. We don’t necessarily want our model to perfectly match every feature that we see in reality. Modeling the complexity we see in real economies is a titanic task. In order to understand, we need to simplify.

Early DSGE models embraced this sentiment. Kydland and Prescott give a summary of their preferred procedure for doing macroeconomic research in a 1991 paper, “The Econometrics of the General Equilibrium Approach to Business Cycles.” They outline five steps. First, a clearly defined research question must be chosen. Given this question, the economist chooses a model economy that is well suited to answering it. Next, the parameters of the model are calibrated based on some criteria (more on this below). Using the model, the researcher can conduct quantitative experiments. Finally, results are reported and discussed.

Throughout their discussion of this procedure, Kydland and Prescott are careful to emphasize that the true model will always remain out of reach, that “all model economies are abstractions and are by definition false.” And if all models are wrong, there is no reason we would expect any model to be able to match all features of the data. An implication of this fact is that we shouldn’t necessarily choose parameters of the model that give us the closest match between model and data. The alternative, which they offered alongside their introduction to the RBC framework, is calibration.

What is calibration? Unfortunately a precise definition doesn’t appear to exist, and its use in academic papers has become somewhat slippery (many times it ends up meaning “I chose these parameters because another paper used the same ones”). But in reading Kydland and Prescott’s explanation, the essential idea is to find values for your parameters from data that is not directly related to the question at hand. In their words, “searching within some parametric class of economies for the one that best fits some set of aggregate time series makes little sense.” Instead, they attempt to choose parameters based on other data. They offer the example of the elasticity of substitution between labor and capital, or between labor and leisure. Both of these parameters can be obtained by looking at studies of human behavior. By measuring how individuals and firms respond to changes in real wages, we can get estimates for these values before looking at the results of the model.
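A stylized example of what this looks like in practice, using round textbook-style numbers of my own choosing rather than Kydland and Prescott’s actual values: the capital share comes from long-run national accounts data and the discount factor from the average real interest rate, all before the model is ever simulated.

```python
# Stylized calibration sketch: parameters come from outside evidence,
# not from fitting the model to the series it is meant to explain.
# Numbers are round, textbook-style values, used only for illustration.

labor_share = 0.64            # long-run labor share of income (approximate)
alpha = 1 - labor_share       # capital share in a Cobb-Douglas production function
real_rate_annual = 0.04       # average annual real interest rate (approximate)
beta = 1 / (1 + real_rate_annual / 4)   # implied quarterly discount factor
delta = 0.025                 # quarterly depreciation, from investment/capital ratios

print(f"alpha = {alpha:.2f}, beta = {beta:.4f}, delta = {delta}")
# These values would then be plugged into the model before any simulation is run.
```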

The spirit of the calibration exercise is, I think, generally correct. I agree that economic models can never hope to capture the intricacies of a real economy and therefore if we don’t see some discrepancy between model and data we should be highly suspicious. Unfortunately, I also don’t see how calibration helps us to solve this problem. Two prominent econometricians, Lars Hansen and James Heckman, take issue with calibration. They argue

It is only under very special circumstances that a micro parameter such as the intertemporal elasticity of substitution or even a marginal propensity to consume out of income can be “plugged into” a representative consumer model to produce an empirically concordant aggregate model…microeconomic technologies can look quite different from their aggregate counterparts.
Hansen and Heckman (1996) – The Empirical Foundations of Calibration

Although Kydland and Prescott and other proponents of the calibration approach wish to draw parameters from sources outside the model, it is impossible to entirely divorce the values from the model itself. The estimation of elasticity of substitution or labor’s share of income or any other parameter of the model is going to depend in large part on the lens through which these parameters are viewed and each model provides a different lens. To think that we can take an estimate from one environment and plug it into another would appear to be too strong an assumption.

There is also another, more fundamental problem with calibration. Even if for some reason we believe we have the “correct” parameterization of our model, how can we judge its success? Thomas Sargent, in a 2005 interview, says that “after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models.” Most people, when confronted with the realization that their model doesn’t match reality, would conclude that their model was wrong. But of course we already knew that. Calibration was the solution, but what does it solve? If we don’t have a good way to test our theories, how can we separate the good from the bad? How can we ever trust the quantitative results of a model in which parameters are likely poorly estimated and which offers no criterion by which it can be rejected? Calibration seems only to claim to be an acceptable response to the idea that “all models are wrong” without actually providing a consistent framework for quantification.

But what is the alternative? Many macroeconomic papers now employ a mix of calibrated parameters and estimated parameters. A common strategy is to use econometric methods to search over all possible parameter values in order to find those that are most likely given the data. But in order to put any significance on the meaning of these parameters, we need to take our model as the truth, or at least close to it. If we estimate the parameters to match the data, we implicitly reject the “all models are wrong” philosophy, which I’m not sure is the right way forward.

And it gets worse. Since estimating a highly nonlinear macroeconomic system is almost always impossible given our current computational constraints, most macroeconomic papers instead linearize the model around a steady state. So even if the model is perfectly specified, we also need the additional assumption that we always stay close enough to this steady state that the linear approximation provides a decent representation of the true model. That assumption often fails. Christopher Carroll shows that a log-linearized version of a simple model gives a misleading picture of consumer behavior when compared to the true nonlinear model, and that even a second-order approximation offers little improvement. This issue is especially pressing in a world where we know a clear nonlinearity is present in the form of the zero lower bound on nominal interest rates, and once again we get “unpleasant properties” when we apply linear methods.
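The basic worry is easy to illustrate with a toy example of my own (this is not Carroll’s consumption model, just a picture of the general issue): approximate CRRA marginal utility around a steady state and watch the first- and second-order approximations deteriorate as you move away from that point.

```python
# Toy illustration of the linearization worry: approximate CRRA marginal utility
# u'(c) = c**(-gamma) around a steady state c_ss = 1 and compare first- and
# second-order Taylor approximations as c moves away from the steady state.

gamma, c_ss = 5.0, 1.0
mu_ss = c_ss ** (-gamma)
d1 = -gamma * c_ss ** (-gamma - 1)                   # first derivative at c_ss
d2 = gamma * (gamma + 1) * c_ss ** (-gamma - 2)      # second derivative at c_ss

for dev in [0.01, 0.10, 0.50]:                        # 1%, 10%, 50% away from steady state
    c = c_ss * (1 + dev)
    exact = c ** (-gamma)
    first = mu_ss + d1 * (c - c_ss)
    second = first + 0.5 * d2 * (c - c_ss) ** 2
    print(f"{dev:>4.0%} away: exact = {exact:.3f}, "
          f"1st-order = {first:.3f}, 2nd-order = {second:.3f}")
```

One percent away from the steady state the approximations are essentially exact; fifty percent away the first-order approximation has the wrong sign and the second-order one overshoots badly, which is why approximations that are fine for small fluctuations can break down in situations like deep recessions or at the zero lower bound.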

Anyone trying to do quantitative macroeconomics faces a rather unappealing choice. In order to make any statement about the magnitudes of economic shocks or policies, we need a method to discipline the numerical behavior of the model. If our ultimate goal is to explain the real world, it makes sense to want to use data to provide this discipline. And yet we know our model is wrong. We know that the data we see was not generated by anything close to the simple models that enable analytic (or numeric) tractability. We want to take the model seriously, but not too seriously. We want to use the data to discipline our model, but we don’t want to reject a model because it doesn’t fit all features of the data. How do we determine the right balance? I’m not sure there is a good answer.

What’s Wrong With Modern Macro? Part 8 Rational Expectations Aren't so Rational

Part 8 in a series of posts on modern macroeconomics. This post continues the criticism of the underlying assumptions of most macroeconomic models. It focuses on the assumption of rational expectations, which was described briefly in Part 2. This post will summarize many of the points I made in a longer survey paper I wrote about expectations in macroeconomics that can be found here.


What economists imagine to be rational forecasting would be considered obviously irrational by anyone in the real world who is minimally rational
Roman Frydman (2013) – Rethinking Economics p. 148

Why Rational Expectations?

Although expectations play a large role in many explanations of business cycles, inflation, and other macroeconomic data, modeling expectations has always been a challenge. Early models of expectations relied on backward-looking adaptive expectations. In other words, people form their expectations by looking at past trends and extrapolating them forward. Such a process might seem plausible, but there is a substantial problem with using a purely adaptive formulation of expectations in an economic model.

For example, consider a firm that needs to choose how much of a good to produce before it learns the price. If it expects the price to be high, it will want to produce a lot, and vice versa. If we assume firms expect today’s price to be the same as tomorrow’s, they will consistently be wrong. When the price is low, they expect a low price and produce a small amount. But the low supply leads to a high price in equilibrium. A smart firm would see its errors and revise its expectations in order to profit. As Muth argued in his original defense of rational expectations, “if the prediction of the theory were substantially better than the expectations of the firms, then there would be opportunities for ‘the insider’ to profit from the knowledge.” In equilibrium, these kinds of profit opportunities would be eliminated by intelligent entrepreneurs.
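This is the classic cobweb logic, and it fits in a few lines of code. The demand and supply parameters below are my own illustrative choices:

```python
# Cobweb sketch: firms choose supply based on a naive forecast (tomorrow's price
# equals today's). Demand: p = a - b*q. Supply: q = c + d*p_expected.
# Parameters are illustrative; the firms' forecast errors are systematic.

a, b = 10.0, 1.0      # demand intercept and slope
c, d = 0.0, 0.9       # supply intercept and slope (response to the expected price)

p = 2.0               # initial price
for t in range(10):
    p_expected = p                      # naive expectation
    q = c + d * p_expected              # production decided before the price is known
    p_new = a - b * q                   # market-clearing price given that supply
    print(f"t={t}: expected {p_expected:5.2f}, actual {p_new:5.2f}, "
          f"error {p_new - p_expected:6.2f}")
    p = p_new
```

The forecast errors alternate in sign period after period, which is exactly the kind of systematic, exploitable mistake Muth had in mind.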

The solution, proposed by Muth and popularized in macro by Lucas, was to simply assume that agents have the same model of the economy as the economist. Under rational expectations, an agent does not need to look at past data to make a forecast. Instead, their expectations are model-based and forward-looking. If an economist can detect consistent errors in an agent’s forecasting, rational expectations assumes that the agents themselves can also detect these errors and correct them.

What Does the Data Tell Us?

The intuitive defense of rational expectations is appealing, and rational expectations certainly marked an improvement over previous methods. But if previous models of expectations gave agents too little cognitive ability, rational expectations models give them far too much. Rational expectations leaves little room for disagreement between agents. Each one needs to instantly jump to the “correct” model of the economy (which happens to correspond to the one created by the economist) and assume every other agent has made the exact same jump. As Thomas Sargent put it, rational expectations leads to a “communism of models. All agents inside the model, the econometrician, and God share the same model.”

The problem, as Mankiw et al. note, is that “the data easily reject this assumption. Anyone who has looked at survey data on expectations, either those of the general public or those of professional forecasters, can attest to the fact that disagreement is substantial.” Branch (2004) and Carroll (2003) offer further evidence that heterogeneous expectations play an important role in forecasting. Another possible explanation for disagreement in forecasts is that agents have access to different information. Even if each agent knew the correct model of the economy, having access to private information could lead to different predictions. Mordecai Kurz has argued forcefully that disagreement does not stem from private information, but rather from different interpretations of the same information.

Experimental evidence also points to heterogeneous expectations. In a series of “learning to forecast” experiments, Cars Hommes and coauthors have shown that when agents are asked to forecast the price of an asset generated by an unknown model, they appear to resort to simple heuristics that often differ substantially from the rational expectations forecast. Surveying the empirical evidence surrounding the assumptions of macroeconomics as a whole, John Duffy concludes “the evidence to date suggests that human subject behavior is often at odds with the standard micro-assumptions of macroeconomic models.”

“As if” Isn’t Enough to Save Rational Expectations

In response to concerns about the assumptions of economics, Milton Friedman offered a powerful defense. He agreed that it was ridiculous to assume that agents make complex calculations when making economic decisions, but claimed that this is not an argument against assuming that they act as if they made those calculations. Famously, he gave the analogy of an expert billiard player. Nobody would ever believe that the player planned all of his shots using mathematical equations to determine the exact placement, and yet a physicist who assumed he did make those calculations could provide an excellent model of his behavior.

The same logic applies in economics. Agents who make forecasts using incorrect models will be driven out as they are outperformed by those with better models until only the rational expectations forecast remains. Except this argument only works if rational expectations is actually the best forecast. As soon as we admit that people use various models to forecast, there is no guarantee it will be. Even if some agents know the correct model, they cannot predict the path of economic variables unless they are sure that others are also using that model. Using rational expectations might be the best strategy when others are also using it, but if others are using incorrect models, it may be optimal for me to use an incorrect model as well. In game theory terms, rational expectations is a Nash Equilibrium, but not a dominant strategy (Guesnerie 2010).

Still, Friedman’s argument implies that we shouldn’t worry too much about the assumptions underlying our model as long as it provides predictions that help us understand the world. Rational expectations models also fail in this regard. Even after including many ad hoc fixes to standard models like wage and price stickiness, investment adjustment costs, inflation indexing, and additional shocks, DSGE models with rational expectations still provide weak forecasting ability, losing out to simpler reduced-form vector autoregressions (more on this in future posts).

Despite these criticisms, we still need an answer to the Lucas Critique. We don’t want agents to ignore information that could help them to improve their forecasts. Rational expectations ensures they do not, but it is far too strong. Weaker assumptions on how agents use the information to learn and improve their forecasting model over time retain many of the desirable properties of rational expectations while dropping some of its less realistic ones. I will return to these formulations in a future post (but read the paper linked in the intro – and here again – if you’re interested).

 

What’s Wrong With Modern Macro? Part 7 The Illusion of Microfoundations II: The Representative Agent

Part 7 in a series of posts on modern macroeconomics. Part 6 began a criticism of the form of “microfoundations” used in DSGE models. This post continues that critique by emphasizing the flaws in using a “representative agent.” This issue has been heavily scrutinized in the past and so this post primarily offers a synthesis of material found in articles from Alan Kirman and Kevin Hoover (1 and 2). 


One of the key selling points of DSGE models is that they are supposedly derived from microfoundations. Since only individuals can act, all aggregates must necessarily be the result of the interactions of these individuals. Understanding the mechanisms that lie behind the aggregates therefore seems essential to understanding the movements in the aggregates themselves. Lucas’s attack on Keynesian economics was motivated by similar logic. We can observe relationships between aggregate variables, but if we don’t understand the individual behavior that drives these relationships, how do we know if they will hold up when the environment changes?

I agree that ignoring individual behavior is an enormous problem for macroeconomics. But DSGE models do little to solve this problem.

Ideally, microfoundations would mean modeling behavior at an individual level. Each individual would then make choices based on the current economic conditions and these choices would aggregate to macroeconomic variables. Unfortunately, including enough agents to make this exercise interesting is challenging in a mathematical model. As a result, “microfoundations” in a DSGE model usually means assuming that the decisions of all individuals can be summarized by the decisions of a single “representative agent.” Coordination between agents, differences in preferences or beliefs, and even the act of trading that is the hallmark of a market economy are eliminated from the discussion entirely. Although we have some vague notion that these activities are going on in the background, the workings of the model are assumed to be represented by the actions of this single agent.

So our microfoundations actually end up looking a lot closer to an analysis of aggregates than an analysis of true individual behavior. As Kevin Hoover writes, they represent “only a simulacrum of microeconomics, since no agent in the economy really faces the decision problem they represent. Seen that way, representative-agent models are macroeconomic, not microfoundational, models, although macroeconomic models that are formulated subject to an arbitrary set of criteria.” Hoover pushes back against the defense that representative agent models are merely a step towards truly microfounded models, arguing that not only would full microfoundations be infeasible but also that many economists do take the representative agent seriously on its own terms, using it both for quantitative predictions and policy advice.

But is the use of the representative agent really a problem? Even if it fails on its promise to deliver true “microfoundations,” isn’t it still an improvement over neglecting optimizing behavior entirely? Possibly, but using a representative agent offers only a superficial manifestation of individual decision-making, opening the door for misinterpretation. Using a representative agent assumes that the decisions of one agent at a macro level would be made in the same way as the decisions of millions of agents at a micro level. The theoretical basis for this assumption is weak at best.

In a survey of the representative agent approach, Alan Kirman describes many of the problems that arise when many agents are aggregated into a single representative. First, he presents the theoretical results of Sonnenschein (1972), Debreu (1974), and Mantel (1976), which show that even with strong assumptions on individual preferences, the equilibrium that results from adding up individual behavior is not necessarily stable or unique.

The problem runs even deeper. Even if we assume aggregation results in a nice stable equilibrium, worrying results begin to arise as soon as we start to do anything with that equilibrium. One of the primary reasons for developing a model in the first place is to see how it reacts to policy changes or other shocks. Using a representative agent to conduct such an analysis implicitly assumes that the new aggregate equilibrium will still correspond to decisions of individuals. Nothing guarantees that it will. Kirman gives a simple example of a two-person economy where the representative agent’s choice makes each individual worse off. The Lucas Critique then applies here just as strongly as it does for old Keynesian models. Despite the veneer of optimization and rational choice, a representative agent model still abstracts from individual behavior in potentially harmful ways.

Of course, macroeconomists have not entirely ignored these criticisms, and models with heterogeneous agents have become increasingly popular in recent work. However, keeping track of more than one agent makes it nearly impossible to achieve useful mathematical results. The general process for using heterogeneous agents in a DSGE model, then, is to first prove that these agents can be aggregated and summarized by a set of aggregate equations. Although beginning from heterogeneity and deriving aggregation explicitly helps to ensure that the problems outlined above do not arise, it still imposes severe restrictions on the types of heterogeneity allowed. It would be an extraordinary coincidence if the restrictions that enable mathematical tractability also happened to be the ones relevant for understanding reality.

We are left with two choices. Drop microfoundations or drop DSGE. The current DSGE framework only offers an illusion of microfoundations. It introduces optimizing behavior at an aggregate level, but has difficulty capturing many of the actions essential to the workings of the market economy at a micro level. It is not a first step to discovering a way to model true microfoundations because it is not a tool well-suited to analyzing the behavior of more than one person at a time. Future posts will explore some models that are.

 

 

 

Kevin Malone Economics

Don’t Be Like Kevin (Image Source: Wikimedia Commons)

In one of my favorite episodes of the TV show The Office (A Benihana Christmas), dim-witted accountant Kevin Malone faces a choice between two competing office Christmas parties: the traditional Christmas party thrown by Angela, or Pam and Karen’s exciting new party. After weighing various factors (double fudge brownies…Angela), Kevin decides, “I think I’ll go to Angela’s party, because that’s the party I know.” I can’t help but see parallels between Kevin’s reasoning and some of the recent discussions about the role of DSGE models in macroeconomics. It’s understandable. Many economists have poured years of their lives into this research agenda. Change is hard. But sticking with something simply because that’s the way it has always been done is not, in my opinion, a good way to approach research.

I was driven to write this post after reading a recent post by Roger Farmer, which responds to this article by Steve Keen, which is itself responding to Olivier Blanchard’s recent comments on the state of macroeconomics. Lost yet? Luckily you don’t really need to understand the entire flow of the argument to get my point. Basically, Keen argues that the assumption of economic models that the system is always in equilibrium is poorly supported. He points to physics for comparison, where Edward Lorenz showed that the dynamics underlying weather systems were not random, but chaotic – fully deterministic, but still impossible to predict. Although his model had equilibria, they were inherently unstable, so the system continuously fluctuated between them in an unpredictable fashion.

Keen’s point is that economics could easily be a chaotic system as well. Is there any doubt that the interactions between millions of people that make up the economic system are at least as complex as the weather? Is it possible that, like the weather, the economy is never actually in equilibrium, but rather groping towards it (or, in the case of multiple equilibria, towards one of them)? Isn’t there at least some value in looking for alternatives to the DSGE paradigm?
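The flavor of that argument is easy to demonstrate with the simplest chaotic system I know of, the logistic map. It is a standard textbook example rather than anything specific to economics, but it shows how a completely deterministic rule can still make long-run prediction hopeless:

```python
# The logistic map: x_{t+1} = r * x_t * (1 - x_t). Fully deterministic, yet for
# r = 4 two paths starting a hair apart become completely different within a few
# dozen steps: the sensitivity to initial conditions that defines chaos.

r = 4.0
x, y = 0.4, 0.4 + 1e-8        # two starting points differing by one part in 10^8

for t in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if t % 10 == 0:
        print(f"t={t:2d}: path 1 = {x:.5f}, path 2 = {y:.5f}, gap = {abs(x - y):.5f}")
```

Two paths that start one part in a hundred million apart are indistinguishable for a while and then completely unrelated, even though no randomness ever enters the system.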

Roger begins his reply to Keen by claiming, “we tried that 35 years ago and rejected it. Here’s why.” I was curious to read about why it was rejected, but he goes on to say that the results of these attempts to examine whether there is complexity and chaos in economics were that “we have no way of knowing given current data limitations.” Echoing this point, a survey of some of these tests by Barnett and Serletis concludes

We do not have the slightest idea of whether or not asset prices exhibit chaotic nonlinear dynamics produced from the nonlinear structure of the economy…While there have been many published tests for chaotic nonlinear dynamics, little agreement exists among economists about the correct conclusions.
Barnett and Serletis (2000) – Martingales, nonlinearity, and chaos

That doesn’t sound like rejection to me. We have two competing theories without a good way of determining which is more correct. So why should we rigidly stick to one? Best I can tell, the only answer is Kevin Malone Economics™. DSGE models are the way it has always been done, so unless there is some spectacular new evidence that an alternative does better, let’s just stay with what we know. I don’t buy it. The more sensible way forward in my opinion is to cultivate both approaches, to build up models using DSGE as well as alternatives until we have some way of choosing which is better. Some economists have taken this path and begun to explore alternatives, but they remain on the fringe and are largely ignored (I will have a post on some of these soon, but here is a preview). I think this is a huge mistake.

Since Roger is one of my favorite economists in large part because he is not afraid to challenge the orthodoxy, I was a bit surprised to read this argument from him. His entire career has been devoted to models with multiple equilibria and sunspot shocks. As far as I know (and I don’t know much so I could very well be wrong), these features are just as difficult to confirm empirically as is the existence of chaos. By spending an entire career developing his model, he has gotten it to a place where it can shed light on real issues (see his excellent new book, Prosperity for All), but its acceptance within the profession is still low. In his analysis of alternatives to standard DSGE models, Michael De Vroey writes:

Multiple equilibria is a great idea that many would like to adopt were it not for the daunting character of its implementation. Farmer’s work attests to this. Nonetheless, it is hard to resist the judgement that, for all its panache, it is based on a few, hard to swallow, coups de force…I regard Farmer’s work as the fruit of a solo exercise attesting to both the inventiveness of its performer and the plasticity of the neoclassical toolbox. Yet in the Preface of his book [How the Economy Works], he expressed a bigger ambition, “to overturn a way of thinking that has been established among macroeconomists for twenty years.” To this end, he will need to gather a following of economists working on enriching his model
De Vroey (2016) – A History of Macroeconomics from Keynes to Lucas and Beyond

In other words, De Vroey’s assessment of Roger’s work is quite similar to Roger’s own assessment of non-DSGE models: it’s not a bad idea and there’s possibly some potential there, but it’s not quite there yet. Just as I doubt Roger is ready to tear up his work and jump back in with traditional models (and he shouldn’t), neither should those looking for alternatives to DSGE give up simply because the attempts to this point haven’t found anything revolutionary.

I’m not saying all economists should abandon DSGE models. Maybe they are a simple yet still adequate way of looking at the world. But maybe they’re not. Maybe there are alternatives that would provide more insight into the way a complex economy works. Attempts to find those alternatives should not be met with reflexive skepticism. They shouldn’t have to drastically outperform current models just to be considered (especially since current models aren’t doing so hot anyway). Of course no young alternative will immediately stand up to a research program that has gone through 40 years of revision and improvement (although it’s not clear how much improvement there has really been). The only way to find out whether there are alternatives worth pursuing is to welcome and encourage researchers looking to expand the techniques of macroeconomics beyond equilibrium and beyond DSGE. If even a fraction of the brainpower currently devoted to DSGE models were shifted to the search for different methods, I am confident that a new framework would flourish and perhaps come to stand beside, or even replace, DSGE models as the primary tool of macroeconomics.

At the end of the episode mentioned above, Kevin (along with everybody else in the office) ends up going to Pam and Karen’s party, which turns out to be way better than Angela’s. I can only hope macroeconomics continues to mirror that plot.

What’s Wrong With Modern Macro? Part 6 The Illusion of Microfoundations I: The Aggregate Production Function

Part 6 in a series of posts on modern macroeconomics. Part 4 noted the issues with using TFP as a measure of technology shocks. Part 5 criticized the use of the HP filter. Although concerning, neither of these problems is necessarily insoluble. With a better measure of technology and a better filtering method, the core of the RBC model would survive. This post begins to tear down that core, starting with the aggregate production function.


In the light of the negative conclusions derived from the Cambridge debates and from the aggregation literature, one cannot help asking why [neoclassical macroeconomists] continue using aggregate production functions
Felipe and Fisher (2003) – Aggregation in Production Functions: What Applied Economists Should Know

Remember that one of the primary reasons DSGE models were able to emerge as the dominant macroeconomic framework was their supposed derivation from “microfoundations.” They aimed to explain aggregate phenomena, but stressed that these aggregates could only come from optimizing behavior at a micro level. The spirit of this idea seems like a step in the right direction.

In its most common implementations, however, “microfoundations” almost always fails to capture this spirit. The problem with trying to generate aggregate implications from individual decision-making is that keeping track of decisions from a diverse group of agents quickly becomes computationally and mathematically unmanageable. This reality led macroeconomists to make assumptions that allowed for easy aggregation. In particular, the millions of decision-makers throughout the economy were collapsed into a single representative agent that owns a single representative firm. Here I want to focus on the firm side. A future post will deal with the problems of the representative agent.

An Aggregate Production Function

In the real world, producing any good is complicated. It requires not only an entrepreneur with an idea for a new good, but also the ability to implement that idea. It requires a proper assessment of the resources needed, the organization of a firm, the hiring of good workers and managers, the securing of investors and capital, a sound estimate of consumer demand and of competition from other firms, and countless other factors.

In macro models, production is simple. In many models, a single firm produces a single good, using labor and capital as its only inputs. Of course we cannot expect the model to match every feature of reality. It is supposed to simplify; that’s what makes it a model. But we also need to remember Einstein’s famous dictum that everything should be made as simple as possible, but no simpler. Unfortunately, I think we’ve gone well past that point.
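
To be concrete, the baseline specification in most of these models (my statement of the textbook form, not any one paper’s) is a single Cobb-Douglas aggregate production function:

$$Y_t = A_t K_t^{\alpha} L_t^{1-\alpha}, \qquad 0 < \alpha < 1$$

where $Y_t$ is aggregate output, $K_t$ the aggregate capital stock, $L_t$ total hours worked, $A_t$ total factor productivity, and $\alpha$ capital’s share of income. Every question about what the economy produces, and how, is packed into those two inputs and one exponent.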

Can We Really Measure Capital?

In the broadest possible categorization, productive inputs are usually placed into three groups: land, labor, and capital. Although both land and labor vary widely in quality, which makes aggregation difficult, at least they have natural units. Adding up all of the usable land area and all of the hours worked gives us at least a crude measure of the quantity of land and labor inputs.

But what can we use for capital? Capital goods are far more diverse than land or labor, varying in size, mobility, durability, and thousands of other dimensions. There is no obvious unit that can combine buildings, machines, desks, computers, and millions of other specialized pieces of equipment. The easiest choice is to use the value of the goods, but what is their value? The price at which they were bought? An estimate of the value of the production they will generate in the future? And in what units should that value be expressed? Dollars? The labor hours needed to produce the good?

None of these seem like good options. As soon as we move to value, we are talking about units that depend on the structure of the economy itself. An hour of labor is an hour of labor no matter what the economy looks like. With capital it is more complicated. The same computer has a completely different value in 1916 than in 2016, no matter which concept of value is employed. That value is presumably related to its productive capacity, but the relationship is far from clear. If the value of a firm’s capital stock has increased, can it actually produce more than before?

This question was addressed in the 1950s and 60s by Joan Robinson. Here’s how she summed up the problem:

Moreover, the production function has been a powerful instrument of miseducation. The student of economic theory is taught to write O = f (L, C) where L is a quantity of labour, C a quantity of capital and O a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labour; he is told something about the index-number problem involved in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units C is measured. Before ever he does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.
Joan Robinson (1953) – The Production Function and the Theory of Capital

Sixty years later we’ve changed the O to a Y and the C to a K, but little else has changed. People have thought hard about how to measure capital and deal with these issues (here is a 250 page document on the OECD’s methods, for example), but the problem remains. We still take a diverse collection of capital goods and try to fit them all under a common label. The so-called “Cambridge Capital Controversy” was never truly resolved; neoclassical economists simply pushed on undeterred. For a good summary of the debate and its (lack of) resolution, see this relatively non-technical paper.

What Assumptions Allow for Aggregation?

Even if we assume there is some unit in which capital could be measured, we still face problems when trying to use an aggregate production function. One of the leading researchers on the conditions that permit aggregation in production is Franklin Fisher. Giving the capital aggregation issue a more rigorous treatment, Fisher shows that capital can be aggregated across firms only if every firm shares the same constant-returns-to-scale production function, with firms differing at most by a capital-augmenting efficiency coefficient. If you’re not an economist, that condition probably doesn’t mean much; a rough statement in symbols follows, and the thing to know is that it is incredibly restrictive.
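
Roughly, and in my own notation rather than Fisher’s, the requirement is that every firm $i$’s production function take the form

$$Y_i = F(b_i K_i,\; L_i)$$

with the same function $F$ for every firm, so that firms can differ only through the capital-efficiency coefficient $b_i$. Allow any richer heterogeneity (different technologies, different substitution possibilities between capital and labor) and the aggregate $K$ that appears in the textbook production function is no longer well defined.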

The problem doesn’t get much better when we move from capital to aggregating labor and output. Fisher shows that aggregation of these variables is only possible when there is no specialization (everybody can do everything) and every firm is able to produce every good (so the mix of goods produced can adjust). Other authors have derived different conditions for aggregation, but none of them appear to be any less restrictive.

Do any of these restrictions matter? Even if aggregation is not strictly possible, as long as the approximation were close enough there would be no real criticism. Fisher (with co-author Jesus Felipe) surveys the many arguments against aggregate production functions, addresses counterarguments of exactly this kind, and ultimately concludes

The aggregation problem and its consequences, and the impossibility of testing empirically the aggregate production function…are substantially more serious than a mere anomaly. Macroeconomists should pause before continuing to do applied work with no sound foundation and dedicate some time to studying other approaches to value, distribution, employment, growth, technical progress etc., in order to understand which questions can legitimately be posed to the empirical aggregate data.
Felipe and Fisher (2003) – Aggregation in Production Functions: What Applied Economists Should Know

As far as I know, these concerns have not been addressed in a serious way. The aggregate production function continues to be a cornerstone of macroeconomic models. If it is seriously flawed, almost all of the work done in the last forty years becomes suspect.

But don’t worry, the worst is still to come.

What’s Wrong With Modern Macro? Part 5 Filtering Away All Our Problems

Part 5 in a series of posts on modern macroeconomics. In this post, I will describe some of the issues with the Hodrick-Prescott (HP) filter, a tool used by many macro models to isolate fluctuations at business cycle frequencies.


A plot of real GDP in the United States since 1947 looks like this

Real GDP in the United States since 1947. Data from FRED

Although some recessions are easily visible (2009, for example), the business cycle is not especially easy to see. The long-run growth trend dominates the higher-frequency business cycle fluctuations. The trend is important, but if we want to isolate the business cycle fluctuations we need some way to remove it. The HP filter is one method of doing that. Unlike the trends in my last post, which simply fit a line to the data, the HP filter allows the trend to change over time. The exact equation is a bit involved (here’s the link to the Wikipedia page if you’d like more detail), but I reproduce the filter’s objective below the next figure for reference. Plotting the HP trend against actual GDP looks like this

Real GDP against HP trend (smoothing parameter = 1600)
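
As promised, here is the filter’s objective. Given a series $y_t$ (usually in logs), the HP filter chooses the trend $\tau_t$ to solve

$$\min_{\{\tau_t\}} \; \sum_{t=1}^{T} (y_t - \tau_t)^2 \;+\; \lambda \sum_{t=2}^{T-1} \left[(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\right]^2$$

The first term penalizes deviations of the trend from the data; the second penalizes changes in the trend’s growth rate. The smoothing parameter $\lambda$ governs the trade-off, with $\lambda = 1600$ the convention for quarterly data (and the value used in the figure above).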

By removing this trend, we can isolate the deviations from it. Usually we also take the log of the data first so that deviations can be interpreted as percentages. Doing that, we end up with

Deviations from HP trend
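
If you want to reproduce something like these charts yourself, here is a minimal sketch. It assumes you have pandas_datareader (to pull the GDPC1 real GDP series from FRED) and statsmodels (which includes an HP filter implementation) installed; the series choice and start date are mine, not necessarily exactly what lies behind the figures above.

```python
# Minimal sketch: pull real GDP from FRED, HP-filter the log of the series,
# and plot percentage deviations from the estimated trend.
import numpy as np
import matplotlib.pyplot as plt
import pandas_datareader.data as web
from statsmodels.tsa.filters.hp_filter import hpfilter

# Quarterly real GDP (chained dollars), from 1947 onward
gdp = web.DataReader("GDPC1", "fred", start="1947-01-01")["GDPC1"]
log_gdp = np.log(gdp)

# lamb=1600 is the conventional smoothing parameter for quarterly data
cycle, trend = hpfilter(log_gdp, lamb=1600)

# Log deviations times 100 are approximately percentage deviations from trend
(100 * cycle).plot(title="Deviations from HP trend (%)")
plt.axhline(0, color="black", linewidth=0.5)
plt.show()
```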

The last picture is what the RBC model (and most other DSGE models) tries to explain. By eliminating the trend, the HP filter focuses exclusively on short-term fluctuations. That shift in focus may be an interesting exercise, but it throws away much of what makes business cycles important and interesting. Look at the HP trend in recent years. Although we do see a sharp drop marking the 2008-09 recession, the trend quickly adjusts, so the slow recovery barely shows up in the HP-filtered data at all. The Great Recession is actually one of the smaller movements by this measure. And what causes the change in the trend? The model has no answer. The RBC model and the other DSGE models that explain HP-filtered data cannot hope to explain long periods of slow growth, because they begin by filtering them away.

Let’s go back one more time to these graphs

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Notice that these graphs show the fit of the model to HP filtered data. Here’s what they look like when the filter is removed

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Not so good. And it gets worse. Remember from the last post that TFP itself was highly correlated with each of these variables. If we remove the TFP trend, the picture changes to

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Yeah. So much for that great fit. Roger Farmer describes the deceit of the RBC model brilliantly in a blog post called Real Business Cycle and the High School Olympics. He argues that when early RBC modelers noticed a conflict between their model and the data, they didn’t change the model. They changed the data. They couldn’t clear the Olympic bar of explaining business cycles, so they lowered the bar and explained only the “wiggles.” But, as Farmer contends, the important task is explaining exactly the trends that the HP filter assumes away.

Maybe the most convincing sign that something is wrong with this way of measuring business cycles is that the implied cost of business cycles is tiny. Based on a calculation by Lucas, Ayse Imrohoroglu explains that under the assumptions of the model, an individual would need to be given only $28.96 per year to be fully compensated for the risk of business cycles. In other words, completely eliminating recessions is worth about as much as a couple of decent meals. To anyone who has lived in a real economy that number is obviously ridiculous, but when business cycles are defined as small deviations from a moving trend, there isn’t much to be gained from eliminating the wiggles.
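
To see where numbers like that come from, here is the back-of-the-envelope version of Lucas’s calculation (my rough restatement, not Imrohoroglu’s exact exercise). Ask what fraction $\lambda$ of consumption a household with constant relative risk aversion $\gamma$ would give up to eliminate fluctuations of log consumption around trend with standard deviation $\sigma_c$. The answer is approximately

$$\lambda \approx \tfrac{1}{2}\, \gamma\, \sigma_c^2$$

With HP-filtered consumption volatility on the order of one or two percent and modest risk aversion, $\lambda$ is a tiny fraction of one percent of annual consumption, which is how you end up with compensation measured in a few tens of dollars per year at most. The smallness is baked in the moment you define the business cycle as nothing more than the filtered wiggles.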

There are more technical reasons to be wary of the HP filter that I don’t want to get into here. A recent paper called “Why You Should Never Use the Hodrick-Prescott Filter” by James Hamilton, a prominent econometrician, goes into many of the problems with the HP filter in detail. He opens by saying “The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process.” If you don’t trust me, at least trust him.

TFP doesn’t measure productivity. HP filtered data doesn’t capture business cycles. So the RBC model is 0/2. It doesn’t get much better from here.