What’s Wrong With Modern Macro? Part 11: Building on a Broken Foundation

Part 11 in a series of posts on modern macroeconomics. The discussion to this point has focused on the early contributions to DSGE modeling by Kydland and Prescott. Obviously, in the last 30 years, economic research has advanced beyond their work. In this post I will argue that it has advanced in the wrong direction.


In Part 3 of this series I outlined the Real Business Cycle (RBC) model that set macroeconomics on its current path. The rest of the posts in this series have offered a variety of reasons why the RBC framework fails to provide an adequate model of the economy. An easy response to this criticism is that the RBC model was never intended to be a final answer to macroeconomic modeling. Even if Kydland and Prescott’s model misses important features of a real economy, doesn’t it still provide a foundation to build upon? Since the 1980s, the primary agenda of macroeconomic research has been to build on this base: to relax some of the more unrealistic assumptions that drove the original RBC model and to add new assumptions to better match economic patterns seen in the data.

New Keynesian Models

Part of the appeal of the RBC model was its simplicity. Business cycles were driven by a single shock: exogenous changes in total factor productivity (TFP), attributed to changes in technology. However, the real world is anything but simple. In many ways, the advances in the methods used by macroeconomists in the 1980s came only at the expense of a rich understanding of the complex interactions of a market economy. Compared to Keynes’s analysis 50 years earlier, the DSGE models of the 1980s seem a bit limited.

What Keynes lacked was clarity. Eighty years after the publication of the General Theory, economists still debate “what Keynes really meant.” For all of their faults, DSGE models at the very least guarantee clear exposition (assuming you understand the math) and internal consistency. New Keynesian (NK) models attempt to combine these advantages with some of the insights of Keynes.

The simplest distinction between an RBC model and an NK model is the assumption made about the flexibility of prices. In early DSGE models, prices (including wages) were assumed to be fully flexible, constantly adjusting to ensure that all markets cleared. The problem (or benefit, depending on your point of view) with perfectly flexible prices is that they leave little room for policy to have any impact. Combining rational expectations with flexible prices means monetary shocks cannot have any real effects: prices and wages simply adjust immediately to return the economy to the same allocation of real output.

To reintroduce a role for policy, New Keynesians instead assume that prices are sticky. Now, when a monetary shock hits the economy, firms are unable to change their prices, so they respond to the higher demand by increasing real output. As research in the New Keynesian program developed, more and more was added to the simple RBC base. The first RBC model was entirely frictionless, included only one shock (to TFP), and could be summarized in just a few equations. A more recent macro model, the widely cited Smets and Wouters (2007), adds frictions such as investment adjustment costs, variable capital utilization, habit formation, and inflation indexing, along with seven structural shocks, leading to dozens of equilibrium equations.

Building on a Broken Foundation

New Keynesian economists have their heart in the right place. It’s absolutely true that the RBC model produces results at odds with much of reality. Attempting to move the model closer to the data is certainly an admirable goal. But, as I have argued in the first 10 posts of this series, the RBC model fails not so much because it abstracts from the features we see in real economies, but because the assumptions it makes are not well suited to explain economic interactions in the first place.

New Keynesian models do almost nothing to address the issues I have raised. They still rely on aggregating firms and consumers into representative entities that offer only the illusion of microfoundations rather than truly deriving aggregate quantities from realistic human behavior. They still assume rational expectations, meaning every agent believes that the New Keynesian model holds and knows that every other agent also believes it holds when they make their decisions. And they still make quantitative policy proposals by assuming a planner that knows the entire economic environment.

One potential improvement of NK models over their RBC counterparts is less reliance on (potentially misspecified) technology shocks as the primary driver of business cycle fluctuations. The “improvement,” however, comes only by introducing other equally suspect shocks. A paper by V.V. Chari, Patrick Kehoe, and Ellen McGrattan (2009) argues that four of the “structural shocks” in the Smets-Wouters setup do not have a clear interpretation. In other words, although the shocks help the model match the data, the reason they are able to do so is not entirely clear. For example, one of the shocks in the model is a wage markup over the marginal productivity of a worker. The authors argue that this could be caused either by increased bargaining power (through unions, etc.), which would call for government intervention to break up unions, or by an increased preference for leisure, which is an efficient outcome and requires no policy response. The Smets-Wouters model can say nothing about which interpretation is more accurate and is therefore unhelpful in prescribing policy to improve the economy.

Questionable Microfoundations

I mentioned above that the main goal of the New Keynesian model was to revisit the features Keynes talked about decades ago using modern methods. In today’s macro landscape, microfoundations have become essential. Keynes spoke in terms of broad aggregates, but without understanding how those aggregates are derived from optimizing behavior at a micro level, they are vulnerable to the Lucas critique. But although the NK model attempts to put in more realistic microfoundations, many of its assumptions seem at odds with the ways in which individuals actually act.

Take the assumption of sticky prices. It’s probably a valid assumption to include in an economic model. With no frictions, an optimizing firm would want to change its price every time new economic data became available. Obviously that doesn’t happen; many goods stay at a constant price for months or years. So of course, if we want to match the data, we are going to need some reason for firms to hold prices constant. The most common way to achieve this result is called “Calvo pricing,” in honor of a paper by Guillermo Calvo. And how do we ensure that some prices in the economy remain sticky? We simply assign each firm some probability that it will be unable to change its price. In other words, we assume exactly the result we want to get.
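To see just how little is behind that assumption, here is a minimal sketch of Calvo pricing in simulation form. It is a toy illustration, not any published calibration; the stickiness parameter theta, the drifting target price, and the number of firms are all made up:

import numpy as np

rng = np.random.default_rng(0)

theta = 0.75        # probability a given firm CANNOT change its price this period
n_firms = 10_000
n_periods = 40

prices = np.zeros(n_firms)   # log prices, all starting at the same level
optimal = 0.0                # log price every firm would choose if free to adjust

for t in range(n_periods):
    optimal += 0.02                        # the frictionless target drifts up 2% a period
    stuck = rng.random(n_firms) < theta    # draw which firms are stuck, by assumption
    prices[~stuck] = optimal               # only the lucky fraction (1 - theta) resets

# The average posted price lags the frictionless target: stickiness by construction.
print(f"average price: {prices.mean():.2f}, frictionless target: {optimal:.2f}")

Nothing pins down theta except the modeler’s (or the estimation routine’s) need to make prices exactly as sticky as the data require.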

Of course, economists have recognized that Calvo pricing isn’t really a satisfactory final answer, so they have worked to microfound Calvo’s idea through an optimization problem. The most common method, proposed by Rotemberg in 1982, is called the menu cost model. In this model of price setting, a firm must pay some cost every time it changes its prices. This cost ensures that a firm will only change its price when it is especially profitable to do so; small price changes every day no longer make sense. The term “menu cost” comes from the example of a firm needing to print new menus every time it changes one of its prices, but it can be applied more broadly to encompass any cost a firm might incur by changing a price. Price changes could have an adverse effect on consumers, causing some to leave the store; they could require more advanced computer systems or more complex interactions with suppliers; they could even set off competitive games between firms.
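To get a feel for that logic, here is a deliberately stripped-down sketch of a menu-cost pricing rule. It is not the full model (which solves a dynamic optimization problem); the firm below just compares a one-period profit loss against a fixed cost of changing its price, and every number is invented for illustration:

import numpy as np

rng = np.random.default_rng(1)

menu_cost = 0.05    # fixed cost paid each time the posted price is changed
phi = 2.0           # how costly it is, per period, to be away from the optimal price
n_periods = 200

optimal = 0.0       # log of the frictionless optimal price, buffeted by shocks
posted = 0.0        # the firm's posted log price
changes = 0

for t in range(n_periods):
    optimal += rng.normal(0.005, 0.02)       # demand and cost shocks move the target
    loss = phi * (posted - optimal) ** 2     # profit lost this period from a stale price
    if loss > menu_cost:                     # adjust only when it beats paying the cost
        posted = optimal
        changes += 1

print(f"price changed in {changes} of {n_periods} periods")

Even this toy version delivers infrequent, lumpy price changes, which is the qualitative pattern the story is meant to capture. Notice, though, that a single number, menu_cost, is standing in for everything listed above.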

But by wrapping all of these into one “menu cost,” am I really describing a firm’s problem? A “true” microfoundation would explicitly model each of these features at the firm level, with different speeds of price adjustment and different reasons for firms to hold prices steady. What do we gain by adding optimization to our models if that optimization has little to do with the problem a real firm actually faces? Are we any better off with “fake” microfoundations than we were with statistical aggregates? I can’t help but think that the “microfoundations” in modern macro models are simply fancy ways of finagling optimization problems until we get the result we want. Interesting mathematical exercise? Maybe. Improving our understanding of the economy? Unclear.

Other additions to the NK model that purport to move it closer to reality are also suspect. As Olivier Blanchard describes in his 2008 essay, “The State of Macro”:

Reconciling the theory with the data has led to a lot of unconvincing reverse engineering. External habit formation—that is a specification of utility where utility depends not on consumption, but on consumption relative to lagged aggregate consumption—has been introduced to explain the slow adjustment of consumption. Convex costs of changing investment, rather than the more standard and more plausible convex costs of investment, have been introduced to explain the rich dynamics of investment. Backward indexation of prices, an assumption which, as far as I know, is simply factually wrong, has been introduced to explain the dynamics of inflation. And, because, once they are introduced, these assumptions can then be blamed on others, they have often become standard, passed on from model to model with little discussion.

In other words, we want to move the model closer to the data, but we do so by offering features that bear little resemblance to actual human behavior. And if the models we write do not actually describe individual behavior at a micro level, can we really still call them “microfounded”?
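To make Blanchard’s first example concrete: external habit formation typically means that a household’s utility depends not on its own consumption C_t but on something like C_t - h*C_bar_{t-1}, where C_bar_{t-1} is last period’s aggregate consumption and h is a parameter between 0 and 1. Since each household takes aggregate consumption as given, nobody optimizes over the habit term, and consumption adjusts sluggishly more or less by construction. The slow adjustment is wired into preferences rather than derived from anything we actually know about how people choose.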

And They Still Don’t Match the Data

In Part 10 of this series, I discussed the idea that we might not want our models to match the data. That view is not shared by most of the rest of the profession, however, so let’s judge these models on their own terms. Maybe the microfoundations are unrealistic and the shocks misspecified, but, as Friedman argued, who cares about assumptions as long as we get a decent result? New Keynesian models also fail in this regard.

Forecasting in general is very difficult for any economic model, but one would expect that after 40 years of toying with these models, getting closer and closer to an accurate explanation of macroeconomic phenomena, we would be able to make somewhat decent predictions. What good is explaining the past if it can’t help inform our understanding of the future? Here’s what forecasts of GDP growth look like using a fully featured Smets-Wouters NK model:

[Figure: GDP growth forecasts, including the Smets-Wouters DSGE model’s predictions, compared to the data. Source: Gurkaynak et al. (2013)]

Forecast horizons are quarterly and we want to focus on the DSGE prediction (light blue) against the data (red). One quarter ahead, the model actually does a decent job (probably because of the persistence of growth over time), but its predictive power is quickly lost as we expand the horizon. The table below shows how poor this fit actually is.

[Table: R^2 values of the DSGE model forecasts. Source: Edge and Gurkaynak (2010)]

The R^2 values, which represent the share of variation in the data accounted for by the model’s forecasts, are tiny (a value of 1 would be a perfect match to the data). Even more concerning, the authors of the paper compare the DSGE forecast to other forecasting methods and find that it cannot improve on a reduced-form VAR forecast and is barely an improvement over a random walk forecast that assumes all fluctuations are completely unpredictable. Forty years of progress in the DSGE research program, and the best we can do is add unrealistic frictions and make poor predictions. In the essay quoted above, Blanchard concludes that “the state of macro is good.” I can’t imagine what makes him believe that.
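As a closing aside, here is a stylized sketch of how this kind of forecasting horse race can be scored. The “data” below are simulated rather than actual GDP growth, and the competitors are a no-change rule, a constant (historical average) rule, and a one-line autoregression rather than estimated DSGE and VAR models; the papers’ procedures differ in their details, but the out-of-sample R^2 idea is the same:

import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for quarterly GDP growth: mildly persistent, mostly noise.
n = 200
rho, sigma = 0.4, 0.5
growth = np.zeros(n)
for t in range(1, n):
    growth[t] = rho * growth[t - 1] + rng.normal(0.0, sigma)

# One-quarter-ahead forecasts over the second half of the sample.
actual = growth[100:]
no_change = growth[99:-1]                               # growth repeats last quarter's value
constant = np.full_like(actual, growth[:100].mean())    # growth equals its past average
slope, intercept = np.polyfit(growth[:99], growth[1:100], 1)
ar1 = intercept + slope * growth[99:-1]                 # a one-line reduced-form rule

def out_of_sample_r2(actual, forecast):
    """Share of the variation in outcomes captured by the forecasts (can be negative)."""
    ss_res = np.sum((actual - forecast) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

for name, forecast in [("no change", no_change), ("constant", constant), ("AR(1)", ar1)]:
    print(f"{name:>9} forecast R^2: {out_of_sample_r2(actual, forecast):5.2f}")

In this toy setting even the one-line autoregression clears the naive benchmarks; the sobering result in the papers cited above is that a fully specified DSGE model struggles to do much better than that.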
