Imagine being a chemist in the Middle Ages. Your colleagues are all working on fanciful projects like developing a philosopher’s stone to grant immortality or turning lead into gold. You continually point out to them that all of their attempts have failed. In fact, based on your own research, you have strong reason to believe that the path they are taking is heading in completely the wrong direction. They aren’t just failing because they haven’t found the right formula to transform metal into gold, but because the task they have set out to do cannot be done. You urge them to abandon their efforts and focus on other areas, but they don’t seem to listen. Instead, their response is: “Well, sure, we haven’t been able to turn lead into gold yet, but can you do any better? I don’t see any gold in your hands either.”
You can probably already see where I’m going with this metaphor. Replace chemist with economist and turning lead into gold with DSGE macroeconomics and you’ll have a good sense of what criticizing macro feels like. I don’t think it’s an exaggeration to say that every critique of macro is invariably going to be met by some form of the same counterargument. Of course the model isn’t perfect, but we’re doing our best. We’re continually adding the features to the benchmark model that you claim we are missing. Heterogeneity, financial markets, even behavioral assumptions. They’re coming. And if you’re so smart, come up with something better. It takes a model to beat a model.
I won’t deny there is some truth to this argument. Just because a model is unrealistic, just because it’s missing some feature of reality, doesn’t mean it isn’t useful. It doesn’t mean it isn’t a reasonable first step on the path to something better. The financial crisis didn’t prove that the methods of macroeconomics are wrong. Claims that nobody in the macro profession is asking interesting questions or trying to implement interesting ideas are demonstrably false. We certainly don’t want criticisms of macro to lead to less study of the topic. The questions are too important. The potential gains from solving problems like business cycles or economic growth too great. And if we don’t have anything better, why not keep pushing forward? Why not take the DSGE apparatus as far as we possibly can?
But what I, and many others, have tried to show is that the methods of macroeconomics are severely constrained by assumptions with questionable theoretical or empirical backing. The foundation that modern macroeconomics is building on is too shaky to support the kinds of improvements we hope it will eventually make. Now, you can certainly argue that I am wrong and that DSGE models are perfectly capable of answering the questions we ask. That’s quite possible. But if I am right about the flaws in the method, then we shouldn’t need a new model in hand before giving up on the current one. When somebody points out a flaw in the foundation of a building, the proper response is not to keep building until they come up with a better solution. It’s to knock the building down and focus all effort on finding that solution.
I can confidently say that if a better alternative to DSGE exists, I will not be the one to develop it. I am nowhere near smart or creative enough to do that. I don’t think any one person is. What I have been trying to do is convince others that it’s worth devoting a little more of our research effort to exploring other methods and to challenging the fundamental assumptions that have held a near monopoly on macroeconomic research for the last 40 years. The more people who focus on finding a model to beat the current one, the better chance we have of actually finding one. As economists, we should at the very least be open to the idea that competition is good.
Where do we start? I think agent-based simulation models (ABMs) offer one potential path. The key benefit of moving from a purely mathematical model to a simulation is that concerns about tractability become much less pressing. Many of the most troubling assumptions of standard macro models are made only because the model becomes unsolvable without them. With an agent-based model, it is much easier to incorporate features like heterogeneity and diverse behavioral rules and simply let the simulation sort out what happens. Equilibrium in an agent-based model is not an assumption but an emergent result. The downside is that an ABM cannot produce nice closed-form analytical solutions. But in a world as complex as ours, restricting ourselves to questions that happen to admit a closed-form solution is a pretty bad idea: it’s looking for your keys under the streetlight because that’s where the light is.
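To give a flavor of what I mean, here is a toy agent-based market. It is a sketch under assumptions I made up purely for illustration, not a model from the literature: buyers differ in their willingness to pay, sellers grope toward prices by trial and error, and the average price that emerges is an output of the simulation rather than an imposed market-clearing condition.

```python
# A toy agent-based market. Every rule and number here is an illustrative
# assumption, not a model from the literature. Heterogeneous buyers follow a
# simple behavioral rule, sellers adapt prices by trial and error, and the
# average price is an emergent outcome of the simulation rather than an
# imposed equilibrium condition.
import numpy as np

rng = np.random.default_rng(42)

n_buyers, n_sellers, periods = 1000, 100, 300
valuations = rng.uniform(0, 1, n_buyers)   # heterogeneous willingness to pay
prices = rng.uniform(0, 1, n_sellers)      # each seller's posted price
step = 0.01                                # size of a seller's price adjustment

for t in range(periods):
    visits = rng.integers(0, n_sellers, n_buyers)  # each buyer picks a random seller
    sold = np.zeros(n_sellers, dtype=bool)
    for b in range(n_buyers):
        if valuations[b] >= prices[visits[b]]:
            sold[visits[b]] = True         # this seller made at least one sale
    # simple adaptive rule: raise the price after a sale, cut it after a dry spell
    prices = np.clip(prices + np.where(sold, step, -step), 0, 1)
    if (t + 1) % 100 == 0:
        print(f"period {t + 1}: average posted price = {prices.mean():.2f}")
```

Nothing about this toy is realistic, and that isn’t the point. The point is that nothing in it required solving for an equilibrium in advance; whatever regularities show up, show up on their own.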
Maybe even more important than developing specific models to challenge the DSGE benchmark is to try to introduce a little more humility into the modeling process. As I’ve discussed before, there isn’t much of a reason to put any stock in quantitative predictions of models that we know bear little resemblance to reality. Estimating the effects of policy to a decimal point is just not something we are capable of doing right now. Let’s stop pretending we can.
A day may come when the old guard of macroeconomics convinces this starry eyed graduate student to give up his long battle against the evils of DSGE models in macroeconomics. But it is not this day.
Shockingly, it seems like many top economists have not yet discovered my superior critique of macroeconomics (because obviously if they had they would be convinced to stop defending DSGE). Instead, we get statements like:
Macroeconomic policy questions involve trade-offs between competing forces in the economy. The problem is how to assess the strength of those forces for the particular policy question at hand. One strategy is to perform experiments on actual economies. Unfortunately, this strategy is not available to social scientists. The only place that we can do experiments is in dynamic stochastic general equilibrium (DSGE) models
That’s from the recent paper “On DSGE Models” written by three prominent DSGE modelers, Lawrence Christiano, Martin Eichenbaum, and Mathias Trabandt (who I’ll call CET). As you might suspect, I disagree. And I expect their defense will be about as effective at shifting the debate as my critique was (and since my readership can be safely rounded down to 0, that’s not a compliment). Unlike Olivier Blanchard’s recent thoughts on DSGE models, which conceded that many of the criticisms of DSGE models actually contained some truth, CET leave no room for alternatives. It’s DSGE or bust and “people who don’t like dynamic stochastic general equilibrium (DSGE) models are dilettantes.” So we’re off to a good start.
But let’s avoid the ad hominem as much as we can and get to the economics. Beyond the obviously false statement that DSGE models are the only models where we can do experiments, CET don’t offer much we haven’t heard before. Their story is by now pretty standard. They begin by admitting that yes, of course RBC models with their emphasis on technology shocks, complete markets, and policy ineffectiveness were woefully inadequate. But we’re better now! Macroeconomics has come a long way in the last 35 years! They then proceed to provide answers to the common criticisms of DSGE modeling.
Worried DSGE models don’t include a role for finance? Clearly you’ve never heard of Carlstrom and Fuerst (1997) or Bernanke et al. (1999) which include financial accelerator effects. Maybe you’re more concerned about shadow banking? Gertler and Kiyotaki (2015) have you covered. Zero lower bound? Please, Krugman had that one wrapped up all the way back in 1998. Want a role for government spending? Monetary policy? Here’s 20 models that give you the results you want.
Essentially, CET try to take everything that critics of DSGE models say is missing and show that actually many researchers do include these features. This strategy is common in any rebuttal to attacks on DSGE models. Every time somebody points out a flaw in one class of models (representative agent models, rational expectations models, models that use HP-filtered data, complete markets, etc.) they point to another group of models that purports to solve these problems. In doing so they miss the point of these critiques entirely.
The problem with DSGE models is not that they are unable to explain specific economic phenomena. The problem is that they can explain almost any economic phenomenon you can possibly imagine, and we have essentially no way to decide which models are better or worse than others except by comparing them to data that they were explicitly designed to match. It’s true that there were models written before the recession containing features that looked a lot like what we saw in the crisis. We just had no reason to look at those models rather than the hundreds of others with entirely different implications.
Whatever idea you can dream up, you can almost be sure that somebody has written a DSGE model to capture it. Too much of what DSGE models end up being is mathematical justifications for ideas people have already worked out intuitively in their minds (often stripped of much of the nuance that made the idea interesting in the first place). All the DSGE model itself adds is a set of assumptions everybody knows are false that generate those intuitive results. CET do nothing to address this criticism.
Take CET’s defense of representative agent models. They say “It has been known for decades that restrictions like (1)[the standard Euler equation] can be rejected, even in representative agent models that allow for habit formation. So, why would anyone ever use the representative agent assumption? In practice analysts have used that assumption because they think that for many questions they get roughly the right answer.”
Interesting. If they already knew the right answer, what was the model for again? Everybody agrees the assumptions are completely bogus, but it gets the result we wanted so who cares? That’s really the argument they want to make here?
But this is the game we play. Despite CET’s claims to the contrary, I am almost certain that most macroeconomics papers begin with the result. Once they know what they want to prove, it becomes a matter of finagling a model that sounds somewhat like it could be related to how an actual economy works and produces the desired result (and when this task can’t be done, it becomes a “puzzle”). The recession clearly demonstrated the importance of finance on the economy? Simple. Let’s write a DSGE model where finance is important. If in the end the model actually shows that finance is unimportant rewrite it until you get the answer you want.
Almost every DSGE macroeconomics paper follows pretty much the same outline. First, they present some stylized facts from macroeconomic data. Next, they review the current literature and explain why it is unable to fit those facts. Then they introduce their new model with slightly different assumptions that can fit the facts and brag about how well the model (that they designed specifically to fit the facts) actually fits the facts. Finally they do “experiments” using their model to show how different policies could have changed economic outcomes.
In my view, macroeconomics should work in exactly the opposite direction. Don’t bother trying to exactly match macroeconomic aggregates for the United States economy with a model that looks nothing like the United States economy. Have a little more humility. Instead, start by getting the assumptions right. Since we will never be able to capture all of the intricacies of a true economy, the model economy will inevitably look very different from the real one. However, if the assumptions that generate that model economy are realistic, it might still provide answers that are relevant for the real world. A model that gets the facts right but the assumptions wrong probably does not.
I spent 15 posts arguing that the DSGE paradigm gets the assumptions spectacularly wrong. CET provide many examples of people using these flawed assumptions to try to give us answers to many interesting questions. They do not, however, provide any reason for us to believe those answers.
But, then again, I am but a dilettante, so you probably shouldn’t believe me either.
Part 11 in a series of posts on modern macroeconomics. The discussion to this point has focused on the early contributions to DSGE modeling by Kydland and Prescott. Obviously, in the last 30 years, economic research has advanced beyond their work. In this post I will argue that it has advanced in the wrong direction.
In Part 3 of this series I outlined the Real Business Cycle (RBC) model that set macroeconomics on its current path. The rest of the posts in this series have offered a variety of reasons why the RBC framework fails to provide an adequate model of the economy. An easy response to this criticism is that it was never intended to be a final answer to macroeconomic modeling. Even if Kydland and Prescott’s model misses important features of a real economy, doesn’t it still provide a foundation to build upon? Since the 1980s, the primary agenda of macroeconomic research has been to build on this base, to relax some of the more unrealistic assumptions that drove the original RBC model and to add new assumptions that better match economic patterns seen in the data.
New Keynesian Models
Part of the appeal of the RBC model was its simplicity. Driving business cycles was a single shock: exogenous changes in TFP attributed to changes in technology. However, the real world is anything but simple. In many ways, the advances in the methods used by macroeconomists in the 1980s came only at the expense of a rich understanding of the complex interactions of a market economy. Compared to Keynes’s analysis 50 years earlier, the DSGE models of the 1980s seem a bit limited.
What Keynes lacked was clarity. Eighty years after the publication of the General Theory, economists still debate “what Keynes really meant.” For all of their faults, DSGE models at the very least guarantee clear exposition (assuming you understand the math) and internal consistency. New Keynesian (NK) models attempt to combine these advantages with some of the insights of Keynes.
The simplest distinction between an RBC model and an NK model is the assumptions made about the flexibility of prices. In early DSGE models, prices (including wages) were assumed to be fully flexible, constantly adjusting in order to ensure that all markets cleared. The problem (or benefit depending on your point of view) with perfectly flexible prices is that it makes it difficult for policy to have any impact. Combining rational expectations with flexible prices means monetary shocks cannot have any real effects. Prices and wages simply adjust immediately to return to the same allocation of real output.
To reintroduce a role for policy, New Keynesians instead assume that prices are sticky. Now, when a monetary shock hits the economy, firms that cannot adjust their prices respond to the higher demand by increasing real output. As research in the New Keynesian program developed, more and more was added to the simple RBC base. The first RBC model was entirely frictionless, included only one shock (to TFP), and could be summarized in just a few equations. A more recent benchmark, the widely cited Smets and Wouters (2007) model, adds frictions like investment adjustment costs, variable capital utilization, habit formation, and inflation indexation, along with seven structural shocks, leading to dozens of equilibrium equations.
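For orientation, the core of a textbook New Keynesian model boils down to three log-linearized equations (this is the stripped-down version found in standard treatments such as Galí’s, not the full Smets-Wouters system):

$$ x_t = E_t x_{t+1} - \frac{1}{\sigma}\left(i_t - E_t \pi_{t+1} - r_t^n\right) \qquad \text{(IS curve)} $$

$$ \pi_t = \beta\, E_t \pi_{t+1} + \kappa\, x_t \qquad \text{(New Keynesian Phillips curve)} $$

$$ i_t = \phi_\pi \pi_t + \phi_x x_t + v_t \qquad \text{(monetary policy rule)} $$

Here x is the output gap, π is inflation, i is the nominal interest rate, r^n is the natural rate, and v is a monetary shock. The slope κ shrinks as prices become stickier, and it is only because prices do not all adjust at once that the shock v moves real output; everything Smets and Wouters add is layered on top of a core like this.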
Building on a Broken Foundation
New Keynesian economists have their heart in the right place. It’s absolutely true that the RBC model produces results at odds with much of reality. Attempting to move the model closer to the data is certainly an admirable goal. But, as I have argued in the first 10 posts of this series, the RBC model fails not so much because it abstracts from the features we see in real economies, but because the assumptions it makes are not well suited to explain economic interactions in the first place.
New Keynesian models do almost nothing to address the issues I have raised. They still rely on aggregating firms and consumers into representative entities that offer only the illusion of microfoundations rather than truly deriving aggregate quantities from realistic human behavior. They still assume rational expectations, meaning every agent believes that the New Keynesian model holds, and knows that every other agent also believes it holds, when making decisions. And they still make quantitative policy proposals by assuming a planner that knows the entire economic environment.
One potential improvement of NK models over their RBC counterparts is less reliance on (potentially misspecified) technology shocks as the primary driver of business cycle fluctuations. The “improvement,” however, comes only by introducing other equally suspect shocks. A paper by V.V. Chari, Patrick Kehoe, and Ellen McGrattan argues that four of the “structural shocks” in the Smets-Wouters setup do not have a clear interpretation. In other words, although the shocks can help the model match the data, the reason why they are able to do so is not entirely clear. For example, one of the shocks in the model is a wage markup over the marginal productivity of a worker. The authors argue that this could be caused either by increased bargaining power (through unions, etc.), which would call for government intervention to break up unions, or through an increased preference for leisure, which is an efficient outcome and requires no policy. The Smets-Wouters model can say nothing about which interpretation is more accurate and is therefore unhelpful in prescribing policy to improve the economy.
Questionable Microfoundations
I mentioned above that the main goal of the New Keynesian model was to revisit the features Keynes talked about decades ago using modern methods. In today’s macro landscape, microfoundations have become essential. Keynes spoke in terms of broad aggregates, but without understanding how those aggregates are derived from optimizing behavior at the micro level, they are vulnerable to the Lucas critique. Yet although the NK model attempts to supply more realistic microfoundations, many of its assumptions seem at odds with the way individuals actually behave.
Take the assumption of sticky prices. It’s probably a valid assumption to include in an economic model. With no frictions, an optimizing firm would want to change its price every time new economic data became available. Obviously that doesn’t happen, as many goods stay at a constant price for months or years. So of course, if we want to match the data, we are going to need some reason for firms to hold prices constant. The most common way to achieve this result is called “Calvo pricing,” in honor of a paper by Guillermo Calvo. And how do we ensure that some prices in the economy remain sticky? We simply assign each firm some probability that it will be unable to change its price. In other words, we assume exactly the result we want to get.
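To see how mechanical the assumption is, here is a toy simulation of Calvo pricing (the parameter values are illustrative assumptions of mine, not estimates): each period, each firm simply draws whether it is allowed to reset its price.

```python
# Toy Calvo pricing: each period a firm may reset its price only with
# probability 1 - theta. The sluggish response of the average price to a shock
# is assumed directly through theta, not derived from any deeper behavior.
import numpy as np

rng = np.random.default_rng(0)

n_firms = 10_000
theta = 0.75                # probability a firm CANNOT adjust its price this period
p_desired = 1.10            # after a shock, every firm would like to charge this
prices = np.ones(n_firms)   # everyone starts at a price of 1.0

for t in range(12):
    can_adjust = rng.random(n_firms) > theta
    prices[can_adjust] = p_desired
    print(f"period {t + 1:2d}: average price = {prices.mean():.3f}")
```

The aggregate price level crawls toward its new level at a speed governed entirely by theta, which is exactly the stickiness we set out to generate in the first place.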
Of course, economists have recognized that Calvo pricing isn’t really a satisfactory final answer, so they have worked to microfound the idea through an explicit optimization problem. One common approach, following Rotemberg (1982), is to assume that a firm must pay a cost every time it changes its price, a friction usually described as a “menu cost.” This cost ensures that a firm will only change its price when it is especially profitable to do so; small price changes every day no longer make sense. The term comes from the example of a firm needing to print new menus every time it changes one of its prices, but it is meant to stand in for every cost a firm might incur by changing a price. Price changes could annoy consumers and drive some of them away, or require more advanced computer systems, more complex negotiations with suppliers, and new competitive games between firms.
But by wrapping all of these into one “menu cost,” am I really describing a firm’s problem? A “true” microfoundation would explicitly model each of these features at the firm level, with different speeds of price adjustment and different reasons for firms to hold prices steady. What do we gain by adding optimization to our models if that optimization has little to do with the problem a real firm faces? Are we any better off with “fake” microfoundations than we were with statistical aggregates? I can’t help but think that the “microfoundations” in modern macro models are simply fancy ways of finagling optimization problems until we get the result we want. Interesting mathematical exercise? Maybe. Improving our understanding of the economy? Unclear.
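To see just how much gets collapsed, here is a minimal sketch of the pricing rule a menu cost delivers (the cost and loss numbers are illustrative assumptions of mine): the firm compares the loss from its current price gap to a single fixed cost and changes price only when the gap is large enough.

```python
# Toy menu-cost rule: reset the price to the target only when the loss from
# the current price gap outweighs the fixed cost of changing the price.
def should_change_price(current, target, menu_cost, loss_per_unit_gap=2.0):
    gap_loss = loss_per_unit_gap * (target - current) ** 2
    return gap_loss > menu_cost

price = 1.00
for target in [1.01, 1.02, 1.05, 1.10, 1.12]:
    if should_change_price(price, target, menu_cost=0.005):
        price = target
    print(f"desired price {target:.2f} -> posted price {price:.2f}")
```

Consumer annoyance, supplier negotiations, competitive games: the whole list above ends up inside the single number menu_cost, which is precisely the complaint. The optimization is real, but its connection to the problem an actual firm faces is thin.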
Other additions to the NK model that purport to move the model closer to reality are also suspect. As Olivier Blanchard describes in his 2008 essay, “The State of Macro:”
Reconciling the theory with the data has led to a lot of unconvincing reverse engineering. External habit formation—that is a specification of utility where utility depends not on consumption, but on consumption relative to lagged aggregate consumption—has been introduced to explain the slow adjustment of consumption. Convex costs of changing investment, rather than the more standard and more plausible convex costs of investment, have been introduced to explain the rich dynamics of investment. Backward indexation of prices, an assumption which, as far as I know, is simply factually wrong, has been introduced to explain the dynamics of inflation. And, because, once they are introduced, these assumptions can then be blamed on others, they have often become standard, passed on from model to model with little discussion.
In other words, we want to move the model closer to the data, but we do so by offering features that bear little resemblance to actual human behavior. And if the models we write do not actually describe individual behavior at a micro level, can we really still call them “microfounded”?
And They Still Don’t Match the Data
In Part 10 of this series, I discussed the idea that we might not want our models to match the data. That view is not shared by most of the rest of the profession, however, so let’s judge these models on their own terms. Maybe the microfoundations are unrealistic and the shocks misspecified, but, as Friedman argued, who cares about assumptions as long as we get a decent result? New Keynesian models also fail in this regard.
Forecasting in general is very difficult for any economic model, but one would expect that after 40 years of toying with these models, getting closer and closer to an accurate explanation of macroeconomic phenomena, we would be able to make somewhat decent predictions. What good is explaining the past if it can’t help inform our understanding of the future? Here’s what the forecasts of GDP growth look like using a fully featured Smets-Wouters NK model
Forecast horizons are quarterly and we want to focus on the DSGE prediction (light blue) against the data (red). One quarter ahead, the model actually does a decent job (probably because of the persistence of growth over time), but its predictive power is quickly lost as we expand the horizon. The table below shows how poor this fit actually is.
The values, which represent the fraction of the variation in the data accounted for by the model, are tiny (a value of 1 would be a perfect match to the data). Even more concerning, the authors of the paper compare the DSGE forecast to other forecasting methods and find that it cannot improve on a reduced-form VAR forecast and is barely an improvement over a random walk forecast that assumes all fluctuations are completely unpredictable. Forty years of progress in the DSGE research program and the best we can do is add unrealistic frictions and make poor predictions. In the essay linked above, Blanchard concludes that “the state of macro is good.” I can’t imagine what makes him believe that.
Part 10 in a series of posts on modern macroeconomics. This post deals with a common defense of macroeconomic models that “all models are wrong.” In particular, I argue that attempts to match the model to the data contradict this principle.
In my discussion of rational expectations I talked about Milton Friedman’s defense of some of the more dubious assumptions of economic models. His argument was that even though real people probably don’t really think the way the agents in our economic models do, that doesn’t mean the model is a bad explanation of their behavior.
Friedman’s argument ties into a broader point about the realism of an economic model. We don’t necessarily want our model to perfectly match every feature that we see in reality. Modeling the complexity we see in real economies is a titanic task. In order to understand, we need to simplify.
Early DSGE models embraced this sentiment. Kydland and Prescott summarize their preferred procedure for macroeconomic research in a 1991 paper, “The Econometrics of the General Equilibrium Approach to Business Cycles.” They outline five steps. First, a clearly defined research question must be chosen. Given this question, the economist chooses a model economy that is well suited to answering it. Next, the parameters of the model are calibrated based on some criteria (more on this below). Using the model, the researcher can then conduct quantitative experiments. Finally, results are reported and discussed.
Throughout their discussion of this procedure, Kydland and Prescott are careful to emphasize that the true model will always remain out of reach, that “all model economies are abstractions and are by definition false.” And if all models are wrong, there is no reason we would expect any model to be able to match all features of the data. An implication of this fact is that we shouldn’t necessarily choose parameters of the model that give us the closest match between model and data. The alternative, which they offered alongside their introduction to the RBC framework, is calibration.
What is calibration? Unfortunately, a precise definition doesn’t appear to exist, and its use in academic papers has become somewhat slippery (many times it ends up meaning “I chose these parameters because another paper used the same ones”). But reading Kydland and Prescott’s explanation, the essential idea is to find values for your parameters from data that is not directly related to the question at hand. In their words, “searching within some parametric class of economies for the one that best fits some set of aggregate time series makes little sense.” Instead, they attempt to choose parameters based on other data. They offer the example of the elasticity of substitution between labor and capital, or between labor and leisure. Both of these parameters can be obtained by looking at studies of human behavior. By measuring how individuals and firms respond to changes in real wages, we can get estimates for these values before looking at the results of the model.
The spirit of the calibration exercise is, I think, generally correct. I agree that economic models can never hope to capture the intricacies of a real economy and therefore if we don’t see some discrepancy between model and data we should be highly suspicious. Unfortunately, I also don’t see how calibration helps us to solve this problem. Two prominent econometricians, Lars Hansen and James Heckman, take issue with calibration. They argue
It is only under very special circumstances that a micro parameter such as the intertemporal elasticity of substitution or even a marginal propensity to consume out of income can be “plugged into” a representative consumer model to produce an empirically concordant aggregate model…microeconomic technologies can look quite different from their aggregate counterparts. Hansen and Heckman (1996) – The Empirical Foundations of Calibration
Although Kydland and Prescott and other proponents of the calibration approach wish to draw parameters from sources outside the model, it is impossible to entirely divorce the values from the model itself. The estimation of elasticity of substitution or labor’s share of income or any other parameter of the model is going to depend in large part on the lens through which these parameters are viewed and each model provides a different lens. To think that we can take an estimate from one environment and plug it into another would appear to be too strong an assumption.
There is also another, more fundamental problem with calibration. Even if for some reason we believe we have the “correct” parameterization of our model, how can we judge its success? Thomas Sargent, in a 2005 interview, says that “after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models.” Most people, when confronted with the realization that their model doesn’t match reality, would conclude that the model was wrong. But of course we already knew that. Calibration was the solution, but what does it solve? If we don’t have a good way to test our theories, how can we separate the good from the bad? How can we ever trust the quantitative results of a model whose parameters are likely poorly estimated and which offers no criterion by which it can be rejected? Calibration seems only to claim to be an acceptable response to the idea that “all models are wrong” without actually providing a consistent framework for quantification.
But what is the alternative? Many macroeconomic papers now employ a mix of calibrated parameters and estimated parameters. A common strategy is to use econometric methods to search over all possible parameter values in order to find those that are most likely given the data. But in order to put any significance on the meaning of these parameters, we need to take our model as the truth or as close to the truth. If we estimate the parameters to match the data, we implicitly reject the “all models are wrong” philosophy, which I’m not sure is the right way forward.
And it gets worse. Since estimating a highly nonlinear macroeconomic system is almost always infeasible given current computational constraints, most macroeconomic papers instead linearize the model around a steady state. So even if the model is perfectly specified, we need the additional assumption that the economy always stays close enough to this steady state that the linear approximation provides a decent representation of the true model. That assumption often fails. Christopher Carroll shows that a log-linearized version of a simple model gives a misleading picture of consumer behavior when compared to the true nonlinear model, and that even a second-order approximation offers little improvement. This issue is especially pressing in a world where we know a clear nonlinearity is present in the form of the zero lower bound on nominal interest rates, and once again we get “unpleasant properties” when we apply linear methods.
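To make the linearization point concrete, this is roughly the kind of approximation at issue: the textbook CRRA consumption Euler equation and its first-order (log-linear) counterpart around the steady state (a generic example, not Carroll’s specific model).

$$ C_t^{-\gamma} = \beta\, E_t\!\left[(1 + r_{t+1})\, C_{t+1}^{-\gamma}\right] \quad\Longrightarrow\quad \hat{c}_t \approx E_t\, \hat{c}_{t+1} - \frac{1}{\gamma}\, E_t\, \hat{r}_{t+1} $$

Hats denote deviations from the steady state. The right-hand version is trustworthy only while the economy stays close to that steady state; in a deep recession or at the zero lower bound, exactly when we most want answers, the approximation is at its weakest.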
Anyone trying to do quantitative macroeconomics faces a rather unappealing choice. In order to make any statement about the magnitudes of economic shocks or policies, we need a method to discipline the numerical behavior of the model. If our ultimate goal is to explain the real world, it makes sense to want to use data to provide this discipline. And yet we know our model is wrong. We know that the data we see was not generated by anything close to the simple models that enable analytic (or numeric) tractability. We want to take the model seriously, but not too seriously. We want to use the data to discipline our model, but we don’t want to reject a model because it doesn’t fit all features of the data. How do we determine the right balance? I’m not sure there is a good answer.
Part 5 in a series of posts on modern macroeconomics. In this post, I will describe some of the issues with the Hodrick-Prescott (HP) filter, a tool used by many macro models to isolate fluctuations at business cycle frequencies.
A plot of real GDP in the United States since 1947 looks like this
Real GDP in the United States since 1947. Data from FRED
Although some recessions are easily visible (like 2009), the business cycle is not especially easy to see. The long-run growth trend dominates the higher-frequency business cycle fluctuations. The trend is important, but if we want to isolate the business cycle we need some way to remove it. The HP filter is one method for doing that. Unlike the trends in my last post, which simply fit a line to the data, the HP filter allows the trend to change over time (the exact minimization problem it solves is written out just below the next figure). Plotting the HP trend against actual GDP looks like this
Real GDP against HP trend (smoothing parameter = 1600)
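For reference, the trend in the figure solves the standard HP minimization problem (the textbook formulation, with λ = 1600 the conventional value for quarterly data):

$$ \min_{\{\tau_t\}} \; \sum_{t=1}^{T} (y_t - \tau_t)^2 \;+\; \lambda \sum_{t=2}^{T-1} \big[(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\big]^2 $$

The first term keeps the trend τ close to the (log) data y; the second penalizes changes in the trend’s growth rate, with λ setting the tradeoff. As λ goes to zero the “trend” is just the data itself, and as λ goes to infinity it approaches a straight line.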
Subtracting this trend leaves the deviations from it. Usually we also take logs of the data so that the deviations can be interpreted as percentages. Doing that, we end up with
Deviations from HP trend
The last picture is what the RBC model (and most other DSGE models) tries to explain. By eliminating the trend, the HP filter focuses exclusively on short-term fluctuations. That may be an interesting exercise, but it throws away much of what makes business cycles important in the first place. Look at the HP trend in recent years. Although we do see a sharp drop marking the 2008-09 recession, the trend quickly adjusts, so the slow recovery doesn’t show up at all in the HP-filtered data. The Great Recession is actually one of the smaller movements by this measure. But what causes the change in the trend? The model has no answer to this question. The RBC model and the other DSGE models that explain HP-filtered data cannot hope to explain long periods of slow growth because they begin by filtering them away.
Let’s go back one more time to these graphs
Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.
Notice that these graphs show the fit of the model to HP filtered data. Here’s what they look like when the filter is removed
Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.
Not so good. And it gets worse. Remember we have already seen in the last post that TFP itself was highly correlated with each of these variables. If we remove the TFP trend, the picture changes to
Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.
Yeah. So much for that great fit. Roger Farmer describes the deceit of the RBC model brilliantly in a blog post called Real Business Cycle and the High School Olympics. He argues that when early RBC modelers noticed a conflict between their model and the data, they didn’t change the model. They changed the data. They couldn’t clear the Olympic bar of explaining business cycles, so they lowered the bar, explaining only the “wiggles” instead. But, as Farmer contends, the important question is explaining precisely those trends that the HP filter assumes away.
Maybe the most convincing sign that something is wrong with this way of measuring business cycles is that the implied cost of business cycles is tiny. Based on a calculation by Lucas, Ayse Imrohoroglu explains that under the assumptions of the model, an individual would need to be given only $28.96 per year to be fully compensated for the risk of business cycles. In other words, completely eliminating recessions is worth about as much as a couple of decent meals. To anyone who has lived in a real economy this number is obviously ridiculous, but when business cycles are relegated to small deviations from a moving trend, there isn’t much to be gained from eliminating the wiggles.
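The reason the number comes out so small is easy to see in a back-of-the-envelope version of Lucas’s calculation. Under the standard assumptions (CRRA utility with risk aversion γ and roughly lognormal consumption deviations with standard deviation σ_c), the share of consumption a household would give up to eliminate fluctuations is approximately

$$ \lambda \;\approx\; \tfrac{1}{2}\, \gamma\, \sigma_c^{2} $$

Once consumption has been detrended with something like the HP filter, σ_c is on the order of one or two percent, so σ_c² is on the order of 10⁻⁴ and λ is a minuscule fraction of consumption for any reasonable γ. The smallness is baked in the moment we decide that only the filtered wiggles count as “the business cycle.”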
There are more technical reasons to be wary of the HP filter that I don’t want to get into here. A recent paper called “Why You Should Never Use the Hodrick-Prescott Filter” by James Hamilton, a prominent econometrician, goes into many of the problems with the HP filter in detail. He opens by saying “The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process.” If you don’t trust me, at least trust him.
TFP doesn’t measure productivity. HP filtered data doesn’t capture business cycles. So the RBC model is 0/2. It doesn’t get much better from here.
Part 4 in a series of posts on modern macroeconomics. Parts 1, 2, and 3 documented the revolution that transformed Keynesian economics into DSGE economics. This post begins my critique of that revolution, beginning with a discussion of its reliance on total factor productivity (TFP) shocks.
Robert Solow (Wikimedia Commons)
In my last post I mentioned that the real business cycle model implies technology shocks are the primary driver of business cycles. I didn’t, however, describe what these technology shocks actually are. To do that, I need to bring in the work of another Nobel Prize winning economist: Robert Solow.
The Solow Residual and Total Factor Productivity
Previous posts in this series have focused on business cycles, which encompass one half of macroeconomic theory. The other half looks at economic growth over a longer period. In a pair of papers written in 1956 and 1957, Solow revolutionized growth theory. His work attempted to quantitatively decompose the sources of growth. Challenging the beliefs of many economists at the time, he concluded that changes in capital and labor were relatively unimportant. The remainder, which has since been called the “Solow residual,” was attributed to “total factor productivity” (TFP) and interpreted as changes in technology. Concurrent research by Moses Abramovitz confirmed Solow’s findings, giving TFP 90% of the credit for economic growth in the US from 1870 to 1950. In the RBC model, the size of technology shocks is found by subtracting the contributions of labor and capital from total output. What remains is TFP.
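In accounting terms, assuming the Cobb-Douglas production function discussed below with capital share α (a generic statement of the bookkeeping, not Solow’s exact specification), the residual is

$$ \ln A_t \;=\; \ln Y_t \;-\; \alpha \ln K_t \;-\; (1-\alpha)\ln L_t $$

so measured TFP is just output minus a share-weighted combination of the inputs, with α typically set to capital’s share of income (roughly one third). Anything that moves output but is not measured capital or labor lands in A by construction.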
“A Measure of Our Ignorance”
Because TFP is nothing more than a residual, it’s a bit of a stretch to call it technical change. As Abramovitz put it, it really is just “a measure of our ignorance,” picking up whatever real-world effects the assumed production function fails to capture. He sums up the problem in a later paper reflecting on his research
Standard growth accounting is based on the notion that the several proximate sources of growth that it identifies operate independently of one another. The implication of this assumption is that the contributions attributable to each can be added up. And if the contribution of every substantial source other than technological progress has been estimated, whatever of growth is left over – that is, not accounted for by the sum of the measured sources – is the presumptive contribution of technological progress
Moses Abramovitz (1993) – The Search for the Sources of Growth: Areas of Ignorance, Old and New p. 220
In other words, only if we have correctly specified the production function to include all of the factors that determine total output can we define what is left over as technological progress. So what is this comprehensive production function that is supposed to encompass all of these factors? Often, it is simply the Cobb-Douglas form

$$ Y = A\, K^{\alpha} L^{1-\alpha} $$

Y represents total output in the economy. All labor hours, regardless of the skill of the worker or the type of work done, are stuffed into L. Similarly, K covers all types of capital, treating diverse capital goods as a single homogeneous blob. Everything that doesn’t fit into either of these becomes part of A. This simple function is used in a wide range of economic applications. Empirically, it matches some important features of the data, notably the roughly constant shares of income accruing to capital and labor. Unfortunately, it also appears to fit any data that has this feature, as Anwar Shaikh humorously demonstrated by fitting it to fake economic data constructed to spell out the word HUMBUG. Shaikh concluded that the fit of the Cobb-Douglas function is a mathematical trick rather than a proper description of the fundamentals of production. Is it close enough to consider its residual an accurate measure of technical change? I have some doubts.
There are also more fundamental problems with aggregate production functions that will need to wait for a later post.
Does TFP Measure Technological Progress or Something Else?
The technical change interpretation of the Solow residual runs into serious trouble if there are other variables correlated with it that are not directly related to productivity but that also affect output. Robert Hall tests three variables that plausibly fit this description (military spending, oil prices, and the political party of the president) and finds that all three affect the residual, casting doubt on the technology interpretation.
Hall cites five possible reasons for the discrepancy between Solow’s interpretation and reality, but they are somewhat technical. If you are interested, take a look at the paper linked above. The main takeaway should be that the simple idealized production function is a bit (or a lot) too simple. It cuts out too many of the features of reality that are essential to the workings of a real economy.
TFP is Nothing More Than a Noisy Measure of GDP
The criticisms of the production function above are concerning, but subject to debate. We cannot say for sure whether the Cobb-Douglas, constant returns to scale formulation is close enough to reality to be useful. But there is a more powerful reason to doubt the TFP series. In my last post, I put up these graphs that appear to show a close relationship between the RBC model and the data.
Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.
Here’s another graph from the same paper
Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.
This one is not coming from a model at all. It simply plots TFP as measured as the residual described above against output. And yet the fit is still pretty good. In fact, looking at correlations with all of the variables shows that TFP alone “explains” the data about as well as the full model
Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model. Second column shows the correlation from the full RBC model and the third with just the TFP series.
What’s going on here? Basically, the table above shows that the fit of the RBC model only works so well because TFP is already so close to the data. For all its talk of microfoundations and the importance of including the optimizing behavior of agents in a model, the simple RBC framework does little more than attempt to explain changes in GDP using a noisy measure of GDP itself.
Kevin Hoover and Kevin Salyer make this point in a revealing paper where they claim that TFP is nothing more than “colored noise.” To defend this claim, they construct a fake “Solow residual” from a false data series that shares the statistical properties of the true data but whose coefficients come from a random number generator rather than a production function. Constructed this way, the new residual certainly has nothing to do with technology shocks, yet feeding this false residual into the RBC model still provides an excellent fit to the data. Hoover and Salyer conclude
The relationship of actual output and model output cannot indicate that the model has captured a deep economic relationship; for there is no such relationship to capture. Rather, it shows that we are seeing a complicated version of a regression fallacy: output is regressed on a noisy version of itself, so it is no wonder that a significant relationship is found. That the model can establish such a relationship on simulated data demonstrates that it can do so with any data that are similar in the relevant dimensions. That it has done so for actual data hardly seems subject to further doubt.
Hoover and Salyer (1998) – Technology Shocks or Coloured Noise? Why real-business-cycle models cannot explain actual business cycles, p.316
Already the RBC framework appears to be on shaky ground, but I’m just getting started (my plan for this series seems to be constantly expanding – there’s even more wrong with macro than I originally thought). My next post will be a brief discussion of the filtering method used in many DSGE applications. I will follow that with an argument that the theoretical justification for using an aggregate production function (Cobb-Douglas or otherwise) is extremely weak. At some point I will also address rational expectations, the representative agent assumption, and why the newer DSGE models that attempt to fix some of the problems of the RBC model also fail.
Part 3 in a series of posts on modern macroeconomics. Part 1 looked at Keynesian economics and part 2 described the reasons for its death. In this post I will explain dynamic stochastic general equilibrium (DSGE) models, which began with the real business cycle (RBC) model introduced by Kydland and Prescott and have since become the dominant framework of modern macroeconomics.
Ed Prescott and Finn Kydland meet with George W. Bush after receiving the Nobel Prize in 2004 (Wikimedia Commons)
“What I am going to describe for you is a revolution in macroeconomics, a transformation in methodology that has reshaped how we conduct our science.” That’s how Ed Prescott began his Nobel Prize lecture after being awarded the prize in 2004. While he could probably benefit from some of Hayek’s humility, it’s hard to deny the truth in the statement. Lucas and Friedman may have demonstrated the failures of Keynesian models, but it wasn’t until Kydland and Prescott that a viable alternative emerged. Their 1982 paper, “Time to Build and Aggregate Fluctuations,” took the ideas of microfoundations and rational expectations and applied them to a flexible model that allowed for quantitative assessment. In the years that followed, their work formed the foundation for almost all macroeconomic research.
Real Business Cycle
The basic setup of a real business cycle (RBC) model is surprisingly simple. There is one firm that produces one good for consumption by one consumer. Production depends on two inputs, labor and capital, as well as the level of technology. The consumer chooses how much to work, how much to consume, and how much to save based on their preferences, the current wage, and interest rates. Their savings are added to the capital stock, which, combined with their choice of labor, determines how much the firm is able to produce.
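A stripped-down sketch of that setup, written in the generic notation most textbook treatments use (Kydland and Prescott’s actual 1982 model adds time-to-build investment and other details), looks something like this:

$$ \max_{\{C_t,\, L_t,\, K_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u(C_t,\, 1 - L_t) $$

subject to

$$ C_t + K_{t+1} = A_t K_t^{\alpha} L_t^{1-\alpha} + (1-\delta) K_t, \qquad \ln A_{t+1} = \rho \ln A_t + \varepsilon_{t+1} $$

The household weighs consumption against leisure, output is produced from capital and labor, and the only source of uncertainty is the technology process A.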
There is no money, no government, no entrepreneurs. There is no unemployment (only optimal reductions in hours worked), no inflation (because there is no money), and no stock market (the one consumer owns the one firm). There are essentially none of the features that most economists before 1980, as well as non-economists today, would consider critically important for the study of macroeconomics. So how are business cycles generated in an RBC model? Exclusively through shocks to the level of technology (if that seems strange, it’s probably even worse than you expect – stay tuned for part 4). When consumers and firms see changes in the level of technology, their optimal choices change, which then causes total output, the number of hours worked, and the level of consumption and investment to fluctuate as well. Somewhat shockingly, when the parameters are calibrated to match the data, this simple model does a good job capturing many features of measured business cycles. The following graphs (from Uhlig 2003) demonstrate a big reason for the influence of the RBC model.
Comparison of simple RBC model (including population growth and government spending shocks) and the data. Both simulated data (solid line) and true data (dashed) have been H-P filtered. Source: Harald Uhlig (2003): “How well do we understand business cycles and growth? Examining the data with a real business cycle model.”
Looking at those graphs, you might wonder why there is anything left for macroeconomists to do. Business cycles have been solved! However, as I will argue in part 4, the perceived closeness of model and data is largely an illusion. There are, in my opinion, fundamental issues with the RBC framework that render it essentially meaningless in terms of furthering our understanding of real business cycles.
The Birth of Dynamic Stochastic General Equilibrium Models
Although many economists would point to the contribution of the RBC model in explaining business cycles on its own, most would agree that its greater significance came from the research agenda it inspired. Kydland and Prescott’s article was one of the first of what would come to be called Dynamic Stochastic General Equilibrium (DSGE) models. They are dynamic because they study how a system changes over time and stochastic because they introduce random shocks. General equilibrium refers to the fact that the agents in the model are constantly maximizing (consumers maximizing utility and firms maximizing profits) and markets always clear (prices are set such that supply and demand are equal in each market in all time periods).
Due in part to the criticisms I will outline in part 4, DSGE models have evolved from the simple RBC framework to include many of the features that were lost in the transition from Keynes to Lucas and Prescott. Much of the research agenda in the last 30 years has aimed to resurrect Keynes’s main insights in microfounded models using modern mathematical language. As a result, they have come to be known as “New Keynesian” models. Thanks to the flexibility of the DSGE setup, adding additional frictions like sticky prices and wages, government spending, and monetary policy was relatively simple and has enabled DSGE models to become sufficiently close to reality to be used as guides for policymakers. I will argue in future posts that despite this progress, even the most advanced NK models fall short both empirically and theoretically.