What’s Wrong With Modern Macro? Part 5 Filtering Away All Our Problems

Part 5 in a series of posts on modern macroeconomics. In this post, I describe some of the issues with the Hodrick-Prescott (HP) filter, a tool used in many macro models to isolate fluctuations at business cycle frequencies.


A plot of real GDP in the United States since 1947 looks like this

Real GDP in the United States since 1947. Data from FRED

Although some recessions are easily visible (like 2009), the business cycle is not especially easy to see. The long-run trend of economic growth dominates the higher-frequency business cycle fluctuations. The trend is important, but if we want to isolate business cycle fluctuations, we need some way to remove it. The HP filter is one method of doing that. Unlike the trend in my last post, which was simply a line fit to the data, the HP filter allows the trend to change over time. The exact equation is a bit complex, but here’s the link to the Wikipedia page if you’d like to see it. Plotting the HP trend against actual GDP looks like this

Real GDP against HP trend (smoothing parameter = 1600)
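For reference, the trend plotted above is the solution to a penalized least-squares problem (this is the standard formulation, with smoothing parameter λ):

```latex
\min_{\{\tau_t\}} \; \sum_{t=1}^{T} (y_t - \tau_t)^2
  \;+\; \lambda \sum_{t=2}^{T-1} \big[ (\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1}) \big]^2
```

The first term rewards fitting the data; the second penalizes changes in the trend’s growth rate. λ controls the trade-off: λ = 0 makes the trend equal the data itself, while λ → ∞ forces a linear trend. λ = 1600 is the convention for quarterly data.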

By removing this trend, we can isolate deviations from it. Usually we also take the log of the data first, so that deviations can be interpreted as percentages. Doing that, we end up with

Deviations from HP trend
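The whole procedure can be sketched in a few lines of Python. This is a minimal illustration on a simulated log-GDP series rather than the actual FRED data (in practice you could use `statsmodels.tsa.filters.hp_filter.hpfilter`; solving the problem directly shows the mechanics):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Solve the HP program: trend = argmin ||y - tau||^2 + lam * ||D2 @ tau||^2,
    where D2 takes second differences. First-order condition gives the linear
    system (I + lam * D2'D2) tau = y."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Second-difference operator: a (T-2) x T banded matrix.
    D2 = np.zeros((T - 2, T))
    for i in range(T - 2):
        D2[i, i : i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * (D2.T @ D2), y)
    return y - trend, trend  # (cycle, trend)

# Simulate a quarterly log-"GDP" series: steady trend growth plus an AR(1) cycle.
rng = np.random.default_rng(0)
T = 280                                    # quarters, roughly 1947-2016
log_gdp = 9.0 + 0.005 * np.arange(T)       # ~2% annual trend growth, in logs
c = 0.0
for t in range(T):
    c = 0.8 * c + rng.normal(scale=0.01)   # persistent cyclical component
    log_gdp[t] += c

cycle, trend = hp_filter(log_gdp, lam=1600.0)
# Because the data are in logs, the cycle reads directly as percent deviations.
print(f"std. dev. of percent deviations from trend: {100 * cycle.std():.2f}%")
```

The returned `cycle` series is the analogue of the "Deviations from HP trend" plot above.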

The last picture is what the RBC model (and most other DSGE models) tries to explain. By eliminating the trend, the HP filter focuses exclusively on short-term fluctuations. This shift in focus may be an interesting exercise, but it eliminates much of what makes business cycles important and interesting. Look at the HP trend in recent years. Although we do see a sharp drop marking the 2008–09 recession, the trend quickly adjusts, so the slow recovery doesn’t show up at all in the HP-filtered data. The Great Recession is actually one of the smaller movements by this measure. But what causes the change in the trend? The model has no answer to this question. The RBC model and other DSGE models that explain HP-filtered data cannot hope to explain long periods of slow growth because they begin by filtering them away.

Let’s go back one more time to these graphs

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Notice that these graphs show the fit of the model to HP filtered data. Here’s what they look like when the filter is removed

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Not so good. And it gets worse. Remember, we saw in the last post that TFP itself was highly correlated with each of these variables. If we remove the TFP trend as well, the picture changes to

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Yeah. So much for that great fit. Roger Farmer describes the deceit of the RBC model brilliantly in a blog post called Real Business Cycle and the High School Olympics. He argues that when early RBC modelers noticed a conflict between their model and the data, they didn’t change the model. They changed the data. They couldn’t clear the Olympic bar of explaining business cycles, so they lowered the bar, explaining only the “wiggles” instead. But, as Farmer contends, the important question lies in explaining precisely those trends that the HP filter assumes away.

Maybe the most convincing evidence that something is wrong with this way of measuring business cycles is that the implied cost of business cycles is tiny. Based on a calculation by Lucas, Ayse Imrohoroglu explains that under the assumptions of the model, an individual would need to be given just $28.96 per year to be fully compensated for the risk of business cycles. In other words, completely eliminating recessions is worth about as much as a couple of decent meals. To anyone who has lived in a real economy this number is obviously ridiculous, but when business cycles are relegated to small deviations from a moving trend, there isn’t much to be gained from eliminating the wiggles.
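The back-of-the-envelope logic behind numbers like this is simple: with CRRA utility, the fraction of lifetime consumption an agent would give up to eliminate all consumption fluctuations is approximately ½γσ², where γ is risk aversion and σ is the standard deviation of detrended log consumption. A sketch with illustrative parameter values (γ, σ, and the consumption level here are assumptions for illustration, not the exact inputs behind the $28.96 figure):

```python
# Lucas-style welfare cost of business cycles: cost ~ 0.5 * gamma * sigma**2,
# the share of consumption an agent would pay to remove all fluctuations.
# All parameter values below are illustrative, not Lucas's or Imrohoroglu's.
gamma = 1.0     # coefficient of relative risk aversion (log utility)
sigma = 0.032   # std. dev. of deviations of log consumption from trend

cost_fraction = 0.5 * gamma * sigma ** 2
annual_consumption = 40_000  # hypothetical per-capita consumption, dollars/year

print(f"cost as share of consumption: {100 * cost_fraction:.3f}%")
print(f"dollar cost per year: ${cost_fraction * annual_consumption:.2f}")
```

Because σ is small once the trend is filtered out, σ² is minuscule, and no plausible γ rescues the number. The tiny cost is baked in by the detrending, not discovered in the data.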

There are more technical reasons to be wary of the HP filter that I don’t want to get into here. A recent paper by James Hamilton, a prominent econometrician, called “Why You Should Never Use the Hodrick-Prescott Filter,” goes into many of the problems in detail. He opens by saying, “The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process.” If you don’t trust me, at least trust him.
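For the curious, Hamilton’s proposed alternative is a simple regression: project the value of the series h periods ahead (he suggests h = 8 quarters for business-cycle analysis) on a constant and the four most recent observations, and treat the residual as the cyclical component. A minimal sketch on simulated data (the function name and simulated series are mine):

```python
import numpy as np

def hamilton_filter(y, h=8, p=4):
    """Hamilton (2018) regression filter: regress y[t+h] on a constant and
    y[t], y[t-1], ..., y[t-p+1]; the OLS residual is the cyclical component."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Regressors for t = p-1, ..., T-1-h: constant plus the p latest values.
    X = np.column_stack(
        [np.ones(T - h - p + 1)]
        + [y[p - 1 - j : T - h - j] for j in range(p)]
    )
    target = y[p - 1 + h :]               # y[t+h] for the same dates t
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ beta              # residual = cyclical component

# Illustrative: apply to a simulated trending series with a unit-root component.
rng = np.random.default_rng(1)
y = 0.005 * np.arange(200) + np.cumsum(rng.normal(scale=0.01, size=200))
cycle = hamilton_filter(y)
print(f"cycle has {len(cycle)} observations, mean {cycle.mean():.6f}")
```

Because the residual comes from a regression with an intercept, the extracted cycle is mean zero by construction, and Hamilton shows it avoids the spurious dynamics the HP filter introduces.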

TFP doesn’t measure productivity. HP filtered data doesn’t capture business cycles. So the RBC model is 0/2. It doesn’t get much better from here.

4 thoughts on “What’s Wrong With Modern Macro? Part 5 Filtering Away All Our Problems”

  1. “Look at the HP trend in recent years. Although we do see a sharp drop marking the 08-09 recession, the trend quickly adjusts so that we don’t see the slow recovery at all in the HP filtered data.”

    Try using a larger lambda. Won’t help the RBC model but you can leave a lot more of the variations you want to explain intact with a more aggressive filter.

    1. I agree but I think this just points to another of the flaws with the HP filter. The choice of lambda doesn’t really have much theoretical basis and appears to be essentially chosen to rig the data in favor of the model (I could be wrong here, but that’s the impression I get). I chose 1600 because that’s the value most commonly used for quarterly data and it’s what Uhlig uses for his graphs.

      1. When I use the filter, I eyeball whether a lambda within an order of magnitude leaves the residual with roughly the same frequency of variation as the unemployment rate. Granted, I don’t use the data to optimize models but to illustrate variations in variables related to the business cycle. 1600 was recommended by Hodrick and Prescott for quarterly data and it looks like it stuck around as a convention. When I tried the recommended value for monthly data on the Establishment Survey employment series, it basically didn’t separate out the business cycle at all; the lambda was just visually, obviously too small. I’ve since come to the conclusion that the originally recommended values are nonsense, and when making use of the statistical tool you have to exercise judgment and common sense, even if these are not “scientifically rigorous.”
