Part 5 in a series of posts on modern macroeconomics. In this post, I will describe some of the issues with the Hodrick-Prescott (HP) filter, a tool used in many macro models to isolate fluctuations at business cycle frequencies.
A plot of real GDP in the United States since 1947 looks like this:
Although some recessions are easily visible (like 2009), the business cycle is not especially easy to see. The long-run growth trend dominates the higher-frequency business cycle fluctuations. The trend is important, but if we want to isolate fluctuations at business cycle frequencies, we need some way to remove it. The HP filter is one method of doing that. Unlike the trends in my last post, which simply fit a line to the data, the HP filter allows the trend to change over time. The exact equation is a bit complex, but here’s the link to the Wikipedia page if you’d like to see it. Plotting the HP trend against actual GDP looks like this:
Subtracting this trend isolates the deviations from it. Usually we also take the log of the data first so that deviations can be interpreted as percentages. Doing that, we end up with:
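(If you want to replicate this transformation yourself, here is a minimal sketch in Python using the statsmodels package. The GDP series below is simulated stand-in data, an assumption purely for illustration; in practice you would load an actual series, such as quarterly real GDP from FRED.)

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Stand-in for real GDP: exponential growth plus noise. In practice you
# would load an actual series here instead.
rng = np.random.default_rng(0)
dates = pd.period_range("1947Q1", periods=300, freq="Q")
gdp = pd.Series(2000 * np.exp(0.008 * np.arange(300)
                              + np.cumsum(rng.normal(0, 0.01, 300))),
                index=dates)

# Filter the log of the series; lamb=1600 is the conventional
# smoothing parameter for quarterly data.
cycle, trend = hpfilter(np.log(gdp), lamb=1600)

# Because we took logs first, the cycle is (approximately) the
# percentage deviation of GDP from its HP trend.
pct_deviation = 100 * cycle
print(pct_deviation.describe())
```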
The last picture is what the RBC model (and most other DSGE models) try to explain. By eliminating the trend, the HP filter focuses exclusively on short-term fluctuations. This shift in focus may be an interesting exercise, but it eliminates much of what makes business cycles important and interesting. Look at the HP trend in recent years. Although we do see a sharp drop marking the 2008-09 recession, the trend quickly adjusts, so the slow recovery doesn’t show up in the HP-filtered data at all. The Great Recession is actually one of the smaller movements by this measure. But what causes the change in the trend? The model has no answer to this question. The RBC model and other DSGE models that explain HP-filtered data cannot hope to explain long periods of slow growth because they begin by filtering them away.
Let’s go back one more time to these graphs:
Notice that these graphs show the fit of the model to HP-filtered data. Here’s what they look like when the filter is removed:
Not so good. And it gets worse. Remember that we already saw in the last post that TFP itself is highly correlated with each of these variables. If we remove the TFP trend as well, the picture changes to:
Yeah. So much for that great fit. Roger Farmer describes the deceit of the RBC model brilliantly in a blog post called Real Business Cycle and the High School Olympics. He argues that when early RBC modelers noticed a conflict between their model and the data, they didn’t change the model. They changed the data. They couldn’t clear the Olympic bar of explaining business cycles, so they lowered the bar, explaining only the “wiggles” instead. But, as Farmer contends, the important question lies in explaining the trends that the HP filter assumes away.
Perhaps the most convincing evidence that something is wrong with this method of measuring business cycles is that, by this metric, the measured cost of business cycles is tiny. Based on a calculation by Lucas, Ayse Imrohoroglu explains that under the assumptions of the model, an individual would need to be given only $28.96 per year to be fully compensated for the risk of business cycles. In other words, completely eliminating recessions is worth about as much as a couple of decent meals. To anyone who has lived in a real economy, this number is obviously ridiculous, but when business cycles are relegated to small deviations from a moving trend, there isn’t much to be gained from eliminating the wiggles.
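To see where a number that small can come from, here is a back-of-the-envelope version of Lucas’s calculation. Everything in it (log utility, the volatility of consumption, the level of consumption) is an illustrative assumption rather than Imrohoroglu’s exact inputs, so it lands in the same ballpark rather than on $28.96 exactly.

```python
# Lucas-style welfare cost of business cycles: with constant relative
# risk aversion, the fraction of consumption a household would give up
# to eliminate all consumption fluctuations is roughly 0.5*gamma*sigma^2.
gamma = 1.0           # assumed risk aversion (log utility)
sigma = 0.032         # assumed std. dev. of log consumption around trend
consumption = 50_000  # assumed annual consumption per person, in dollars

cost_share = 0.5 * gamma * sigma**2  # ~0.05% of consumption
print(f"{cost_share:.4%} of consumption = ${cost_share * consumption:.2f}/year")
# -> 0.0512% of consumption = $25.60/year, the same ballpark as $28.96
```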
There are more technical reasons to be wary of the HP filter that I don’t want to get into here. A recent paper called “Why You Should Never Use the Hodrick-Prescott Filter” by James Hamilton, a prominent econometrician, goes into many of the problems with the HP filter in detail. He opens by saying “The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process.” If you don’t trust me, at least trust him.
TFP doesn’t measure productivity. HP-filtered data doesn’t capture business cycles. So the RBC model is 0/2. It doesn’t get much better from here.
Part 4 in a series of posts on modern macroeconomics. Parts 1, 2, and 3 documented the revolution that transformed Keynesian economics into DSGE economics. This post begins my critique of that revolution, beginning with a discussion of its reliance on total factor productivity (TFP) shocks.
In my last post I mentioned that the real business cycle model implies technology shocks are the primary driver of business cycles. I didn’t, however, describe what these technology shocks actually are. To do that, I need to bring in the work of another Nobel Prize-winning economist: Robert Solow.
The Solow Residual and Total Factor Productivity
Previous posts in this series have focused on business cycles, which make up one half of macroeconomic theory. The other half looks at economic growth over longer horizons. In a pair of papers written in 1956 and 1957, Solow revolutionized growth theory. His work attempted to quantitatively decompose the sources of growth. Challenging the beliefs of many economists at the time, he concluded that changes in capital and labor were relatively unimportant. The remainder, which has since been called the “Solow Residual,” was attributed to “total factor productivity” (TFP) and interpreted as changes in technology. Concurrent research by Moses Abramovitz confirmed Solow’s findings, giving TFP 90% of the credit for economic growth in the US from 1870 to 1950. In the RBC model, the size of technology shocks is found the same way: subtract the contributions of labor and capital from total output, and what remains is TFP.
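In practice, that calculation is just a few lines. Here is a minimal sketch of growth accounting in Python; the capital share of 0.36 is a common calibration, and the data are made up for illustration:

```python
import numpy as np

def solow_residual(Y, K, L, alpha=0.36):
    """Growth-accounting residual:
    dlog A = dlog Y - alpha * dlog K - (1 - alpha) * dlog L,
    where alpha is the capital share of income (0.36 is a common
    calibration)."""
    dlogY = np.diff(np.log(Y))
    dlogK = np.diff(np.log(K))
    dlogL = np.diff(np.log(L))
    return dlogY - alpha * dlogK - (1 - alpha) * dlogL

# Toy example: output grows 3%, capital 4%, labor 1% in one period.
Y = np.array([100.0, 103.0])
K = np.array([300.0, 312.0])
L = np.array([150.0, 151.5])
print(solow_residual(Y, K, L))  # ~0.009: "TFP growth" of about 0.9%
```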
“A Measure of Our Ignorance”
Because TFP is nothing more than a residual, it’s a bit of a stretch to call it technical change. As Abramovitz put it, it really is just “a measure of our ignorance,” capturing everything in reality that is left out of the simple production function being assumed. He summed up the problem in a later retrospective on his research:
Standard growth accounting is based on the notion that the several proximate sources of growth that it identifies operate independently of one another. The implication of this assumption is that the contributions attributable to each can be added up. And if the contribution of every substantial source other than technological progress has been estimated, whatever of growth is left over – that is, not accounted for by the sum of the measured sources – is the presumptive contribution of technological progress
Moses Abramovitz (1993) – The Search for the Sources of Growth: Areas of Ignorance, Old and New p. 220
In other words, only if we have correctly specified the production function to include all of the factors that determine total output can we define what is left over as technological progress. So what is this comprehensive production function that is supposed to encompass all of these factors? Often, it is simply

$$Y = A K^{\alpha} L^{1-\alpha}$$
Y represents total output in the economy. All labor hours, regardless of the skill of the worker or the type of work done, are stuffed into L. Similarly, K covers all types of capital, treating diverse capital goods as a single homogeneous blob. Everything that doesn’t fit into either of these becomes part of A. This simple functional form, known as the Cobb-Douglas function, is used in a wide range of economic applications. Empirically, the Cobb-Douglas function matches some important features of the data, notably the roughly constant shares of income accruing to capital and labor. Unfortunately, it also appears to fit any data that has this feature, as Anwar Shaikh humorously pointed out by fitting it to fake economic data constructed to spell out the word HUMBUG. Shaikh concludes that the fit of the Cobb-Douglas function is a mathematical trick rather than a proper description of the fundamentals of production. Is it close enough to consider its residual an accurate measure of technical change? I have some doubts.
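To see why the fit is so automatic, here is a sketch of the algebra behind Shaikh’s point, using random series rather than his HUMBUG data. Once constant income shares are imposed, the accounting identity Y = rK + wL can be rewritten exactly as a Cobb-Douglas function with a “TFP” term built from factor prices, no matter what the data look like:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50

# Completely arbitrary "data" -- no production function behind them.
Y = 100 * np.exp(np.cumsum(rng.normal(0.01, 0.05, n)))
K = 300 * np.exp(np.cumsum(rng.normal(0.01, 0.05, n)))
L = 150 * np.exp(np.cumsum(rng.normal(0.00, 0.05, n)))

alpha = 0.36  # impose constant factor shares, as in US data
r = alpha * Y / K        # profit rate implied by a constant capital share
w = (1 - alpha) * Y / L  # wage implied by a constant labor share

# With constant shares, the income identity Y = r*K + w*L can be
# rewritten exactly as Cobb-Douglas, with "TFP" built from factor prices.
A = (r / alpha) ** alpha * (w / (1 - alpha)) ** (1 - alpha)
Y_cobb_douglas = A * K**alpha * L**(1 - alpha)

print(np.allclose(Y, Y_cobb_douglas))  # True: a perfect "fit" by identity
```

The “fit” is perfect by construction, which is exactly why it tells us nothing about production.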
There are also more fundamental problems with aggregate production functions that will need to wait for a later post.
Does TFP Measure Technological Progress or Something Else?
The technical-change interpretation of the Solow residual runs into serious trouble if there are variables correlated with it that are not directly related to productivity but that also affect output. Robert Hall tests three variables that fit this description (military spending, oil prices, and the political party of the president) and finds that all three affect the residual, casting doubt on the technology interpretation.
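Hall’s actual test is more careful (he uses these variables as instruments and tests whether the residual is invariant to them), but the flavor can be sketched with a simple regression. The series below are random placeholders, not Hall’s data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical inputs: growth of the Solow residual and growth of real
# military spending (placeholders for illustration only).
rng = np.random.default_rng(7)
dlog_tfp = rng.normal(0.005, 0.01, 160)
dlog_military = rng.normal(0.01, 0.05, 160)

# Under the technology interpretation, the slope should be zero:
# demand-side variables shouldn't move true productivity.
X = sm.add_constant(dlog_military)
result = sm.OLS(dlog_tfp, X).fit()
print(result.params, result.pvalues)
# Hall finds significant relationships in actual data, which is the problem.
```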
Hall cites five possible reasons for the discrepancy between Solow’s interpretation and reality, but they are somewhat technical. If you are interested, take a look at the paper linked above. The main takeaway should be that the simple idealized production function is a bit (or a lot) too simple. It cuts out too many of the features of reality that are essential to the workings of a real economy.
TFP is Nothing More Than a Noisy Measure of GDP
The criticisms of the production function above are concerning, but subject to debate. We cannot say for sure whether the Cobb-Douglas, constant-returns-to-scale formulation is close enough to reality to be useful. But there is a more powerful reason to doubt the TFP series. In my last post, I put up these graphs, which appear to show a close relationship between the RBC model and the data.
Here’s another graph from the same paper:
This one is not coming from a model at all. It simply plots TFP, measured as the residual described above, against output. And yet the fit is still pretty good. In fact, looking at the correlations with all of the variables shows that TFP alone “explains” the data about as well as the full model:
What’s going on here? Basically, the table above shows that the fit of the RBC model only works so well because TFP is already so close to the data. For all its talk of microfoundations and the importance of including the optimizing behavior of agents in a model, the simple RBC framework does little more than attempt to explain changes in GDP using a noisy measure of GDP itself.
Kevin Hoover and Kevin Salyer make this point in a revealing paper where they claim that TFP is nothing more than “colored noise.” To defend this claim, they construct a fake “Solow residual”: a false data series that shares the statistical properties of the true data, but whose fluctuations come from a random number generator rather than from a production function. Constructed in this way, the new residual certainly does not have anything to do with technology shocks, but feeding this false residual into the RBC model still provides an excellent fit to the data. Hoover and Salyer conclude:
The relationship of actual output and model output cannot indicate that the model has captured a deep economic relationship; for there is no such relationship to capture. Rather, it shows that we are seeing a complicated version of a regression fallacy: output is regressed on a noisy version of itself, so it is no wonder that a significant relationship is found. That the model can establish such a relationship on simulated data demonstrates that it can do so with any data that are similar in the relevant dimensions. That it has done so for actual data hardly seems subject to further doubt.
Hoover and Salyer (1998) – Technology Shocks or Coloured Noise? Why real-business-cycle models cannot explain actual business cycles, p. 316
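The spirit of their experiment is easy to sketch: replace measured TFP with an artificial series that has similar persistence but is pure noise by construction, then feed it to the model. Here is a minimal version of the first step; the AR(1) persistence of 0.95 and innovation volatility of 0.007 are standard RBC calibrations, assumed here rather than Hoover and Salyer’s exact procedure:

```python
import numpy as np

def colored_noise(rho=0.95, sigma=0.007, n=200, seed=0):
    """Simulate a fake 'Solow residual': an AR(1) in logs whose
    persistence and volatility mimic measured TFP, but whose
    innovations come from a random number generator."""
    rng = np.random.default_rng(seed)
    log_a = np.zeros(n)
    for t in range(1, n):
        log_a[t] = rho * log_a[t - 1] + rng.normal(0, sigma)
    return np.exp(log_a)

fake_tfp = colored_noise()
# Feeding `fake_tfp` into the RBC model in place of measured TFP still
# produces a close fit to output -- Hoover and Salyer's point.
```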
Already the RBC framework appears to be on shaky ground, but I’m just getting started (my plan for this series seems to be constantly expanding – there’s even more wrong with macro than I originally thought). My next post will be a brief discussion of the filtering method used in many DSGE applications. I will follow that with an argument that the theoretical justification for using an aggregate production function (Cobb-Douglas or otherwise) is extremely weak. At some point I will also address rational expectations, the representative agent assumption, and why the newer DSGE models that attempt to fix some of the problems of the RBC model also fail.
Part 3 in a series of posts on modern macroeconomics. Part 1 looked at Keynesian economics and part 2 described the reasons for its death. In this post I will explain dynamic stochastic general equilibrium (DSGE) models, which began with the real business cycle (RBC) model introduced by Kydland and Prescott and have since become the dominant framework of modern macroeconomics.
“What I am going to describe for you is a revolution in macroeconomics, a transformation in methodology that has reshaped how we conduct our science.” That’s how Ed Prescott began his lecture after being awarded the Nobel Prize in 2004. While he could probably benefit from some of Hayek’s humility, it’s hard to deny the truth in the statement. Lucas and Friedman may have demonstrated the failures of Keynesian models, but it wasn’t until Kydland and Prescott that a viable alternative emerged. Their 1982 paper, “Time to Build and Aggregate Fluctuations,” took the ideas of microfoundations and rational expectations and applied them in a flexible model that allowed for quantitative assessment. In the years that followed, their work formed the foundation for almost all macroeconomic research.
Real Business Cycle
The basic setup of a real business cycle (RBC) model is surprisingly simple. There is one firm that produces one good for consumption by one consumer. Production depends on two inputs, labor and capital, as well as the level of technology. The consumer chooses how much to work, how much to consume, and how much to save based on their preferences, the current wage, and interest rates. Those savings are added to the capital stock, which, combined with the labor choice, determines how much the firm is able to produce.
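In equations, a standard textbook formulation of this setup is the planner’s problem (a generic version for concreteness; Kydland and Prescott’s actual model adds features like time-to-build):

$$\max_{\{c_t,\, l_t,\, k_{t+1}\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \, u(c_t,\, 1 - l_t)$$

$$\text{subject to}\quad c_t + k_{t+1} = A_t k_t^{\alpha} l_t^{1-\alpha} + (1-\delta) k_t, \qquad \log A_{t+1} = \rho \log A_t + \varepsilon_{t+1}$$

where $\beta$ is the household’s discount factor, $\delta$ is the depreciation rate of capital, and $\varepsilon_{t+1}$ is a random technology shock. Every fluctuation in the model traces back to that last term.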
There is no money, no government, no entrepreneurs. There is no unemployment (only optimal reductions in hours worked), no inflation (because there is no money), and no stock market (the one consumer owns the one firm). Essentially none of the features that most economists before 1980, as well as non-economists today, would consider critically important for the study of macroeconomics are present. So how are business cycles generated in an RBC model? Exclusively through shocks to the level of technology (if that seems strange, it’s probably even worse than you expect – stay tuned for part 4). When consumers and firms see changes in the level of technology, their optimal choices change, which causes total output, the number of hours worked, and the levels of consumption and investment to fluctuate as well. Somewhat shockingly, when the parameters are calibrated to match the data, this simple model does a good job capturing many features of measured business cycles. The following graphs (from Uhlig 2003) demonstrate a big reason for the influence of the RBC model.
Looking at those graphs, you might wonder why there is anything left for macroeconomists to do. Business cycles have been solved! However, as I will argue in part 4, the perceived closeness of model and data is largely an illusion. There are, in my opinion, fundamental issues with the RBC framework that render it essentially meaningless in terms of furthering our understanding of real business cycles.
The Birth of Dynamic Stochastic General Equilibrium Models
Although many economists would point to the RBC model’s explanation of business cycles as a contribution in its own right, most would agree that its greater significance came from the research agenda it inspired. Kydland and Prescott’s article was one of the first of what would come to be called Dynamic Stochastic General Equilibrium (DSGE) models. They are dynamic because they study how a system changes over time and stochastic because they introduce random shocks. General equilibrium refers to the fact that the agents in the model are constantly maximizing (consumers maximizing utility and firms maximizing profits) and markets always clear (prices are set such that supply and demand are equal in each market in all time periods).
Due in part to the criticisms I will outline in part 4, DSGE models have evolved from the simple RBC framework to include many of the features that were lost in the transition from Keynes to Lucas and Prescott. Much of the research agenda of the last 30 years has aimed to resurrect Keynes’s main insights in microfounded models using modern mathematical language. As a result, these models have come to be known as “New Keynesian” (NK) models. Thanks to the flexibility of the DSGE setup, adding frictions like sticky prices and wages, along with government spending and monetary policy, was relatively simple, and it has brought DSGE models close enough to reality to be used as guides for policymakers. I will argue in future posts that despite this progress, even the most advanced NK models fall short both empirically and theoretically.
Part 1 in a series of posts on modern macroeconomics. This post focuses on Keynesian economics in order to set the stage for my explanation of modern macro, which will begin in part 2.
If you’ve never taken a macroeconomics class, you almost certainly have no idea what macroeconomists do. Even if you have an undergraduate degree in economics, your odds of understanding modern macro probably don’t improve much (they didn’t for me at least. I had no idea what I was getting into when I entered grad school). The gap between what is taught in undergraduate macroeconomics classes and the research that is actually done by professional macroeconomists is perhaps larger than in any other field. Therefore, for those of you who made the excellent choice not to subject yourself to the horrors of a first year graduate macroeconomics sequence, I will attempt to explain in plain English (as much as possible), what modern macro is and why I think it could be better.
But before getting to modern macro itself, it is important to understand what came before. Keep in mind throughout these posts that the pretense of knowledge is quite strong here. For a much better exposition, and one that is still fairly readable for anyone with a basic economics background, Michael De Vroey has written a comprehensive book on the history of macroeconomics. I’m working through it now and it’s very good. I highly recommend it to anyone who is interested in what I say in this series of posts.
Although Keynes was not the first to think about business cycles, unemployment, and other macroeconomic topics, it wouldn’t be too much of an exaggeration to say that macroeconomics as a field didn’t truly appear until Keynes published his General Theory in 1936. I admit I have not read the original book (but it’s on my list). My summary here will therefore be based on my undergraduate macro courses, which I think capture the spirit (but probably not the nuance) of Keynes.
Keynesian economics begins by breaking aggregate spending (GDP) into four pieces: Y = C + I + G + NX. Private spending consists of consumption C (spending by households on goods and services) and investment I (spending by firms on capital). Government spending G on goods and services makes up the rest of domestic spending. Finally, net exports NX (exports minus imports) account for foreign expenditures. In a Keynesian equilibrium, spending is equal to income. Consumption is assumed to be a fraction of total income, which means that any increase in spending (like an increase in government spending) will cause an increase in consumption as well.
An important implication of this setup is that an increase in spending raises total income by more than the initial increase (the multiplier effect). Assume the government decides to build a new road that costs $1 million. This expenditure immediately increases GDP by $1 million, but it also adds $1 million to the income of the people involved in building the road. Let’s say all of these people spend 3/4 of their income and save the rest. Then consumption also increases by $750,000, which becomes other people’s income, adding another $562,500, and the process continues. Some algebra shows that the initial increase of $1 million leads to a total increase in GDP of $4 million. Similar results follow if the initial change comes from investment or from changes in taxes.
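The algebra is a geometric series, easy to verify in a few lines (the $1 million and the 3/4 marginal propensity to consume come from the example above):

```python
mpc = 0.75                    # marginal propensity to consume
initial_spending = 1_000_000  # the new road

# Each round of spending becomes income, 3/4 of which is spent again:
# 1 + 0.75 + 0.75^2 + ... = 1 / (1 - 0.75) = 4
rounds = [initial_spending * mpc**k for k in range(100)]
print(sum(rounds))                   # ~ $4,000,000 after 100 rounds
print(initial_spending / (1 - mpc))  # exactly $4,000,000 (closed form)
```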
The multiplier effect also works in the other direction. If businesses start to feel pessimistic about the future, they might cut back on investment. Their beliefs then become self-fulfilling as the reduction in investment causes a reduction in consumption and aggregate spending. Although the productive resources in the economy have not changed, output falls and some of these resources become underutilized. A recession occurs not because of a change in economic fundamentals, but because people’s perceptions changed for some unknown reason – Keynes’s famous “animal spirits.” Through this mechanism, workers may not be able to find a job even if they would be willing to work at the prevailing wage rate, a phenomenon known as involuntary unemployment. In most theories prior to Keynes, involuntary unemployment was impossible because the wage rate would simply adjust to clear the market.
Keynes’s theory also opened the door for government intervention in the economy. If investment falls and causes unemployment, the government can replace the lost spending by increasing its own expenditure. By increasing spending during recessions and decreasing it during booms, the government can theoretically smooth the business cycle.
The above description is Keynes at its most basic. I haven’t said anything about monetary policy or interest rates yet, but both of these were essential to Keynes’s analysis. Unfortunately, although The General Theory was a monumental achievement for its time and probably the most rigorous analysis of the economy that had been written, it is not exactly the most readable or even coherent theory. To capture Keynes’s ideas in a more tractable framework, J.R. Hicks and other economists developed the IS-LM model.
I don’t want to give a full derivation of the IS-LM model here, but the basic idea is to model the relationship between interest rates and income. The IS (Investment-Savings) curve plots all of the points where the goods market is in equilibrium. Here we assume that investment depends negatively on interest rates (if interest rates are high, firms would rather put their money in a bank than invest in new projects). A higher interest rate then lowers investment and decreases total income through the same multiplier effect outlined above. Therefore we end up with a negative relationship between interest rates and income.
The LM (Liquidity Preference-Money) curve plots all of the points where the money market is in equilibrium. Here we assume that the money supply is fixed. Money demand depends negatively on interest rates (since a higher interest rate means you would rather keep money in the bank than in your wallet) and positively on income (more cash is needed to buy stuff). Together these imply that a higher level of income requires a higher interest rate to clear the money market, so the LM curve slopes upward. An equilibrium in the IS-LM model occurs when both the money market and the goods market are in equilibrium (the point where the two curves cross).
The above probably doesn’t make much sense if you haven’t seen it before. All you really need to know is that an increase in government spending or investment shifts the IS curve right, which increases both income and interest rates. If the central bank increases the money supply, the LM curve shifts right, increasing income and decreasing interest rates. Policymakers then have two powerful options to combat economic downturns.
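For concreteness, here is a minimal linear IS-LM sketch in Python. The functional forms and every parameter value are illustrative assumptions, chosen only to reproduce the comparative statics just described:

```python
import numpy as np

def islm_equilibrium(G, M, c=0.75, b=50.0, k=0.5, h=100.0,
                     autonomous=200.0, P=1.0):
    """Solve a linear IS-LM system (all parameters illustrative):
      IS:  Y = autonomous + c*Y - b*r + G   (goods market clears)
      LM:  M/P = k*Y - h*r                  (money market clears)
    Returns equilibrium income Y and interest rate r."""
    # Rearranged as a 2x2 linear system in (Y, r):
    #   (1 - c)*Y + b*r = autonomous + G
    #        k*Y - h*r = M / P
    A = np.array([[1 - c, b], [k, -h]])
    d = np.array([autonomous + G, M / P])
    Y, r = np.linalg.solve(A, d)
    return Y, r

Y0, r0 = islm_equilibrium(G=100, M=300)  # baseline
Y1, r1 = islm_equilibrium(G=150, M=300)  # fiscal expansion: Y and r rise
Y2, r2 = islm_equilibrium(G=100, M=350)  # monetary expansion: Y up, r down
print((Y0, r0), (Y1, r1), (Y2, r2))
```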
In the decades following Keynes and Hicks, the IS-LM model grew to include hundreds or thousands of equations that economists attempted to estimate econometrically, but the basic features remained in place. However, in the 1970s, the Keynesian model came under attack due to both empirical and theoretical failures. Part 2 will deal with these failures and the attempts to solve them.