What’s Wrong With Modern Macro? Part 5 Filtering Away All Our Problems

Part 5 in a series of posts on modern macroeconomics. In this post, I will describe some of the issues with the Hodrick-Prescott (HP) filter, a tool used by many macro models to isolate fluctuations at business cycle frequencies.


A plot of real GDP in the United States since 1947 looks like this

Real GDP in the United States since 1947. Data from FRED

Although some recessions are easily visible (like 2009), the business cycle itself is not especially easy to see. The long-run growth trend dominates the higher-frequency business cycle fluctuations. The trend is important, but if we want to isolate fluctuations at business cycle frequencies, we need some way to remove it. The HP filter is one method of doing that. Unlike the trends in my last post, which simply fit a line to the data, the HP filter allows the trend to change over time. The exact equation is a bit complex, but here’s the link to the Wikipedia page if you’d like to see it. Plotting the HP trend against actual GDP looks like this

Real GDP against HP trend (smoothing parameter = 1600)

Removing this trend isolates the deviations from it. Usually we also take the log of the data first so that those deviations can be interpreted as percentages. Doing that, we end up with

Deviations from HP trend
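For anyone who wants to reproduce this series, here is a minimal sketch. It assumes the pandas_datareader and statsmodels packages and the FRED series GDPC1 for real GDP; the smoothing parameter of 1600 is the standard choice for quarterly data.

```python
# A minimal sketch: log real GDP, its HP trend, and percent deviations from trend.
# Assumes the pandas_datareader and statsmodels packages and FRED series GDPC1.
import numpy as np
from pandas_datareader import data as pdr
from statsmodels.tsa.filters.hp_filter import hpfilter

gdp = pdr.DataReader("GDPC1", "fred", start="1947-01-01")["GDPC1"]
log_gdp = np.log(gdp)

# Smoothing parameter 1600 is the standard choice for quarterly data.
cycle, trend = hpfilter(log_gdp, lamb=1600)

# Because the data are in logs, the cycle is approximately the
# percent deviation of GDP from its HP trend.
deviations_pct = 100 * cycle
print(deviations_pct.tail())
```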

The last picture is what the RBC model (and most other DSGE models) try to explain. By eliminating the trend, the HP filter focuses exclusively on short-term fluctuations. This shift in focus may be an interesting exercise, but it eliminates much of what makes business cycles important and interesting. Look at the HP trend in recent years. Although we do see a sharp drop marking the 2008-09 recession, the trend quickly adjusts, so the slow recovery doesn’t show up at all in the HP filtered data. By this measure, the Great Recession is actually one of the smaller movements. But what causes the change in the trend? The model has no answer to this question. The RBC model and other DSGE models that explain HP filtered data cannot hope to explain long periods of slow growth because they begin by filtering them away.

Let’s go back one more time to these graphs

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Notice that these graphs show the fit of the model to HP filtered data. Here’s what they look like when the filter is removed

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Not so good. And it gets worse. Remember from the last post that TFP itself was highly correlated with each of these variables. If we remove the TFP trend, the picture changes to

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Yeah. So much for that great fit. Roger Farmer describes the deceit of the RBC model brilliantly in a blog post called Real Business Cycle and the High School Olympics. He argues that when early RBC modelers noticed a conflict between their model and the data, they didn’t change the model. They changed the data. Unable to clear the Olympic bar of explaining business cycles, they lowered the bar and explained only the “wiggles.” But, as Farmer contends, the important task is explaining exactly those trends that the HP filter assumes away.

Maybe the most convincing sign that something is wrong with this way of measuring business cycles is that the implied cost of business cycles is tiny. Based on a calculation by Lucas, Ayse Imrohoroglu explains that under the assumptions of the model, an individual would need to be given only $28.96 per year to be fully compensated for the risk of business cycles. In other words, completely eliminating recessions is worth about as much as a couple of decent meals. To anyone who has lived in a real economy this number is obviously ridiculous, but when business cycles are relegated to small deviations from a moving trend, there isn’t much to be gained from eliminating the wiggles.
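Where does such a small number come from? Lucas’s back-of-the-envelope calculation (of which Imrohoroglu’s is a refinement) asks how much certain consumption a household with constant relative risk aversion \gamma would give up to eliminate fluctuations of consumption around its trend. The answer is approximately

\lambda \approx \tfrac{1}{2}\gamma\sigma_c^2

where \sigma_c is the standard deviation of detrended log consumption. With \sigma_c on the order of a couple of percent and \gamma near one, \lambda works out to a few hundredths of a percent of annual consumption, which is how you end up with a figure measured in tens of dollars. (The particular numbers here are illustrative; the exact value depends on the data and preferences used.)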

There are more technical reasons to be wary of the HP filter that I don’t want to get into here. A recent paper called “Why You Should Never Use the Hodrick-Prescott Filter” by James Hamilton, a prominent econometrician, goes into many of the problems with the HP filter in detail. He opens by saying “The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process.” If you don’t trust me, at least trust him.

TFP doesn’t measure productivity. HP filtered data doesn’t capture business cycles. So the RBC model is 0/2. It doesn’t get much better from here.

Was the Great Recession a Return to Trend?

As I was putting together some graphs for my post on HP filtering (coming soon), I started playing around with some real GDP data. First, as many have done before, I fit a simple linear trend to the log of real GDP data since 1947

Log Real GDP (blue) and fitted trendline (orange). Trend annualized growth rate = 3.25%. Data from FRED

Plotted this way, the Great Recession is clearly visible and appears to have caused a permanent reduction in real GDP, since the economy does not seem to be returning to its previous trend. Taking the log of GDP is nice because it allows us to interpret changes as percentage increases. The linear trend therefore implies a constant percentage growth rate for the economy. But is there any reason we should expect the economy to grow at a constant rate over time? What if output is not growing exponentially at all, but instead at a slowly decreasing rate? Here’s what happens when we take the square root of real GDP instead of the natural log

Square Root Real GDP (blue) and fitted trendline (orange). Data from FRED

I have no theoretical justification for taking the square root of RGDP data, but it fits a linear trend remarkably well (slightly better than the log data). And here the Great Recession is gone. Instead, we see a ten-year period above trend from 1997-2007 followed by a sharp correction. If economic growth follows a quadratic pattern rather than an exponential one, the growth rate is decreasing over time rather than remaining constant. Here’s an illustration of what growth rates would look like in each scenario

Trend growth rates under the assumption of exponential growth (orange) and quadratic growth (blue)
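To see why the quadratic trend implies a falling growth rate, compare the two assumed trend forms directly:

y_t = Ae^{gt} \;\Rightarrow\; \frac{\dot{y}_t}{y_t} = g \qquad\qquad \sqrt{y_t} = a + bt \;\Rightarrow\; \frac{\dot{y}_t}{y_t} = \frac{2b}{a+bt}

An exponential trend grows at a constant rate g, while a quadratic trend grows at a rate that declines toward zero as t increases, which is exactly the pattern in the blue line above.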

Fitting one trend line doesn’t necessarily mean anything, so to better test which growth pattern is more plausible, I fit each trend using only data through 1986 and then used those trends to predict current RGDP. Actual annualized RGDP in 2016 Q2 was $16.525 trillion. Assuming exponential growth and fitting the data through 1986 predicts a much higher value of $23.353 trillion. A quadratic trend understates actual growth but comes much closer, with a prediction of $14.670 trillion. I then did the same experiment using data through 1996 and through 2006 and plotted the results

Actual GDP (light blue) vs exponential trend through 1986 (gray), 1996 (yellow), 2006 (dark blue) and present (orange)
Actual GDP (light blue) vs quadratic trend through 1986 (gray), 1996 (yellow), 2006 (dark blue) and present (orange)

From these graphs, it appears that assuming quadratic growth, which implies a decreasing growth rate over time, would have done much better in predicting future GDP. Notice especially how close a GDP prediction would have been by simply extrapolating the quadratic trend forward by 10 years in 2006. Had the economy stayed on its 2006 trend through the present, RGDP in the last quarter would have been $16.583 trillion. It was actually $16.525 trillion for a difference of about $58 billion (off by 3 tenths of a percent). That is an astonishingly good prediction for a ten year ahead forecast (Just for fun, extending the present trend to 2026 Q2 predicts $20.200 trillion. Place your bets now).
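For anyone who wants to replicate or extend this exercise, here is a minimal sketch. It fits a linear trend to log(GDP) (exponential growth) and to sqrt(GDP) (quadratic growth) using only data through a cutoff date and then extrapolates to 2016 Q2. It assumes the FRED series GDPC1 (in billions), and the exact numbers will depend on the data vintage.

```python
# A minimal sketch of the trend-extrapolation experiment: fit a linear trend
# to log(GDP) (exponential growth) and to sqrt(GDP) (quadratic growth) using
# data only through a cutoff date, then extrapolate to a target quarter.
# Assumes pandas_datareader and the quarterly FRED series GDPC1 (billions).
import numpy as np
from pandas_datareader import data as pdr

gdp = pdr.DataReader("GDPC1", "fred", start="1947-01-01")["GDPC1"]
t = np.arange(len(gdp))  # quarters since 1947 Q1

def extrapolate(transform, inverse, cutoff, target):
    """Fit a linear trend to transform(GDP) through `cutoff`, predict at `target`."""
    mask = gdp.index <= cutoff
    slope, intercept = np.polyfit(t[mask], transform(gdp[mask]), 1)
    i_target = gdp.index.get_loc(target)
    return inverse(intercept + slope * t[i_target])

for cutoff in ["1986-12-31", "1996-12-31", "2006-12-31"]:
    exp_pred = extrapolate(np.log, np.exp, cutoff, "2016-04-01")
    quad_pred = extrapolate(np.sqrt, np.square, cutoff, "2016-04-01")
    print(f"through {cutoff[:4]}: exponential {exp_pred:,.0f}, quadratic {quad_pred:,.0f}")

print(f"actual 2016 Q2: {gdp.loc['2016-04-01']:,.0f}")
```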

Of course, this analysis doesn’t prove anything. It could just be a coincidence that the quadratic growth theory happens to fit better than the exponential. But I think it does show that we need to be careful in interpreting the Great Recession.

Using the above pictures, I can come up with two plausible stories. In the first, the Great Recession represents a significant deviation from trend. There was no fundamental reason why we needed to leave the path we were on in 2006, and we should do everything we can to try to get back to that path. Whether that comes through demand-side monetary or fiscal stimulus or supply-side reforms like cutting regulation and government intervention, I will leave aside. But we should do something.

The other story is a bit more pessimistic. In this version, economic growth had been trending downwards for decades. We made a valiant effort to prop up growth on the strength of the internet in the late 1990s and housing in the early 2000s. But these gains were illusory, generated by unsustainable bubbles waiting to be popped rather than true economic progress. The recession, far from an unnecessary catastrophe, was an essential correction required for the economy to reorganize resources and adapt to the low growth reality. Enacting policies that attempt to restore previous growth rates will only fuel new bubbles to replace the old ones, paving the way for future recessions. Doing something will only make things worse.

Which story is more plausible? I wish I knew.

What’s Wrong With Modern Macro? Part 4 How Did a "Measure of our Ignorance" Become the Cause of Business Cycles?

Part 4 in a series of posts on modern macroeconomics. Parts 1, 2, and 3 documented the revolution that transformed Keynesian economics into DSGE economics. This post begins my critique of that revolution, beginning with a discussion of its reliance on total factor productivity (TFP) shocks.

Robert Solow (Wikimedia Commons)

In my last post I mentioned that the real business cycle model implies technology shocks are the primary driver of business cycles. I didn’t, however, describe what these technology shocks actually are. To do that, I need to bring in the work of another Nobel Prize winning economist: Robert Solow.

The Solow Residual and Total Factor Productivity

Previous posts in this series have focused on business cycles, which encompass one half of macroeconomic theory. The other half looks at economic growth over a longer period. In a pair of papers written in 1956 and 1957, Solow revolutionized growth theory. His work attempted to quantitatively decompose the sources of growth. Challenging the beliefs of many economists at the time, he concluded that changes in capital and labor were relatively unimportant. The remainder, which has since been called the “Solow Residual” was attributed to “total factor productivity” (TFP) and interpreted as changes in technology. Concurrent research by Moses Abramovitz confirmed Solow’s findings, giving TFP 90% of the credit for economic growth in the US from 1870 to 1950. In the RBC model, the size of technology shocks is found by subtracting the contributions of labor and capital from total output. What remains is TFP.
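In practice the residual is computed from a production function like the Cobb-Douglas form introduced below. Taking logs, measured TFP is simply whatever output is left over after subtracting the share-weighted contributions of capital and labor:

\ln A_t = \ln Y_t - \alpha \ln K_t - (1-\alpha)\ln L_t

with \alpha typically calibrated to capital’s share of income, roughly one third.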

“A Measure of Our Ignorance”

Because TFP is nothing more than a residual, it’s a bit of a stretch to call it technical change. As Abramovitz put it, it really is just “a measure of our ignorance,” absorbing whatever the assumed production function leaves out. He sums up the problem in a later paper:

Standard growth accounting is based on the notion that the several proximate sources of growth that it identifies operate independently of one another. The implication of this assumption is that the contributions attributable to each can be added up. And if the contribution of every substantial source other than technological progress has been estimated, whatever of growth is left over – that is, not accounted for by the sum of the measured sources – is the presumptive contribution of technological progress
Moses Abramovitz (1993) – The Search for the Sources of Growth: Areas of Ignorance, Old and New p. 220

In other words, only if we have correctly specified the production function to include all of the factors that determine total output can we define what is left as technological progress. So what is this comprehensive production function that is supposed to encompass all of these factors? Often, it is simply
Y=AK^\alpha L^{1-\alpha}

Y represents total output in the economy. All labor hours, regardless of the skill of the worker or the type of work done, are stuffed into L. Similarly, K covers all types of capital, treating diverse capital goods as a single homogeneous blob. Everything that doesn’t fit into either of these becomes part of A. This simple function, known as the Cobb-Douglas production function, is used in a wide range of economic applications. Empirically, it matches some important features of the data, notably the roughly constant shares of income accruing to capital and labor. Unfortunately, it also appears to fit any data that has this feature, as Anwar Shaikh humorously pointed out by fitting it to fake economic data constructed to spell out the word HUMBUG. Shaikh concluded that the fit of the Cobb-Douglas function is a mathematical trick rather than a proper description of the fundamentals of production. Is it close enough to consider its residual an accurate measure of technical change? I have some doubts.
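The constant-shares property follows directly from the functional form. If firms are competitive and pay each factor its marginal product, then

w = \frac{\partial Y}{\partial L} = (1-\alpha)\frac{Y}{L} \quad\Rightarrow\quad \frac{wL}{Y} = 1-\alpha

so labor’s share of income is pinned at 1-\alpha (and capital’s at \alpha) regardless of what the data look like, which is why matching constant shares is such a weak test of the functional form.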

There are also more fundamental problems with aggregate production functions that will need to wait for a later post.

Does TFP Measure Technological Progress or Something Else?

The technical-change interpretation of the Solow residual runs into serious trouble if there are other variables that are not directly related to productivity but that are correlated with the residual and affect output. Robert Hall tests three variables that plausibly fit this criterion (military spending, oil prices, and the political party of the president) and finds that all three affect the residual, casting doubt on the technology interpretation.

Hall cites five possible reasons for the discrepancy between Solow’s interpretation and reality, but they are somewhat technical. If you are interested, take a look at the paper linked above. The main takeaway should be that the simple idealized production function is a bit (or a lot) too simple. It cuts out too many of the features of reality that are essential to the workings of a real economy.

TFP is Nothing More Than a Noisy Measure of GDP

The criticisms of the production function above are concerning, but subject to debate. We cannot say for sure whether the Cobb-Douglas, constant returns to scale formulation is close enough to reality to be useful. But there is a more powerful reason to doubt the TFP series. In my last post, I put up these graphs that appear to show a close relationship between the RBC model and the data.

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

Here’s another graph from the same paper

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model.

This one is not coming from a model at all. It simply plots TFP, measured as the residual described above, against output. And yet the fit is still pretty good. In fact, looking at correlations with all of the variables shows that TFP alone “explains” the data about as well as the full model

Source: Harald Uhlig (2003): How well do we understand business cycles and growth? Examining the data with a real business cycle model. Second column shows the correlation from the full RBC model and the third with just the TFP series.

What’s going on here? Basically, the table above shows that the fit of the RBC model only works so well because TFP is already so close to the data. For all its talk of microfoundations and the importance of including the optimizing behavior of agents in a model, the simple RBC framework does little more than attempt to explain changes in GDP using a noisy measure of GDP itself.

Kevin Hoover and Kevin Salyer make this point in a revealing paper where they claim that TFP is nothing more than “colored noise.” To defend this claim, they construct a fake “Solow residual”: a false data series that shares the statistical properties of the true data but is generated randomly rather than derived from a production function. Constructed in this way, the new residual certainly has nothing to do with technology shocks, yet feeding it into the RBC model still provides an excellent fit to the data. Hoover and Salyer conclude

The relationship of actual output and model output cannot indicate that the model has captured a deep economic relationship; for there is no such relationship to capture. Rather, it shows that we are seeing a complicated version of a regression fallacy: output is regressed on a noisy version of itself, so it is no wonder that a significant relationship is found. That the model can establish such a relationship on simulated data demonstrates that it can do so with any data that are similar in the relevant dimensions. That it has done so for actual data hardly seems subject to further doubt.
Hoover and Salyer (1998) – Technology Shocks or Coloured Noise? Why real-business-cycle models cannot explain actual business cycles, p.316
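To get a feel for what “colored noise” means here, the sketch below (not Hoover and Salyer’s exact procedure, just an illustration) generates a random AR(1) series with persistence and volatility similar to the values typically used to calibrate technology shocks. A series like this has no economic content whatsoever, yet it looks statistically very much like measured TFP.

```python
# Illustrative only: an AR(1) "fake residual" with persistence and volatility
# in the range typically used to calibrate technology shocks (rho ~ 0.95,
# innovation std ~ 0.007). These parameter values are assumptions, not
# Hoover and Salyer's exact specification.
import numpy as np

rng = np.random.default_rng(0)
rho, sigma, T = 0.95, 0.007, 280  # roughly 70 years of quarterly data

fake_residual = np.zeros(T)
for t in range(1, T):
    fake_residual[t] = rho * fake_residual[t - 1] + sigma * rng.normal()

# The series is pure noise, but it is highly persistent ("colored"),
# which is all the RBC model needs to generate realistic-looking cycles.
print("autocorrelation:", np.corrcoef(fake_residual[1:], fake_residual[:-1])[0, 1])
print("std deviation:  ", fake_residual.std())
```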

Already the RBC framework appears to be on shaky ground, but I’m just getting started (my plan for this series seems to be constantly expanding – there’s even more wrong with macro than I originally thought). My next post will be a brief discussion of the filtering method used in many DSGE applications. I will follow that with an argument that the theoretical justification for using an aggregate production function (Cobb-Douglas or otherwise) is extremely weak. At some point I will also address rational expectations, the representative agent assumption, and why the newer DSGE models that attempt to fix some of the problems of the RBC model also fail.

 

What’s Wrong with Modern Macro? Part 3 Real Business Cycle and the Birth of DSGE Models

Part 3 in a series of posts on modern macroeconomics. Part 1 looked at Keynesian economics and part 2 described the reasons for its death. In this post I will explain dynamic stochastic general equilibrium (DSGE) models, which began with the real business cycle (RBC) model introduced by Kydland and Prescott and have since become the dominant framework of modern macroeconomics.


Ed Prescott and Finn Kydland meet with George W. Bush after receiving the Nobel Prize in 2004 (Wikimedia Commons)

“What I am going to describe for you is a revolution in macroeconomics, a transformation in methodology that has reshaped how we conduct our science.” That’s how Ed Prescott began his Nobel Prize lecture after being awarded the prize in 2004. While he could probably benefit from some of Hayek’s humility, it’s hard to deny the truth in the statement. Lucas and Friedman may have demonstrated the failures of Keynesian models, but it wasn’t until Kydland and Prescott that a viable alternative emerged. Their 1982 paper, “Time to Build and Aggregate Fluctuations,” took the ideas of microfoundations and rational expectations and applied them to a flexible model that allowed for quantitative assessment. In the years that followed, their work formed the foundation for almost all macroeconomic research.

Real Business Cycle

The basic setup of a real business cycle (RBC) model is surprisingly simple. There is one firm that produces one good for consumption by one consumer. Production depends on two inputs, labor and capital, as well as the level of technology. The consumer chooses how much to work, how much to consume, and how much to save based on their preferences, the current wage, and interest rates. Their savings are added to the capital stock, which, combined with their choice of labor, determines how much the firm is able to produce.
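In its most common textbook form (a representative specification, not necessarily the exact model in Kydland and Prescott’s paper), the whole economy boils down to a single household solving

\max_{\{c_t, l_t, k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t \, u(c_t, 1-l_t)

subject to the resource constraint and a stochastic process for technology,

c_t + k_{t+1} = A_t k_t^{\alpha} l_t^{1-\alpha} + (1-\delta)k_t \qquad\qquad \ln A_{t+1} = \rho \ln A_t + \varepsilon_{t+1}

where c is consumption, l hours worked, k capital, and A the level of technology. The shocks \varepsilon are the only source of fluctuations.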

There is no money, no government, no entrepreneurs. There is no unemployment (only optimal reductions in hours worked), no inflation (because there is no money), and no stock market (the one consumer owns the one firm). Essentially none of the features that most economists before 1980, or non-economists today, would consider critically important for the study of macroeconomics are present. So how are business cycles generated in an RBC model? Exclusively through shocks to the level of technology (if that seems strange, it’s probably even worse than you expect – stay tuned for part 4). When consumers and firms see changes in the level of technology, their optimal choices change, which causes total output, hours worked, consumption, and investment to fluctuate as well. Somewhat shockingly, when the parameters are calibrated to match the data, this simple model does a good job capturing many features of measured business cycles. The following graphs (from Uhlig 2003) demonstrate a big reason for the influence of the RBC model.

Comparison of simple RBC model (including population growth and government spending shocks) and the data. Both simulated data (solid line) and true data (dashed) have been H-P filtered. Source: Harald Uhlig (2003): “How well do we understand business cycles and growth? Examining the data with a real business cycle model.”

Looking at those graphs, you might wonder why there is anything left for macroeconomists to do. Business cycles have been solved! However, as I will argue in part 4, the perceived closeness of model and data is largely an illusion. There are, in my opinion, fundamental issues with the RBC framework that render it essentially meaningless in terms of furthering our understanding of real business cycles.

The Birth of Dynamic Stochastic General Equilibrium Models

Although many economists would point to the contribution of the RBC model in explaining business cycles on its own, most would agree that its greater significance came from the research agenda it inspired. Kydland and Prescott’s article was one of the first of what would come to be called Dynamic Stochastic General Equilibrium (DSGE) models. They are dynamic because they study how a system changes over time and stochastic because they introduce random shocks. General equilibrium refers to the fact that the agents in the model are constantly maximizing (consumers maximizing utility and firms maximizing profits) and markets always clear (prices are set such that supply and demand are equal in each market in all time periods).

Due in part to the criticisms I will outline in part 4, DSGE models have evolved from the simple RBC framework to include many of the features that were lost in the transition from Keynes to Lucas and Prescott. Much of the research agenda in the last 30 years has aimed to resurrect Keynes’s main insights in microfounded models using modern mathematical language. As a result, they have come to be known as “New Keynesian” models. Thanks to the flexibility of the DSGE setup, adding additional frictions like sticky prices and wages, government spending, and monetary policy was relatively simple and has enabled DSGE models to become sufficiently close to reality to be used as guides for policymakers. I will argue in future posts that despite this progress, even the most advanced NK models fall short both empirically and theoretically.

Links 8-30-16

Must Read: Scott Alexander on EpiPen pricing

Economists on whether Glass-Steagall could have prevented the financial crisis. Alex Tabarrok: “When Black Lives Matter calls for a restoration of the Glass-Steagall Act we know that the Act has exited the realm of policy and entered that of mythology.”

Matt Ridley says “Economic liberty is out of fashion.”

Noah Smith thinks free market ideology has “hit a wall in terms of effectiveness”

How can those two viewpoints be reconciled? I share Bryan Caplan’s confusion

Milton Friedman on Free Trade

I was going to do a post on free trade, but then I remembered that Milton Friedman is way better at explaining it than I will ever be. If you’re worried about the size of the trade deficit or Chinese workers stealing our jobs, watch this video. There are more sophisticated arguments against free trade (mostly about distributional issues), but the ones coming from politicians today are thoroughly debunked by Friedman here.

Why Nominal GDP Targeting?

Nominal GDP (in logs) and approximate 6% trend since 1982

Probably the best example of blogging having a significant influence on economic thinking is Scott Sumner’s advocacy of nominal GDP (NGDP) targeting at his blog The Money Illusion (and more recently EconLog). Scott has almost singlehandedly transformed NGDP targeting from an obscure idea into one that is widely known and increasingly supported. Before getting to Scott’s proposals, let me give a very brief history of monetary policy in the United States over the last 30 years.

From 1979 to 1982, the influence of Milton Friedman led the Federal Reserve to attempt a policy of constant monetary growth. In other words, rather than react to economic fluctuations, the central bank would simply increase the money supply at the same rate each year. Friedman’s so-called “k-percent” rule failed to promote economic stability, but the principle behind it continued to influence policy. In particular, it ushered in an era of rules rather than discretion. Under Alan Greenspan, the Fed adjusted policy to changes in inflation and output in a formulaic fashion. Studying these policies, John Taylor developed a simple mathematical equation (now called the “Taylor Rule”) that accurately predicted Fed behavior based on its responses to macro variables. The stability of the economy from 1982 to 2007 led to the period being called “the Great Moderation” and to Greenspan being dubbed “the Maestro.” Monetary policy appeared to have been solved. Of course, everybody knows what happened next.
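For reference, the rule in Taylor’s original 1993 formulation (stated here in its standard textbook form, with a 2% inflation target and a 2% equilibrium real rate) is

i_t = \pi_t + 2 + 0.5(\pi_t - 2) + 0.5(y_t - \bar{y}_t)

where i is the federal funds rate, \pi is inflation over the previous four quarters, and y - \bar{y} is the percentage gap between real output and its potential level.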

On Milton Friedman’s 90th birthday, former Federal Reserve chairman Ben Bernanke gave a speech where, on the topic of the Great Depression, he said to Friedman, “You’re right, we did it. We’re very sorry. But thanks to you, we won’t do it again.” According to Scott Sumner, however, that’s exactly what happened. Many explanations have been given for the Great Recession, but Scott believes most of the blame can be placed on the Fed for failing to keep nominal GDP on trend.

To defend his NGDP targeting proposal, Scott outlines a simple “musical chairs” model of unemployment, which he describes here and here. Essentially, the idea is that the total number of jobs in the economy can be approximated by NGDP divided by the average wage (this would hold exactly if we replaced NGDP with total labor income, but the two are highly correlated). If NGDP falls and wages are slow to adjust, the total number of jobs must fall, leaving some workers unemployed (like taking away chairs in musical chairs). Roger Farmer has demonstrated a close connection between NGDP/W and unemployment in the data, which appears to support this intuition

Source: Farmer (2012) – The stock market crash of 2008 caused the Great Recession: Theory and evidence
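In symbols, the “musical chairs” logic is just an accounting approximation:

L \approx \frac{NGDP}{W} \quad\Rightarrow\quad \%\Delta L \approx \%\Delta NGDP - \%\Delta W

so if nominal spending falls by 5% while the average wage W is stuck, employment L has to fall by roughly 5%.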

We can also think about the differences in a central bank’s response to shocks under inflation and NGDP targeting. Imagine a negative supply shock (for example an oil shock) hits the economy. In a simple AS-AD model, this shock would be expected to increase prices and reduce output. Under an inflation targeting regime, the central bank would only see the higher prices and tighten monetary policy, deepening the drop in output even more. However, since output and inflation move in opposite directions, the change in NGDP (which is just the price level times output) would be much smaller, so an NGDP-targeting central bank would respond less to a supply shock. Conversely, a negative demand shock would cause both prices and output to drop, leading to a loosening of policy under both inflation and NGDP targeting. In other words, under NGDP targeting the central bank responds strongly to demand shocks while letting the market deal with the effects of supply shocks. Since a supply shock actually reduces the productive capacity of an economy, we should not expect a central bank to be able to effectively combat supply shocks. An NGDP target ensures that it will not try to.
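A quick numerical example makes the asymmetry concrete (the numbers are made up purely for illustration). Since NGDP growth is approximately inflation plus real growth,

\%\Delta NGDP \approx \pi + g

a supply shock that pushes inflation from 2% to 4% while cutting real growth from 2% to 0% leaves NGDP growth unchanged at 4%, so an NGDP-targeting central bank stands pat while an inflation targeter tightens. A demand shock that knocks two points off both inflation and growth drops NGDP growth from 4% to 0%, and both regimes call for easing.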

Nick Rowe offers another simple defense of NGDP targeting. He shows that Canadian inflation data gives no indication of any changes occurring in 2008. Following a 2% inflation target, it looks like the central bank did everything it should to keep the economy on track

Canadian CPI

Looking at a graph of NGDP makes the recession much more obvious. Had the central bank instead been following a policy of targeting NGDP, they would have needed to be much more active to stay on target. Nick therefore calls inflation “the dog that didn’t bark.” Since inflation failed to warn us a recession was imminent, perhaps NGDP is a better indicator of monetary tightness.

Canadian NGDP

Of course, we also need to think about whether a central bank would even be able to hit its NGDP target if it decided to adopt one. Scott has a very interesting idea on the best way to stay on target. I will soon have another post detailing that proposal.

Quote of the Day 8-23-16

All ownership derives from occupation and violence. When we consider the natural components of goods, apart from the labour components they contain, and when we follow the legal title back, we must necessarily arrive at a point where this title originated in the appropriation of goods accessible to all. Before that we may encounter a forcible expropriation from a predecessor whose ownership we can in its turn trace to earlier appropriation or robbery. That all rights derive from violence, all ownership from appropriation or robbery, we may freely admit to those who oppose ownership on considerations of natural law. But this offers not the slightest proof that the abolition of ownership is necessary, advisable, or morally justified.
Ludwig von Mises, Socialism, 1922

What’s Wrong With Modern Macro? Part 2 The Death of Keynesian Economics: The Lucas Critique, Microfoundations, and Rational Expectations

Part 2 in a series of posts on modern macroeconomics. Part 1 covered Keynesian economics, which dominated macroeconomic thinking for around thirty years following World War II. This post will deal with the reasons for the demise of the Keynesian consensus and introduce some of the key components of modern macro.


The Death of Keynesian Economics

Milton Friedman – Wikimedia Commons

Although Keynes had contemporary critics (most notably Hayek; if you haven’t seen the Keynes vs Hayek rap videos, stop reading now and go watch them here and here), these criticisms generally remained outside the mainstream. However, a more powerful challenger to Keynesian economics arose in the 1950s and 60s: Milton Friedman. Keynesian theory offered policymakers a set of tools that they could use to reduce unemployment. A key empirical and theoretical result was the existence of a “Phillips Curve,” which posited a tradeoff between unemployment and inflation. Keeping unemployment under control simply meant accepting slightly higher levels of inflation.

Friedman (along with Edmund Phelps) argued that this tradeoff was an illusion. Instead, he developed the Natural Rate Hypothesis. In his own words, the natural rate of unemployment is

the level that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labour and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies, and labor availabilities, the costs of mobility, and so on.
Milton Friedman, “The Role of Monetary Policy,” 1968

If you don’t speak economics, the above essentially means that the rate of unemployment is determined by fundamental economic factors. Governments and central banks cannot do anything to influence this natural rate. Trying to exploit the Phillips curve relationship by increasing inflation would fail because eventually workers would simply factor the inflation into their decisions and restore the previous rate of employment at a higher level of prices. In the long run, printing money could do nothing to improve unemployment.
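Friedman’s argument is usually summarized with an expectations-augmented Phillips curve (a standard textbook formulation, not Friedman’s own notation):

\pi_t = \pi_t^e - a(u_t - u^*)

Unemployment u can be pushed below the natural rate u^* only while actual inflation \pi exceeds expected inflation \pi^e. Once expectations catch up, unemployment returns to u^* and all that remains is higher inflation.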

Friedman’s theories couldn’t have come at a better time. In the 1970s, the United States experienced stagflation, high levels of both inflation and unemployment, an impossibility in the Keynesian models, but easily explained by Friedman’s theory. The first blow to Keynesian economics had been dealt. It would not survive the fight that was still to come.

The Lucas Critique

While Friedman may have provided the first blow, Robert Lucas landed the knockout punch on Keynesian economics. In a 1976 article, Lucas noted that the standard economic models of the time assumed people were excessively naive. The rules that were supposed to describe their behavior were assumed to stay the same even when policy changed. Even when people knew about a policy change in advance, they were unable to use that information to improve their forecasts, ensuring that those forecasts would be wrong. In Lucas’s words, they made “correctibly incorrect” forecasts.

The Lucas Critique was devastating for the Keynesian modeling paradigm of the time. By estimating the relationships between aggregate variables based on past data, these models could not hope to capture the changes in individual actions that occur when the economy changes. In reality, people form their own theories about the economy (however simple they may be) and use those theories to form expectations about the future. Keynesian models could not allow for this possibility.

Microfoundations

At the heart of the problem with Keynesian models prior to the Lucas critique was their failure to incorporate individual decisions. In microeconomics, economic models almost always begin with a utility maximization problem for an individual or a profit maximization problem for a firm. Keynesian economics attempted to skip these features and jump straight to explaining output, inflation, and other aggregate features of the economy. The Lucas critique demonstrated that this strategy produced some dubious results.

The obvious answer to the Lucas critique was to try to explicitly build up a macroeconomic model from microeconomic fundamentals. Individual consumers and firms returned once again to macroeconomics. Part 3 will explore some specific microfounded models in a bit more detail. But first, I need to explain one other idea that is essential to Lucas’s analysis and to almost all of modern macroeconomics: rational expectations

Rational Expectations

Think about a very simple economic story where a firm has to decide how much of a good to produce before it learns the price (such a model is called a “cobweb model”). Clearly, in order to make this decision the firm needs to form expectations about what the price will be (if it expects a higher price it will want to produce more). In most models before the 1960s, firms were assumed to form these expectations by looking at the past (called adaptive expectations). The problem with this assumption is that it creates predictable forecast errors.

For example, assume that all firms simply believe that today’s price will be the same as yesterday’s. Then when yesterday’s price was high, firms produce a lot, pushing down today’s price. Tomorrow, firms expect a low price and don’t produce very much, which pushes the price back up. A smart businessman would quickly realize that these expectations are always wrong and look for a better way to predict future prices.
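Here is a minimal cobweb sketch that makes the point concrete. The demand and supply parameters are illustrative values I chose for the example, not from any particular paper; the key feature is that under naive expectations the forecast error flips sign every period, exactly the kind of predictable mistake a smart firm would learn to avoid.

```python
# A minimal cobweb-model sketch (illustrative parameter values).
# Demand: P_t = a - b * Q_t    Supply: Q_t = c + d * E[P_t]
# Under naive expectations (E[P_t] = P_{t-1}) forecast errors alternate in sign.
a, b, c, d = 10.0, 1.0, 1.0, 0.5

price = 8.0  # initial price, chosen away from equilibrium to start the cycle
for t in range(8):
    expected = price             # naive expectation: tomorrow's price = today's
    quantity = c + d * expected  # firms produce based on the expected price
    price = a - b * quantity     # market-clearing price given that production
    print(f"t={t}: expected {expected:5.2f}, actual {price:5.2f}, "
          f"error {price - expected:+.2f}")
```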

A similar analysis led John Muth to introduce the idea of rational expectations in a 1961 paper. Essentially, Muth argued that if economic theory predicted a difference between expectation and result, the people in the model should be able to do the same. As he describes, “if the prediction of the theory were substantially better than the expectations of the firms, then there would be opportunities for ‘the insider’ to profit from the knowledge.” Since only the best firms would survive, in the end expectations will be “essentially the same as the predictions of the relevant economic theory.”

Lucas took Muth’s idea and applied it to microfounded macroeconomic models. Agents in these new models were aware of the structure of the economy and therefore could not be fooled by policy changes. When a new policy is announced, these agents could anticipate the economic changes that would occur and adjust their own behavior. Microfoundations and rational expectations formed the foundation of a new kind of macroeconomics. Part 3 will discuss the development of Dynamic Stochastic General Equilibrium (DSGE) models, which have dominated macroeconomics for the last 30 years (and then in part 4 I promise I will actually get to what’s wrong with modern macro).

Links 8-21-16

“In the land of the free, where home ownership is a national dream, borrowing to buy a house is a government business for which taxpayers are on the hook.”

“If you think that there has never been a better time to be alive — that humanity has never been safer, healthier, more prosperous or less unequal — then you’re in the minority. But that is what the evidence incontrovertibly shows.”

Noah Smith criticizes heterodox macro

And then addresses some of the responses

The Solution to High Rents: Build More Houses. Who would have thought?