Interesting Paper on Inequality and Fairness

As a follow-up to my recent post on inequality, I wanted to highlight some recent research by Christina Starmans, Mark Sheskin, and Paul Bloom on fairness and inequality. Based on a survey of lab experiments and evidence from the real world, the paper argues that people don’t actually care about unequal outcomes as long as they are perceived as fair.

They highlight several studies that show that in laboratory settings people (even children) are likely to distribute resources equally. However, in many of these settings, equality and fairness are indistinguishable. Since none of the participants did anything to deserve a larger portion, participants could simply be attempting to create a fair distribution rather than an equal one. And experiments that explicitly distinguish between fairness and equality do find that people care more about the former. For example, people were not unhappy with allocations that were determined randomly even if the outcome ended up being unequal as long as everybody began with an equal opportunity. Children who were asked to allocate erasers as a reward for cleaning their rooms were more likely to give the erasers to those who did a good job.

In reality people also seem to prefer an unequal distribution of income as long as it is perceived to be fair. In surveys, while people’s perception of the true income distribution is often highly skewed, their ideal distribution is not one of perfect equality. Of course, looking at these surveys does not necessarily tell us much about what the “best” income distribution would be, but rather the one people (think they) prefer. As I argued in my last post, I think too much weight has been placed on income or wealth inequality when really all that matters are differences in people’s happiness or utility. The evidence presented here does not go that far, but it does suggest that people realize that different behavior should lead to different rewards in some cases.

One reason that I think the debate has focused mostly on income or wealth inequality rather than on fairness or another measure of inequity is due to issues with measurement. Everybody has different ideas about what is fair so it’s easier to frame the question in terms of something that can be easily reported numerically. We may want to reconsider our acceptance of those statistics as a meaningful representation of a social problem. The whole paper is well worth reading and it opens up some interesting questions about human behavior. I will have at least one more post related to inequality coming in the next week or so.

What Kind of Inequality Matters?

Thomas Piketty’s book, Capital in the Twenty-First Century, a thorough analysis of the causes and effects of inequality, recently became an international best-seller. It’s not often that a thousand-page economic treatise attracts popular attention, so clearly there’s something important to discuss here. Looking at some of the data on inequality, it’s not hard to see why many people are concerned. Here’s a chart showing the share of income held by the top 10% in the United States since 1910:

Notice where the two peaks occur – 1929 and 2009. I seem to recall something important happening in each of those years. Whether inequality was a symptom or a cause of the broader problems that led to the Great Depression and the Great Recession is an interesting question and definitely deserves scrutiny. For the purposes of this post, however, I want to address a simpler topic. Should we care about inequality on its own? And, more specifically, what kind of inequality should we care about?

For the first question, let’s do a simple thought experiment. You can choose to live in one of two societies. In Society A, everybody makes $50,000 per year no matter what their profession is. LeBron James and a janitor get paid the same amount. In Society B, average income is the same $50,000 per year, but it is now dispersed, so that some people earn less than average and some earn far more. Now assume that you are guaranteed to begin at average income (to avoid questions of risk aversion). Which society would you rather live in?

The answer to the hypothetical depends in part on whether you care about absolute or relative income. Does it matter if you are rich, or does it only matter if you are richer than others? In Society A everybody is on the same level, which might seem to be an appealing feature.

Except as soon as we start to think a bit harder, we realize that people in Society A aren’t equal at all. At least to some extent, differences in income do come from differences in effort. Some people work harder than others. Should people get paid the same regardless of effort?

Of course, this reasoning attacks a bit of a straw man. Hardly anybody would argue for full equality of income. But that doesn’t mean that there aren’t some situations where reducing income inequality could be helpful. I don’t believe in free will, which means that I think that where you are today is determined by circumstances you had no control over. But even with free will, it’s impossible to deny that some people are luckier than others. Some people are born into families with higher incomes or better connections. Some people are just smarter, or more talented. Two people can put in the same amount of effort and come out with wildly different outcomes. Isn’t there some justification for correcting these kinds of inequalities?

Now we need to bring in the second question: what kind of inequality matters? To this point, I have focused entirely on income inequality, but money is only as good as what you can buy with it. Somebody who earns $1,000,000 per year but saves $950,000 is no better off than someone who earns $50,000 per year (until they start spending those savings of course). We also need to consider a dynamic component to inequality. The chart above shows only a snapshot of inequality at one point in time, but there is large variation in earnings over a person’s lifetime. So a better measure of the kind of inequality that actually matters would be total lifetime consumption inequality (due to measurement difficulties, the question of whether consumption and income inequality move together is still under debate – see a nice survey here).
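As a rough illustration of why a snapshot can mislead, here is a minimal sketch (the age-earnings profile and all numbers are invented for illustration): if everyone follows the exact same earnings path over their life, a cross-section of people at different ages still shows substantial income inequality, even though lifetime incomes are identical.

```python
import numpy as np

def gini(x):
    """Gini coefficient of an array of incomes (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

# A made-up age-earnings profile shared by everyone: pay rises with experience.
def income_at_age(a):
    return 20_000 + 1_500 * (a - 25)

rng = np.random.default_rng(0)
ages = rng.integers(25, 65, size=10_000)  # a population of working ages

snapshot = np.array([income_at_age(a) for a in ages])          # cross-section
lifetime = np.full(len(ages), sum(income_at_age(a) for a in range(25, 65)))

print(gini(snapshot))   # clearly positive: measured inequality in the snapshot
print(gini(lifetime))   # 0: everyone earns the same over a lifetime
```

The snapshot Gini here is purely an artifact of observing people at different points on the same earnings path, which is the sense in which lifetime measures can matter more than point-in-time ones.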

But we’re still not quite there. Why do we consume anything? Presumably because it makes us happy, or, in the words of an economist, because it gives us utility. Simply giving people more stuff might not actually help them at all unless it’s stuff they actually want. So shouldn’t we actually care about total lifetime utility? And as soon as we jump into the world of utility, the problem gets much more difficult.

Consider an extremely wealthy person. Incredibly talented and smart, he excelled in school, founded a business, and became one of the most successful CEOs in the world. He has a beautiful house, ten expensive cars, flat-screen TVs, season tickets to the Patriots. He can buy anything anyone could ever want. Except he works all the time, hates his job, and has no time for his family or friends. Despite his money, despite his consumption, he is miserable.

Another individual earns far less. She isn’t poor, but she earns right around median income. She doesn’t have a luxurious life, but she can afford the basics. More importantly, she’s happy. She has a loving family, great friends, a job she likes. Would she be happier with more income? Probably. But she doesn’t need luxuries to live a good life.

How do we make this society more equal? Simply looking at income would suggest a transfer from the wealthy man to the average income woman. This transfer would of course reduce income inequality, but it would increase utility inequality. The woman is already pretty happy and the man is not. Taking money from him and giving it to her would only increase the happiness gap. Is this outcome desirable? I don’t think so.
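To put rough numbers on this, here is a toy calculation. The utility function and every figure in it are invented purely for illustration: log income plus a non-monetary component standing in for family, friends, and job satisfaction.

```python
import math

# Hypothetical utility: log(income) plus a non-monetary component.
# All numbers are invented for illustration.
def utility(income, non_monetary):
    return math.log(income) + non_monetary

u_man_before = utility(1_000_000, -5.0)   # rich but miserable
u_woman_before = utility(50_000, 3.0)     # median income, happy

# Transfer $500,000 from the man to the woman:
u_man_after = utility(500_000, -5.0)
u_woman_after = utility(550_000, 3.0)

gap_before = u_woman_before - u_man_before
gap_after = u_woman_after - u_man_after
print(gap_before, gap_after)  # the utility gap widens after the transfer
```

Because the transfer takes income from the person with low utility and gives it to the person with high utility, it narrows the income gap while widening the happiness gap, which is exactly the tension described above.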

Then maybe we should try to minimize utility inequality. But how? Taking money from the woman would probably reduce her utility, and eventually it would fall as low as the man’s. But giving that money to the man would do little to raise his utility, unless he takes comfort in the fact that others are as miserable as he is. The woman’s happiness comes from pieces of her life that can’t be transferred to others. Despite being born with all the skills necessary to succeed, the man would likely view the woman as the more fortunate one.

In general, trying to equalize utility leads to some strange implications. Let me give a few more examples.

Two people work in the exact same job and get paid the same wage. Seems perfectly fair. But what if one of them enjoys working and the other hates it? In dollars per hour, they are equal. In utility per hour, one receives more than the other. Reducing utility inequality would require that people who enjoy their jobs be paid less for the same work.

Some people prefer living in cities while others would prefer to live in smaller towns. Houses in cities are usually much more expensive, which means to achieve the same utility, a city lover will have to pay far more. In this case, income equality greatly benefits people who hate cities. Utility equality would suggest transfers from people who love rural areas to those who love cities.

Consumption equality could also generate large utility inequality. If one person places a lot of meaning on material goods while another values other aspects of their life, they would need different levels of consumption in order to achieve the same utility. Should we give more to the materialist than the ascetic simply because giving to the latter wouldn’t help them anyway?

And even these examples ignore the largest problem with trying to achieve equality in utility: it’s difficult to measure and impossible to compare across individuals. I have trouble defining my own preferences and determining what makes me happiest; I certainly don’t trust others to do that for me.

So utility equality is probably not an option even if it were desirable. But income equality almost certainly worsens the problem of utility inequality. The people who make a lot of money are much more likely to also be people who place a high value on money. Those who earn less are more likely to enjoy a simpler life. In fact, there is little evidence that the rich are any happier than the rest of us. Taking their money makes them even worse off while helping those who are already pretty happy despite their relatively low income. The happy get happier while the miserable get more miserable.

Notice that I have deliberately avoided using examples with truly poor people. I can certainly see an argument for redistributing income to the poorest. Nobody should have to live at subsistence levels if they are willing to work. But being concerned about poverty and being concerned about inequality are not the same. It is possible for a society to have zero poor people and still be incredibly unequal and also possible to be almost perfectly equal with everybody poor (as it was for most of the history of human existence).

Have we gotten any closer to answering the original question? What kind of inequality should we care about? If you’ve made it this far, it should be clear that there isn’t an easy answer. We often use the term “less fortunate” as a euphemism for the poor, framing fortune almost exclusively in monetary terms. We view income as if it came from a lottery and then aim to use redistribution to correct for discrepancies. Why is that? Aren’t people who can be happy despite low income really the most fortunate? Isn’t money just one of many factors that matter for a person’s happiness? And aren’t many of these other factors difficult to measure and even more difficult to redistribute?

If we answer yes to the above questions, reducing the kind of inequality we care about becomes a much harder task. Can we really correct the deeper inequalities that arise due to people’s preferences and talents – some of which will lead to higher incomes and some not? Or should we accept that inequality is an essential part of society, accept that treating everyone equally necessarily produces inequality in outcomes, that differences in wealth don’t necessarily lead to differences in happiness, and that correcting differences in happiness is almost impossible?

What’s Wrong With Modern Macro? Part 13: No Other Game in Town

Part 13 in a series of posts on modern macroeconomics. Previous posts in this series have pointed out many problems with DSGE models. This post aims to show that these problems have been either dismissed (wrongly, in my opinion) or ignored by most of the profession.


“If you have an interesting and coherent story to tell, you can tell it in a DSGE model. If you cannot, your story is incoherent.”

The above quote comes from a testimony given by prominent Minnesota macroeconomist V.V. Chari to the House of Representatives in 2010 as they attempted to determine how economists could have missed an event as large as the Great Recession. Chari argues that although macroeconomics does have room to improve, it has made substantial progress in the last 30 years and there is nothing fundamentally wrong with its current path.

Of course, not everybody has been so kind to macroeconomics. Even before the crisis, prominent macroeconomists had begun voicing some concerns about the current state of macroeconomic research. Here’s Robert Solow in 2003:

The original impulse to look for better or more explicit micro foundations was probably reasonable. It overlooked the fact that macroeconomics as practiced by Keynes and Pigou was full of informal microfoundations. (I mention Pigou to disabuse everyone of the notion that this is some specifically Keynesian thing.) Generalizations about aggregative consumption-saving patterns, investment patterns, money-holding patterns were always rationalized by plausible statements about individual–and, to some extent, market–behavior. But some formalization of the connection was a good idea. What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages.

How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up?
Solow (2003) – Dumb and Dumber in Macroeconomics

Olivier Blanchard in 2008:

There is, however, such a thing as too much convergence. To caricature, but only slightly: A macroeconomic article today often follows strict, haiku-like, rules: It starts from a general equilibrium structure, in which individuals maximize the expected present value of utility, firms maximize their value, and markets clear. Then, it introduces a twist, be it an imperfection or the closing of a particular set of markets, and works out the general equilibrium implications. It then performs a numerical simulation, based on calibration, showing that the model performs well. It ends with a welfare assessment.

Such articles can be great, and the best ones indeed are. But, more often than not, they suffer from some of the flaws I just discussed in the context of DSGEs: Introduction of an additional ingredient in a benchmark model already loaded with questionable assumptions. And little or no independent validation for the added ingredient.
Olivier Blanchard (2008) – The State of Macro

And more recently, the vitriolic critique of Paul Romer:

Would you want your child to be treated by a doctor who is more committed to his friend the anti-vaxer and his other friend the homeopath than to medical science? If not, why should you expect that people who want answers will keep paying attention to economists after they learn that we are more committed to friends than facts.
Romer (2016) – The Trouble with Macroeconomics

These aren’t empty critiques by no-name graduate student bloggers. Robert Solow and Paul Romer are two of the biggest names in growth theory of the last 50 years. Olivier Blanchard was the chief economist at the IMF. One would expect these criticisms to push the profession at least to seriously consider re-evaluating its methods. Judging by recent trends in the literature, as well as my own experience doing macroeconomic research, it hasn’t. Not even close. Instead, the profession seems to have doubled down on the DSGE paradigm, falling much closer to Chari’s view that “there is no other game in town.”

But it’s even worse than that. Taken at its broadest, sticking to DSGE models is not too restrictive. Dynamic simply means that the models have a forward-looking component, which is obviously an important feature to include in a macroeconomic model. Stochastic means there should be some randomness, which again is probably a useful feature (although I do think deterministic models can be helpful as well – more on this later). General equilibrium is a little harder to swallow, but it still provides a good measure of flexibility.

Even within the DSGE framework, however, straying too far from the accepted doctrines of macroeconomics is out of the question. Want to write a paper that deviates from rational expectations? You better have a really good reason. Never mind that survey and experimental evidence shows large differences in how people form expectations, or that it is questionable to assume that everybody in the model knows the model and takes it as truth, or that John Muth, the founder of rational expectations, later said:

It is a little surprising that serious alternatives to rational expectations have never been proposed. My original paper was largely a reaction against very naive expectations hypotheses juxtaposed with highly rational decision-making behavior and seems to have been rather widely misinterpreted.
Quoted in Hoover (2013) – Rational Expectations: Retrospect and Prospect

Using an incredibly strong assumption like rational expectations is accepted without question. Any deviation requires explanation. Why? As far as I can tell, it’s just Kevin Malone economics: that’s the way it has always been done and nobody has any incentive to change it. And so researchers all get funneled into making tiny changes to existing frameworks – anything truly new is actively discouraged.

Now, of course, there are reasons to be wary of change. It would be a waste to completely ignore the path that brought us to this point and, despite their flaws, I’m sure there have been some important insights generated by the DSGE research program (although to be honest I’m having trouble thinking of any). Maybe it really is the best we can do. But wouldn’t it at least be worth devoting some time to alternatives? I would estimate that 99.99% of theoretical business cycle papers published in top 5 journals are based around DSGE models. Chari wasn’t exaggerating when he said there is no other game in town.

Wouldn’t it be nice if there were? Is there any reason other than tradition that every single macroeconomics paper needs to follow the exact same structure, to draw from the same restrictive set of assumptions? If even 10% of the time and resources devoted to tweaking existing models were instead spent on coming up with entirely new ways of doing macroeconomic research, I have no doubt that a viable alternative could be found. And there already are some (again, more on this later), but they exist only on the fringes of the profession. I think it’s about time for that to change.

What’s Wrong With Modern Macro? Part 12: Models and Theories

Part 12 in a series of posts on modern macroeconomics. This post draws inspiration from Axel Leijonhufvud’s distinction between models and theories in order to argue that the current procedure of performing macroeconomic research is flawed.


In a short 1997 article, Axel Leijonhufvud gives an interesting argument that economists too often fail to differentiate between models and theories. In his words,

For many years now, ‘model’ and ‘theory’ have been widely used as interchangeable terms in the profession. I want to plead the case that there is a useful distinction to be made. I propose to conceive of economic ‘theories’ as sets of beliefs about the economy and how it functions. They refer to the ‘real world’ – that curious expression that economists use when it occurs to them that there is one. ‘Models’ are formal but partial representations of theories. A model never encompasses the entire theory to which it refers.
Leijonhufvud (1997) – Models and Theories

The whole article is worth reading and it highlights a major problem with macroeconomic research: we don’t have a good way to determine when a theory is wrong. Standard economic theory has placed a large emphasis on models. Focusing on mathematical models ensures that assumptions are laid out clearly and that the implications of those assumptions are internally consistent. However, this discipline also constrains models to be relatively simple. I might have some grand theory of how the economy works, but even if my theory is entirely accurate, there is little chance that it can be converted into a tractable set of mathematical equations.

The result of this discrepancy between model and theory makes it very difficult to judge the success or failure of a theory based on the success or failure of a model. As discussed in an earlier post, we begin from the premise that all models are wrong and therefore it shouldn’t be at all surprising when a model fails to match some feature of reality. When a model is disproven along some margin, there are two paths a researcher can take: reject the theory or modify the assumptions of the model to better fit the theory. Again quoting Leijonhufvud,

When a model fails, the conclusion nearest to hand is that some simplifying assumption or choice of empirical proxy-variable is to blame, not that one’s beliefs about the way the world works are wrong. So one looks around for a modified model that will be a better vehicle for the same theory.
Leijonhufvud (1997) – Models and Theories

A good example of this phenomenon comes from looking at the history of Keynesian theory over the last 50 years. When the Lucas critique essentially dismantled the Keynesian research program in the 1980s, economists who believed in Keynesian insights could have rejected that theory of the world. Instead, they adopted neoclassical models but tweaked them in such a way that produced results that were still consistent with Keynesian theory. New Keynesian models today include rational expectations and microfoundations. Many hours have been spent converting Keynes’s theory into fancy mathematical language, adding (sometimes questionable) frictions into the neoclassical framework to coax it into giving Keynesian results. But how does that help us understand the world? Have any of these advances increased our understanding of economics beyond what Keynes already knew 80 years ago? I have my doubts.

Macroeconomic research today works on the assumption that any good theory can be written as a mathematical model. Within that class of theories, the gold standard is a model tractable enough to admit an analytical solution. Restricting the set of acceptable models in this way probably helps throw out a lot of bad theories. What I worry is that it also precludes many good ones. By imposing such tight constraints on what a good macroeconomic theory is supposed to look like, we also severely constrain our vision of a theoretical economy. Almost every macroeconomic paper draws from a tiny selection of functional forms for preferences and production, because any deviation quickly adds too much complexity and leads to systems of equations that are impossible to solve.

But besides restricting the set of theories, speaking in models also often makes it difficult to understand the broader logic that drove the creation of the model. As Leijonhufvud puts it, “formalism in economics makes it possible to know precisely what someone is saying. But it often leaves us in doubt about what exactly he or she is talking about.” Macroeconomics has become very good at laying out a set of assumptions and deriving the implications of those assumptions. Where it often fails is in describing how we can compare that simplified world to the complex one we live in. What is the theory that underlies the assumptions that create an economic model? Where do the model and the theory differ and how might that affect the results? These questions seem to me to be incredibly important for conducting sound economic research. They are almost never asked.

What’s Wrong With Modern Macro? Part 11: Building on a Broken Foundation

Part 11 in a series of posts on modern macroeconomics. The discussion to this point has focused on the early contributions to DSGE modeling by Kydland and Prescott. Obviously, in the last 30 years, economic research has advanced beyond their work. In this post I will argue that it has advanced in the wrong direction.


In Part 3 of this series I outlined the Real Business Cycle (RBC) model that began macroeconomics on its current path. The rest of the posts in this series have offered a variety of reasons why the RBC framework fails to provide an adequate model of the economy. An easy response to this criticism is that it was never intended to be a final answer to macroeconomic modeling. Even if Kydland and Prescott’s model misses important features of a real economy, doesn’t it still provide a foundation to build upon? Since the 1980s, the primary agenda of macroeconomic research has been to build on this base, to relax some of the more unrealistic assumptions that drove the original RBC model and to add new assumptions to better match economic patterns seen in the data.

New Keynesian Models

Part of the appeal of the RBC model was its simplicity. Driving business cycles was a single shock: exogenous changes in TFP attributed to changes in technology. However, the real world is anything but simple. In many ways, the advances in the methods used by macroeconomists in the 1980s came only at the expense of a rich understanding of the complex interactions of a market economy. Compared to Keynes’s analysis 50 years earlier, the DSGE models of the 1980s seem a bit limited.

What Keynes lacked was clarity. Eighty years after the publication of the General Theory, economists still debate “what Keynes really meant.” For all of their faults, DSGE models at the very least guarantee clear exposition (assuming you understand the math) and internal consistency. New Keynesian (NK) models attempt to combine these advantages with some of the insights of Keynes.

The simplest distinction between an RBC model and an NK model is the assumptions made about the flexibility of prices. In early DSGE models, prices (including wages) were assumed to be fully flexible, constantly adjusting in order to ensure that all markets cleared. The problem (or benefit depending on your point of view) with perfectly flexible prices is that it makes it difficult for policy to have any impact. Combining rational expectations with flexible prices means monetary shocks cannot have any real effects. Prices and wages simply adjust immediately to return to the same allocation of real output.

To reintroduce a role for policy, New Keynesians instead assume that prices are sticky. Now when a monetary shock hits the economy, firms are unable to change their prices, so they respond to the higher demand by increasing real output. As research in the New Keynesian program developed, more and more was added to the simple RBC base. The first RBC model was entirely frictionless and included only one shock (to TFP). It could be summarized in just a few equations. A more recent macro model, the widely cited Smets and Wouters (2007), adds frictions like investment adjustment costs, variable capital utilization, habit formation, and inflation indexing, in addition to 7 structural shocks, leading to dozens of equilibrium equations.

Building on a Broken Foundation

New Keynesian economists have their heart in the right place. It’s absolutely true that the RBC model produces results at odds with much of reality. Attempting to move the model closer to the data is certainly an admirable goal. But, as I have argued in the first 10 posts of this series, the RBC model fails not so much because it abstracts from the features we see in real economies, but because the assumptions it makes are not well suited to explain economic interactions in the first place.

New Keynesian models do almost nothing to address the issues I have raised. They still rely on aggregating firms and consumers into representative entities that offer only the illusion of microfoundations rather than truly deriving aggregate quantities from realistic human behavior. They still assume rational expectations, meaning every agent believes that the New Keynesian model holds and knows that every other agent also believes it holds when they make their decisions. And they still make quantitative policy proposals by assuming a planner that knows the entire economic environment.

One potential improvement of NK models over their RBC counterparts is less reliance on (potentially misspecified) technology shocks as the primary driver of business cycle fluctuations. The “improvement,” however, comes only by introducing other equally suspect shocks. A paper by V.V. Chari, Patrick Kehoe, and Ellen McGrattan argues that four of the “structural shocks” in the Smets-Wouters setup do not have a clear interpretation. In other words, although the shocks can help the model match the data, the reason why they are able to do so is not entirely clear. For example, one of the shocks in the model is a wage markup over the marginal productivity of a worker. The authors argue that this could be caused either by increased bargaining power (through unions, etc.), which would call for government intervention to break up unions, or through an increased preference for leisure, which is an efficient outcome and requires no policy. The Smets-Wouters model can say nothing about which interpretation is more accurate and is therefore unhelpful in prescribing policy to improve the economy.

Questionable Microfoundations

I mentioned above that the main goal of the New Keynesian model was to revisit the features Keynes talked about decades ago using modern methods. In today’s macro landscape, microfoundations have become essential. Keynes spoke in terms of broad aggregates, but without an understanding of how those aggregates are derived from optimizing behavior at a micro level, they are vulnerable to the Lucas critique. But although the NK model attempts to put in more realistic microfoundations, many of its assumptions seem at odds with the ways in which individuals actually act.

Take the assumption of sticky prices. It’s probably a valid assumption to include in an economic model. With no frictions, an optimizing firm would want to change its price every time new economic data became available. Obviously that doesn’t happen: many goods stay at a constant price for months or even years. So of course, if we want to match the data, we are going to need some reason for firms to hold prices constant. The most common way to achieve this result is called “Calvo pricing” in honor of a paper by Guillermo Calvo. And how do we ensure that some prices in the economy remain sticky? We simply assign each firm some probability that it will be unable to change its price. In other words, we assume exactly the result we want to get.
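A minimal simulation (parameter values are arbitrary) makes the assumption concrete: if each period a firm is allowed to reset its price with constant probability 1 − θ, the average time between price changes is 1/(1 − θ), regardless of anything actually happening in the economy.

```python
import random

def average_price_spell(theta=0.75, n_firms=5_000, periods=400, seed=1):
    """Simulate Calvo pricing: each period a firm gets to reset its price
    with probability 1 - theta. Returns the mean completed spell length
    (number of periods between consecutive resets)."""
    rng = random.Random(seed)
    spells = []
    for _ in range(n_firms):
        length = 0
        for _ in range(periods):
            length += 1
            if rng.random() < 1 - theta:  # the reset opportunity arrives
                spells.append(length)
                length = 0                 # start a new spell
    return sum(spells) / len(spells)

# Theory: spell lengths are geometric with mean 1 / (1 - theta),
# so theta = 0.75 implies prices last about 4 periods on average.
print(average_price_spell())
```

Note what the exercise shows: the duration of price stickiness is pinned down entirely by the assumed parameter θ, not by any decision the firm makes, which is precisely the "assuming the result" problem described above.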

Of course, economists have recognized that Calvo pricing isn’t really a satisfactory final answer, so they have worked to microfound Calvo’s idea through an optimization problem. The most common alternative, in the spirit of Rotemberg (1982), is the menu cost model. In this model of price setting, a firm must pay some cost every time it changes its prices. This cost ensures that a firm will only change its price when it is especially profitable to do so; small price changes every day no longer make sense. The term menu cost comes from the example of a restaurant needing to print new menus every time it changes one of its prices, but it can be interpreted broadly to encompass any cost a firm might incur by changing a price: alienating consumers who dislike price changes, upgrading computer systems, renegotiating with suppliers, or triggering competitive games between firms.
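Here is a sketch of the behavior a fixed menu cost produces, with made-up numbers and a deliberately simple rule of thumb: the firm tolerates small gaps between its price and the optimal price, and adjusts only in occasional discrete jumps once the gap gets costly enough.

```python
import math

def price_path(desired, menu_cost=0.5, loss_weight=1.0):
    """Toy menu-cost pricing rule. The firm's per-period loss from being
    away from its optimal (log) price p* is loss_weight * gap**2; it pays
    `menu_cost` to reset, so (as a simple heuristic) it adjusts only when
    the one-period loss exceeds the menu cost."""
    band = math.sqrt(menu_cost / loss_weight)  # half-width of the inaction band
    p = desired[0]
    path = []
    for p_star in desired:
        if abs(p - p_star) > band:  # gap too costly: pay the cost, reset
            p = p_star
        path.append(p)
    return path

# Desired price drifts upward (think trend inflation); the actual price
# moves in a few discrete steps instead of 99 tiny adjustments.
desired = [0.02 * t for t in range(100)]
path = price_path(desired)
n_changes = sum(1 for a, b in zip(path, path[1:]) if a != b)
print(n_changes)
```

Even this toy version illustrates the criticism in the next paragraph: the "menu cost" parameter is a single number standing in for many very different real-world frictions, and nothing in the model distinguishes between them.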

But by wrapping all of these into one “menu cost,” am I really describing a firm’s problem? A “true” microfoundation would explicitly model each of these features at the firm level, with different speeds of price adjustment and different reasons for firms to hold prices steady. What do we gain by adding optimization to our models if that optimization has little to do with the problem a real firm would face? Are we any better off with “fake” microfoundations than we were with statistical aggregates? I can’t help but think that the “microfoundations” in modern macro models are simply fancy ways of finagling optimization problems until we get the result we want. Interesting mathematical exercise? Maybe. Improving our understanding of the economy? Unclear.

Other additions to the NK model that purport to move the model closer to reality are also suspect. As Olivier Blanchard describes in his 2008 essay, “The State of Macro:”

Reconciling the theory with the data has led to a lot of unconvincing reverse engineering. External habit formation—that is a specification of utility where utility depends not on consumption, but on consumption relative to lagged aggregate consumption—has been introduced to explain the slow adjustment of consumption. Convex costs of changing investment, rather than the more standard and more plausible convex costs of investment, have been introduced to explain the rich dynamics of investment. Backward indexation of prices, an assumption which, as far as I know, is simply factually wrong, has been introduced to explain the dynamics of inflation. And, because, once they are introduced, these assumptions can then be blamed on others, they have often become standard, passed on from model to model with little discussion.

In other words, we want to move the model closer to the data, but we do so by offering features that bear little resemblance to actual human behavior. And if the models we write do not actually describe individual behavior at a micro level, can we really still call them “microfounded”?

And They Still Don’t Match the Data

In Part 10 of this series, I discussed the idea that we might not want our models to match the data. That view is not shared by most of the rest of the profession, however, so let’s judge these models on their own terms. Maybe the microfoundations are unrealistic and the shocks misspecified, but, as Friedman argued, who cares about assumptions as long as we get a decent result? New Keynesian models also fail in this regard.

Forecasting in general is very difficult for any economic model, but one would expect that after 40 years of toying with these models, getting closer and closer to an accurate explanation of macroeconomic phenomena, we would be able to make somewhat decent predictions. What good is explaining the past if it can’t help inform our understanding of the future? Here’s what the forecasts of GDP growth look like using a fully featured Smets-Wouters NK model:

Source: Gurkaynak et al (2013)

Forecast horizons are quarterly and we want to focus on the DSGE prediction (light blue) against the data (red). One quarter ahead, the model actually does a decent job (probably because of the persistence of growth over time), but its predictive power is quickly lost as we expand the horizon. The table below shows how poor this fit actually is.

Source: Edge and Gurkaynak (2010)

The R^2 values, which represent the amount of variation in the data accounted for by the model, are tiny (a value of 1 would be a perfect match to the data). Even more concerning, the authors of the paper compare a DSGE forecast to other forecasting methods and find it cannot improve on a reduced form VAR forecast and is barely an improvement over a random walk forecast that assumes that all fluctuations are completely unpredictable. Forty years of progress in the DSGE research program and the best we can do is add unrealistic frictions and make poor predictions. In the essay linked above, Blanchard concludes “the state of macro is good.” I can’t imagine what makes him believe that.
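To make the comparison concrete, here is a sketch of how an out-of-sample forecast R^2 and a random-walk benchmark are typically computed. The growth numbers below are invented for illustration; they are not the actual series from Edge and Gurkaynak’s paper.

```python
def forecast_r2(actual, predicted):
    """Forecast R^2: share of the variance in the data explained by the
    forecast.  1 is a perfect fit; values near 0 mean the forecast does
    no better than simply predicting the sample mean."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def random_walk_forecast(series):
    """A random-walk forecast of growth just repeats last period's value."""
    return series[:-1]  # the prediction for t is the observation at t-1

growth = [2.1, 1.8, 2.5, 0.3, -1.2, 2.0, 2.4]       # hypothetical data
rw_r2 = forecast_r2(growth[1:], random_walk_forecast(growth))
```

When a DSGE model’s forecast R^2 cannot beat `rw_r2` from a rule this naive, it is hard to argue the model has captured much of the economy’s dynamics.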

Study Finds That Nobody Changes Their Mind After Reading Fake News

You'll Never Believe What Trump Said About It

The title, in case you didn’t already guess, is fake news. There was no study. But think about your reaction when you read it. Raise your hand if you said “wait a minute, I always thought fake news was a huge deal but I guess this study proved me wrong. I’ll just change my mind without thinking about it at all.” Anyone? Yeah I didn’t think so. And if you aren’t convinced by a headline on a reputable publication such as this one (OK maybe not so much), are you really buying the fake headlines that the Pope backed Trump or that Hillary actually didn’t win the popular vote?

Recently there has been an uproar surrounding these fake headlines. Germany wants Facebook to pay $500,000 for every fake news story that shows up. California (of course) wants to pass a law that will make sure every high school teaches its students how to spot fake news stories. I wish those stories were themselves fake news, but they appear to be all too real.

Now there probably are some people who do read these fake headlines and don’t do their research. Maybe they’ll store it somewhere in the back of their mind and use it as evidence to support their positions in debates with their friends. But I suspect that the only people who believe a fake headline are ones who were already inclined to believe it before they read it. No study has been done, but I’ll make the claim anyway: Nobody changes their mind because of fake news.

(One qualification to the above point is that it may break down if real news were censored. Here I am thinking about a case where the government restricts the media so that propaganda becomes the only source of information. Obviously that would be a major problem)

Perhaps more concerning is that people also don’t seem to change their mind because of real news either. They don’t let the facts guide their positions, but instead seek out the facts that support the positions they already held. Is believing a fake news story any worse than only believing the stories that confirm your preconceived inclinations?

In other words, the problem is not fake news. The problem is confirmation bias. Everyone’s guilty of it. I certainly am. How could you not be? With the internet at your fingertips, evidence supporting nearly any argument is freely available. And I don’t just mean op-eds or random blog posts. Even finding academic research to support almost anything has become incredibly easy.

Let’s say you want to take a stand on whether the government should provide stimulus to get out of a recession. Is government spending an effective way to restore growth? You want to let the facts guide you so you turn to the empirical literature. Maybe you start by looking at the work of Robert Barro, a Harvard scholar who has dedicated a significant portion of his research to the size of the fiscal multiplier. Based on his findings, he has argued that using government spending to combat a recession is “voodoo economics.” But then you see that Christina Romer, an equally respected economist, is much more optimistic about the effects of government spending. And then you realize that you could pick just about any number for the spending multiplier and find some paper that supports it.

So you’re left with two options. You can either spend a lifetime digging into these dense academic papers, learning the methods they use, weighing the pros and cons of each of their empirical strategies, and coming to a well-reasoned conclusion about which seems the most likely to be accurate. Or you can fall back on ideology. If you’re conservative, you share Barro’s findings all over your Facebook feed. Your conservative friends see the headline and think “I knew it all along, those Obama deficits were no good,” while the liberals come along and say, “You believe Barro? His findings have been debunked. The stimulus saved the economy.” And your noble fact finding mission ends in people digging in their heels even further.

That’s just one small topic in one field. There’s simply no way to have a qualified, fact-driven opinion on every topic. To take a position, you need to have a frame to view the world through. You need to be biased. And this reality means that it takes very little to convince us of things that we already want to believe. Changing your mind, even in the face of what could be considered contradictory evidence, becomes incredibly hard.

I don’t have a solution, but I do have a suggestion. Stop pretending to be so smart. On every issue, no matter what you believe, you’re very likely to either be on the wrong side or have a bad argument for being on the right side. What do the facts say, you ask? It would only be a slight exaggeration to say that they can show pretty much anything you want. I’ve spent most of my time the last 5 or so years trying to learn economics. Above all else, I’ve learned two things in that time. The first is that I’m pretty confident I have no idea how the economy works. The second is something I am even more confident about: you don’t know how it works either.

Please Don’t Audit the Fed

Source: Wikimedia Commons

Rand Paul, following in his father’s footsteps, has re-introduced a plan to audit the Fed. Trump supports the plan and even Bernie Sanders voted for it the last time the bill was put up before Congress. I have no idea why. There might be an argument for ending the Fed entirely. Larry White gives a good summary of the argument in this video. I’ve written about the American Free Banking System, which worked reasonably well in the absence of a central bank (although the evidence is mixed). And maybe ending the Fed is the ultimate goal of Fed audit supporters and this bill is just a symbolic victory. But it really does essentially nothing useful and could potentially have detrimental effects.

In the press release linked above, Thomas Massie says “Behind closed doors, the Fed crafts monetary policy that will continue to devalue our currency, slow economic growth, and make life harder for the poor and middle class. It is time to force the Federal Reserve to operate by the same standards of transparency and accountability to the taxpayers that we should demand of all government agencies.” Setting aside the fact that there is no serious economic analysis I know of that says the Fed makes life harder for the poor and middle class, this statement is complete nonsense.

Does the Fed need to be more transparent? I have a hard time seeing how it possibly could be. The Fed already posts on its website more information than anyone other than an academic economist could possibly want to know. Transcripts from their meetings, their economic forecasts, justifications for interest rate changes – it’s all there in broad daylight for anybody to read. As David Wessel points out in an excellent Q&A on auditing the Fed, the Government Accountability Office (GAO) already knows everything important about most of the assets held by the Fed.

The only possible change that could come as a result of auditing the Fed is more influence by Congress over Federal Reserve decisions. That’s terrifying. Whatever your opinion of the Fed, it’s impossible to deny that it’s run by incredibly smart people who have dedicated their lives to understanding monetary policy. That doesn’t make them infallible. I’m all for taking power away from experts, for decentralizing and allowing markets to control money. But if we’re going to allow a group of individuals to decide the policy, at least let them be people who have some idea what they’re talking about. You might not love Janet Yellen, or Ben Bernanke, or Alan Greenspan, but I can’t imagine anyone would prefer monetary policy to be run by Congress. Think about the arguments over raising the debt ceiling. Do we want that every time the Fed tries to make a decision? I certainly don’t.

That’s not to say monetary policy is beyond criticism. The Fed has consistently come in below its 2% inflation target for about 10 years now, even though unemployment was far from the natural rate for much of that time. Maybe an NGDP target would be an improvement over the current dual mandate. And maybe we don’t need a central bank at all. I’m not opposed to monetary reform. But I can’t get behind a bill that only appears to make conducting monetary policy more political.

What’s Wrong With Modern Macro? Part 10: All Models are Wrong, Except When We Pretend They Are Right

Part 10 in a series of posts on modern macroeconomics. This post deals with a common defense of macroeconomic models that “all models are wrong.” In particular, I argue that attempts to match the model to the data contradict this principle.


In my discussion of rational expectations I talked about Milton Friedman’s defense of some of the more dubious assumptions of economic models. His argument was that even though real people probably don’t really think the way the agents in our economic models do, that doesn’t mean the model is a bad explanation of their behavior.

Friedman’s argument ties into a broader point about the realism of an economic model. We don’t necessarily want our model to perfectly match every feature that we see in reality. Modeling the complexity we see in real economies is a titanic task. In order to understand, we need to simplify.

Early DSGE models embraced this sentiment. Kydland and Prescott give a summary of their preferred procedure for doing macroeconomic research in a 1991 paper, “The Econometrics of the General Equilibrium Approach to Business Cycles.” They outline five steps. First, a clearly defined research question must be chosen. Given this question, the economist chooses a model economy that is well suited to answering it. Next, the parameters of the model are calibrated according to some criteria (more on this below). Using the model, the researcher can conduct quantitative experiments. Finally, results are reported and discussed.

Throughout their discussion of this procedure, Kydland and Prescott are careful to emphasize that the true model will always remain out of reach, that “all model economies are abstractions and are by definition false.” And if all models are wrong, there is no reason we would expect any model to be able to match all features of the data. An implication of this fact is that we shouldn’t necessarily choose parameters of the model that give us the closest match between model and data. The alternative, which they offered alongside their introduction to the RBC framework, is calibration.

What is calibration? Unfortunately a precise definition doesn’t appear to exist and its use in academic papers has become somewhat slippery (and many times ends up meaning “I chose these parameters because another paper used the same ones”). But in reading Kydland and Prescott’s explanation, the essential idea is to find values for your parameters from data that is not directly related to the question at hand. In their words, “searching within some parametric class of economies for the one that best fits some set of aggregate time series makes little sense.” Instead, they attempt to choose parameters based on other data. They offer the example of the elasticity of substitution between labor and capital, or between labor and leisure. Both of these parameters can be obtained by looking at studies of human behavior. By measuring how individuals and firms respond to changes in real wages, we can get estimates for these values before looking at the results of the model.
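A small example may help show how calibrated parameters discipline a model. In the textbook growth model, once the capital share, the discount factor, and the depreciation rate are pinned down from outside evidence, the steady-state capital stock follows from the Euler equation. The specific values below (a capital share of 0.36, a quarterly discount factor of 0.99, depreciation of 2.5%) are common illustrative choices, not numbers taken from Kydland and Prescott’s paper.

```python
def steady_state_capital(alpha=0.36, beta=0.99, delta=0.025):
    """Steady state of the textbook growth model, with parameters set in
    the calibration spirit: alpha from capital's share of income, beta
    from an average quarterly real interest rate, delta from depreciation
    data.  Steady-state Euler equation:
        1 = beta * (alpha * k**(alpha - 1) + 1 - delta)"""
    required_mpk = 1 / beta - 1 + delta          # marginal product of capital
    return (alpha / required_mpk) ** (1 / (1 - alpha))
```

Nothing in this computation looks at the aggregate time series the model is meant to explain, which is exactly the discipline calibration claims to impose: the parameters come first, and the model’s fit to the business-cycle data is evaluated afterward.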

The spirit of the calibration exercise is, I think, generally correct. I agree that economic models can never hope to capture the intricacies of a real economy and therefore if we don’t see some discrepancy between model and data we should be highly suspicious. Unfortunately, I also don’t see how calibration helps us to solve this problem. Two prominent econometricians, Lars Hansen and James Heckman, take issue with calibration. They argue:

It is only under very special circumstances that a micro parameter such as the intertemporal elasticity of substitution or even a marginal propensity to consume out of income can be “plugged into” a representative consumer model to produce an empirically concordant aggregate model…microeconomic technologies can look quite different from their aggregate counterparts.
Hansen and Heckman (1996) – The Empirical Foundations of Calibration

Although Kydland and Prescott and other proponents of the calibration approach wish to draw parameters from sources outside the model, it is impossible to entirely divorce the values from the model itself. The estimation of elasticity of substitution or labor’s share of income or any other parameter of the model is going to depend in large part on the lens through which these parameters are viewed and each model provides a different lens. To think that we can take an estimate from one environment and plug it into another would appear to be too strong an assumption.

There is also another more fundamental problem with calibration. Even if for some reason we believe we have the “correct” parameterization of our model, how can we judge its success? Thomas Sargent, in a 2005 interview, says that “after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models.” Most people, when confronted with the realization that their model doesn’t match reality, would conclude that their model was wrong. But of course we already knew that. Calibration was the solution, but what does it solve? If we don’t have a good way to test our theories, how can we separate the good from the bad? How can we ever trust the quantitative results of a model in which parameters are likely poorly estimated and which offers no criterion by which it can be rejected? Calibration seems only to claim to be an acceptable response to the idea that “all models are wrong” without actually providing a consistent framework for quantification.

But what is the alternative? Many macroeconomic papers now employ a mix of calibrated parameters and estimated parameters. A common strategy is to use econometric methods to search over all possible parameter values in order to find those that are most likely given the data. But in order to put any significance on the meaning of these parameters, we need to take our model as the truth or as close to the truth. If we estimate the parameters to match the data, we implicitly reject the “all models are wrong” philosophy, which I’m not sure is the right way forward.

And it gets worse. Since estimating a highly nonlinear macroeconomic system is almost always impossible given our current computational constraints, most macroeconomic papers instead linearize the model around a steady state. So even if the model is perfectly specified, we also need the additional assumption that we always stay close enough to this steady state that the linear approximation provides a decent representation of the true model. That assumption often fails. Christopher Carroll shows that a log linearized version of a simple model gives a misleading picture of consumer behavior when compared to the true nonlinear model and that even a second order approximation offers little improvement. This issue is especially pressing in a world where we know a clear nonlinearity is present in the form of the zero lower bound for nominal interest rates and once again we get “unpleasant properties” when we apply linear methods.
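The zero lower bound gives the cleanest illustration of why linearization can mislead. The sketch below is a toy example with made-up parameter values, not Carroll’s consumption model: a Taylor-type interest rate rule has a kink at zero that any linear approximation ignores entirely.

```python
def taylor_rule_true(inflation, r_star=0.5, phi=1.5):
    """Nominal rate under a simple Taylor-type rule, respecting the
    zero lower bound on nominal interest rates."""
    return max(0.0, r_star + phi * inflation)

def taylor_rule_linear(inflation, r_star=0.5, phi=1.5):
    """The linearized rule: identical near steady state, but it ignores
    the ZLB kink entirely."""
    return r_star + phi * inflation

# Near steady state the two agree; in a deep slump they diverge:
# taylor_rule_true(-2.0)   -> 0.0
# taylor_rule_linear(-2.0) -> -2.5
```

A model solved by linearization happily prescribes a nominal rate of −2.5% in the slump, a policy that cannot exist. When the economy spends years at the kink, as it did after 2008, the “stay close to steady state” assumption is doing far more work than the microfoundations are.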

Anyone trying to do quantitative macroeconomics faces a rather unappealing choice. In order to make any statement about the magnitudes of economic shocks or policies, we need a method to discipline the numerical behavior of the model. If our ultimate goal is to explain the real world, it makes sense to want to use data to provide this discipline. And yet we know our model is wrong. We know that the data we see was not generated by anything close to the simple models that enable analytic (or numeric) tractability. We want to take the model seriously, but not too seriously. We want to use the data to discipline our model, but we don’t want to reject a model because it doesn’t fit all features of the data. How do we determine the right balance? I’m not sure there is a good answer.

Social Cooperation: Is the free market argument in need of rebranding?

The standard free market analysis places the individual at its center. As Adam Smith famously noted in 1776, although he acts in his own self-interest, an individual in a free market is “led by an invisible hand to promote an end which was no part of his intention.” And it is generally argued that competition is the driving force behind the benefits of the market process. Entrepreneurs constantly search for new opportunities to make profit, and as a result they find more efficient ways to provide goods to consumers.

Phrased in this way, the free market argument tends to evoke images of Social Darwinism – the best rise to the top, and the weak are left behind. Competition implies a constant struggle between market participants to seek their own benefit at the expense of others. This vision often leads critics to argue that the free market ideal generates an uncaring society. If everybody acts only in their own self interest, there is no room for cooperative behavior that is essential for human interaction. Morality, emotion, personal connections – none of it matters. The free market places “profit over people.”

This critique stems from a wildly incorrect reading of the free market argument.

Go back to Adam Smith. At the center of his work is the idea of the division of labor. A market economy does not thrive because individuals work in isolation; it depends entirely on the relationships between individuals, focusing each person’s talents on an activity where they possess a comparative advantage.

Perhaps the best illustration of the role of cooperation in a market economy is Leonard Read’s famous essay I, Pencil (there is also an excellent video inspired by the essay). Read points out that no individual on their own knows how to make even something as simple as a pencil. The production process requires dozens of firms and hundreds of workers each performing specialized tasks with little knowledge of the final product. There is no planner describing how to make a pencil and yet through the actions of individuals as well as the interactions between individuals, the production process arises spontaneously. An individual acting alone would quickly fail in a market economy.

Ludwig von Mises’s famous treatise Human Action, a comprehensive analysis of the workings of the free market system, was almost given a different title: Social Cooperation. Although Mises dropped this alternate title, the theme that markets depend as much on cooperation between individuals as they do on individual action itself runs throughout the book. Mises notes:

Within the frame of social cooperation there can emerge between members of society feelings of sympathy and friendship and a sense of belonging together. These feelings are the source of man’s most delightful and most sublime experiences. They are the most precious adornment of life; they lift the animal species man to the heights of a really human existence.
Human Action p. 144

A free market, in Mises’s view, doesn’t destroy relationships between individuals, but instead fosters these feelings. Even if we take the idea of “Social Darwinism” seriously, even if we admit that all individuals are driven by the desire to fight for their own survival, that doesn’t lead us to a world of selfishness (in a narrow sense) because “the most adequate means of improving his condition is social cooperation and the division of labor” (Human Action, p. 176).

But the argument that markets and morals are inconsistent faces an even deeper flaw. In a market economy we have a choice. Of course we can choose to think only of ourselves, to put money over family, to value material goods over relationships. But that has absolutely nothing to do with the free market itself. Nothing in the market argument says that I should only care about wealth. If you want to put other priorities first, nobody in a free market has any right to stop you (the catch is that you also don’t have any right to make other people pay you).

A free market doesn’t place any moral judgement on the actions of individuals. It is perfectly consistent with both a savage society where everybody fights for their narrow self interest and ignores others as well as a responsible one where we care for our fellow humans. It is up to each of us as individuals to choose to live our lives morally (but of course, this choice is only an illusion).

What is the alternative? The only clear alternative I can see is to use the state to try to impose your morals on others. By enacting laws that force people to behave morally, maybe we can create a more caring society.

Such a system seems doomed to have the opposite effect. Let’s say you believe that redistribution of wealth is important. Poor people aren’t poor because they didn’t work hard. They just had bad luck. It’s the responsibility of the rich to help these people out. I am sympathetic to this reasoning. However, by forcing people to give up their wealth through taxation, we change the equation from one of responsibility to one of coercion. Rather than giving to the poor out of some sense of moral duty, I give because I don’t want to go to jail. Is attempting to legislate morality in this way more likely to generate a caring society or a resentful one? Respect between classes, or class warfare?

The free market argument should not marry itself to the individual. It is true that all actions must at their core come from individual decisions, but the market only works through the relationships between individuals. Human Action is only half of the story. Social Cooperation is equally important. By obscuring this fact, defenders of markets concede too much. Emphasizing efficiency and the incredible material progress society has made since adopting a market system is fine, but we can’t ignore the moral argument. Morality can’t be imposed. It has to be a choice. And only a free society offers that choice. “Liberty is an opportunity for doing good, but this is so only when it is also an opportunity for doing wrong” (Hayek, The Constitution of Liberty, p. 142).

What’s Wrong With Modern Macro? Part 9: Carrying on the Torch of the Market Socialists

Part 9 in a series of posts on modern macroeconomics. This post lays out a more philosophical critique of common macroeconomic models by drawing a parallel between the standard neoclassical model and the idea of “market socialism” developed in the early 20th century. 


If an economist had access to all of the data in the economy, macroeconomics would be easy. Given an exact knowledge of every individual’s preferences, every resource available in the economy, and every available technology that could ever be invented to turn those resources into goods, the largest problem that would remain for macroeconomics is waiting for a fast enough computer to plug all of this information into. As Hayek pointed out so brilliantly 60 years ago, the reason we need markets at all is precisely because so much of this information is unknown.

Read any macroeconomics paper written in the last 30 years and you’d be lucky to find any acknowledgement of this crucial problem. Take, for example, Kydland and Prescott’s 1982 paper that is widely seen as the beginning of the DSGE framework. The headings in the model section are Technology, Preferences, Information Structure, and Equilibrium. Since then, almost every paper has followed a similar structure. Specify the exact environment that defines a market, and equilibrium prices and allocations simply pop out as a result.

What’s wrong with this method of doing economics? To understand the issue, we need to take a step back to an earlier debate.

Mises’s Critique of Socialism

In 1922, Ludwig von Mises published a book called Socialism that remains one of the most comprehensive and effective critiques of socialism ever written. In it, he developed his famous “calculation” argument. Importantly, Mises’s argument did not depend on morality, as he freely admitted that “all rights derive from violence” (42). Neither did his argument depend on the incentives of social planners. “Even angels,” claims Mises, “could not form a socialist community” (451). Instead, Mises makes a far more powerful argument: socialism is practically impossible.

I can’t explain the argument any better than Mises himself, so here is a quote that makes his main point:

Let us try to imagine the position of a socialist community. There will be hundreds and thousands of establishments in which work is going on. A minority of these will produce goods ready for use. The majority will produce capital goods and semi-manufactures. All these establishments will be closely connected. Each commodity produced will pass through a whole series of such establishments before it is ready for consumption. Yet in the incessant press of all these processes the economic administration will have no real sense of direction. It will have no means of ascertaining whether a given piece of work is really necessary, whether labour and material are not being wasted in completing it. How would it discover which of two processes was the more satisfactory? At best, it could compare the quantity of ultimate products. But only rarely could it compare the expenditure incurred in their production. It would know exactly—or it would imagine it knew—what it wanted to produce. It ought therefore to set about obtaining the desired results with the smallest possible expenditure. But to do this it would have to be able to make calculations. And such calculations must be calculations of value. They could not be merely “technical,” they could not be calculations of the objective use-value of goods and services; this is so obvious that it needs no further demonstration.
Mises (1922) – Socialism p. 120

Prices are what allow calculation in a market economy. In a socialist economy, market prices cannot exist. Socialism therefore is doomed to fail regardless of the intentions or morality of the planners. Even if they want what is best for society, they will never be able to achieve it.

The Response to Mises: Market Socialism

Although Mises’s argument was effective, the socialists weren’t prepared to give up so easily. Instead, a new idea was offered that attempted to provide a method of implementing socialist planning without falling into the problems of calculation outlined by Mises. Led by Oskar Lange and Abba Lerner among others, the seemingly contradictory idea of “market socialism” was born.

Lange’s argument begins by conceding that Mises was right. Calculation is impossible without prices. However, he argues there is no reason why those prices have to come from a market. Economic theory has already demonstrated the process through which efficient markets work. In particular, prices are set equal to marginal cost. In a market, this condition arises naturally from competition. In a market socialist society, it would be imposed by a planner. In Lange’s view, not only would such a rule match a market economy in terms of efficiency, but it could even offer improvements by dealing with problems like monopoly where competition is unable to drive down prices.

We are left with one final problem: the pricing of higher-order capital goods. Lange admits that these goods pose a more difficult problem, but insists that the problem could be solved in the same way the market solves it: trial and error. In other words, just as entrepreneurs adjust prices in response to supply and demand, so could social planners. When demand is greater than supply, increase prices, and when supply is greater than demand, reduce them. At worst, Lange argues, this system is at least as good as a free market and at best it is far better since “the Central Planning Board has a much wider knowledge of what is going on in the whole economic system than any private entrepreneur can ever have” (Lange (1936) – On the Economic Theory of Socialism, Part One, p. 67).
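Lange’s trial-and-error procedure is simple enough to write down directly. The sketch below is a toy version of his adjustment rule under strong assumptions: the planner observes excess demand perfectly, demand and supply are stable known functions for the test market, and the step size is chosen by hand.

```python
def trial_and_error_price(demand, supply, p0=1.0, step=0.1, n_rounds=200):
    """Lange-style adjustment: the planner raises the price whenever
    demand exceeds supply and lowers it whenever supply exceeds demand."""
    p = p0
    for _ in range(n_rounds):
        excess = demand(p) - supply(p)
        p += step * excess
    return p

# Linear toy market: demand = 10 - p, supply = 2p, so the
# market-clearing price is p = 10/3.
p_star = trial_and_error_price(lambda p: 10 - p, lambda p: 2 * p)
```

In this frictionless toy market the rule converges to the market-clearing price, which is precisely why Lange found the idea so compelling. The Mises-Hayek objection in the next section is that a real planner has none of these ingredients: the demand and supply functions are not known, are constantly shifting, and are only ever revealed through the entrepreneurial market process itself.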

The Market Socialist Misunderstanding: Why do Markets Work?

Lange’s argument should be appealing to anybody who takes standard economic theory seriously. A Walrasian general equilibrium gives us specific conditions under which an economy operates most efficiently. We debate whether decentralized competition can lead to these conditions, but why do we even need to bother? We know the solution; why not just jump there directly?

But by taking the model seriously, we lose sight of the process it is trying to represent. For example, in the model the task of a firm is simple. Perfect competition has already driven prices down to their efficient level, and any deviation from this price will immediately fail. As Mises emphasized, however, the market forces that bring about this price come from an entrepreneur's constant search for new profit opportunities. He concedes that “it is quite easy to postulate a socialist economic order under stationary conditions” (Socialism, p. 163). The real world, however, is characterized by constantly shifting equilibrium conditions. The market socialist answer assumes that we know the equations that characterize a market. Mises argues that these equations can only come through the market process.

Hayek also made an essential contribution to the market socialism debate through his work on the role of the price system in coordinating market activity. For Hayek, prices are a tool for gathering pieces of knowledge dispersed among millions of individuals. Entrepreneurs do not need to know the relative scarcities of various goods when they attempt to choose the most efficient production process. They only need to observe the price. In this way, Lange’s pricing strategy cannot hope to replicate the process of a dynamic economy. When prices are no longer set by market participants looking to achieve the best allocation of resources, they lose almost all of their information content.

The final piece missing from Lange’s analysis is perhaps also the most important: profit and loss. In Lange’s depiction of a market economy, he seems to imagine one that is already close to an equilibrium. In particular, he assumes that the production structures in place are already the most efficient. Without this assumption, there is no way to determine the marginal cost and no way to use trial and error pricing effectively. Mises and Hayek instead view an economy as constantly moving towards an equilibrium but never reaching it. Entrepreneurs constantly search for both new products to sell and new methods to produce existing products more efficiently. Good ideas are rewarded with profits and bad ones driven out by losses. Without this mechanism, what incentive is there for anybody to challenge the existing economic structure? Lange never provides an answer.

For a longer discussion on the socialism debate, see a paper I wrote here.

Repeating the Same Mistakes

So what does any of this have to do with modern macroeconomics? Look back to the example I gave at the beginning of this post of Kydland and Prescott’s famous paper. Like the market socialists, their paper begins from the premise that we know all of the relevant information about the economic environment. We know the technologies available, we know people’s preferences, and we assume that the agents in the model know these features as well. The parallels between the arguments of Lange and modern macroeconomics are perhaps clearest in the discussion of the “social planner” in many macroeconomic papers. A common exercise in these studies is to compare the solution of a “planner’s problem” to the outcome of a decentralized competitive market. And in these setups, the planner can always do at least as well as the market and usually better, so the door is opened for policy to improve market outcomes.

But because most macro models outline the economic environment so explicitly, competition in the sense described by Mises and Hayek has once again disappeared. The economy in a neoclassical model finds itself perpetually at its equilibrium state. Prices are found such that the market clears. Profits are eliminated by competition. Everybody’s plans are fulfilled (aside from some exogenous shocks). No thought is given to the process that led to that state.

Is there a problem with ignoring this process? It depends on the question we want to answer. If we believe that economies are quick to adjust to a new environment, then the process of adjusting to a new equilibrium becomes trivial and we need only compare results in different equilibria. If, however, we believe that the economic environment is constantly changing, then the adjustment process becomes the primary economic problem that we want to explain. Modern macro has heavily invested in answering the former question. The latter appears to me to be far more interesting and far more relevant. The current macroeconomic toolbox offers little room to allow us to explore these dynamics.