Some Thoughts on Universal Basic Income

People who oppose redistribution from the rich to the poor generally give two types of arguments against it. Perhaps the more obvious argument comes from a natural rights perspective – the person who created the wealth has a right to do whatever they want with it. However, if you don’t believe in free will (as I don’t), then this reasoning doesn’t make much sense. If you weren’t truly responsible for the circumstances that led you to create the wealth in the first place, why should you get to keep all of it? Shouldn’t some of it go to all of the people that had any influence on getting you to that position?

The stronger argument derives from incentives. Taking from the productive to give to the unproductive makes being unproductive far more attractive, and we could end up with a society where perfectly capable people choose not to work because they expect others to support them. Any attempt to solve poverty needs to deal with this incentive problem, which makes designing anti-poverty measures difficult.

Our current welfare system has some checks in place that attempt to mitigate these incentive problems, but they don't solve them completely. Many of our current welfare programs involve cutoff levels where benefits begin to be reduced and eventually disappear altogether. This type of program imposes an implicit marginal tax on low-income earners. Not only do they pay a higher official tax rate as their income rises, but they also lose some of the benefits they received at a lower income.
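To see how a phase-out works like a tax, here is a minimal sketch with invented program parameters (a hypothetical $10,000 benefit that phases out at fifty cents per extra dollar earned, alongside a 10% statutory rate – not any actual US program):

```python
# Hypothetical benefit schedule for illustration only: a $10,000 benefit that
# phases out at $0.50 per extra dollar earned, plus a 10% statutory income tax.
def net_income(earnings, benefit=10_000, phase_out_rate=0.5, tax_rate=0.10):
    remaining_benefit = max(benefit - phase_out_rate * earnings, 0)
    return earnings * (1 - tax_rate) + remaining_benefit

# Earning an extra $1,000 raises net income by only $400: the worker keeps $900
# after the statutory tax but loses $500 of benefits, for an effective marginal
# tax rate of 60% even though the official rate is only 10%.
print(round(net_income(11_000) - net_income(10_000), 2))  # 400.0
```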

A simple way around this problem is to never phase out those benefits – to give them to everyone. This idea forms the backbone of the Universal Basic Income (UBI). One of the most complete proposals for a UBI comes from Charles Murray (yes, the same Charles Murray who gets kicked off college campuses because of his dangerous right-wing ideas). In his version (laid out in his book In Our Hands), each American over the age of 21 would receive $13,000 per year in benefits unconditionally. Of this money, $3,000 must be spent on health insurance, but the rest comes with no strings attached.

Of course, such a plan would be incredibly expensive. However, as Murray points out, our current system is already expensive. According to his calculations, if we eliminated our entire welfare system (including Social Security and Medicare), we could more than pay for the UBI. Getting rid of these programs would be difficult politically, but Murray offers several reasons why doing so would be desirable for almost everyone. Most notably, he estimates that poverty would be all but eliminated under his plan, versus the roughly 15% poverty rate that persists under the current system.
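To get a rough sense of the magnitudes involved, here is a back-of-envelope calculation. The adult population figure is my own round-number assumption for illustration, not a figure taken from Murray's book:

```python
# Rough gross cost of a $13,000 universal grant. The count of Americans over 21
# is an assumed round number; Murray's own accounting nets out the programs the
# UBI would replace, which this sketch does not attempt.
adults_over_21 = 230e6        # assumption
grant_per_adult = 13_000      # Murray's proposed annual grant
gross_cost = adults_over_21 * grant_per_adult
print(f"${gross_cost / 1e12:.1f} trillion per year")  # ~$3.0 trillion
```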

Murray’s justification for some level of redistribution is similar to my own:

Inequality of wealth grounded in unequal abilities is different. For most of us, the luck of the draw cuts several ways: one person is not handsome, but is smart; another is not as smart, but is industrious; and still another is not as industrious, but is charming. This kind of inequality of human capital is enriching, making life more interesting for everyone. But some portion of the population gets the short end of the stick on several dimensions. As the number of dimensions grows, so does the punishment for being unlucky. When a society tries to redistribute the goods of life to compensate the most unlucky, its heart is in the right place, however badly the thing has worked out in practice
Charles Murray (2016) – In Our Hands

If we accept that some redistribution is desirable, a UBI seems like a more efficient way to carry it out than our current welfare setup. One common argument against the UBI is that it doesn't make sense to waste resources on the rich. Bryan Caplan has argued along these lines, suggesting that phasing benefits out gradually would avoid the implicit marginal tax problem without needing to give benefits to everyone. And that makes sense: if our goal is to eliminate poverty, why not focus our efforts there?

But I don’t think that argument really works when you consider that a UBI is inextricably linked to the tax system. A UBI doesn’t look so universal after you consider that the rich are going to be paying for almost all of it. Everyone might get a check for $13,000, but top income earners pay far more than that in taxes. Their net benefit from government programs would still be strongly negative even after receiving the UBI. Depending on your perspective on redistribution, this feature could actually be a negative, but given that redistribution is going to happen anyway, the UBI seems like a more efficient way of doing it.
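To illustrate, here is a stylized example with an assumed 30% flat tax (an invented rate, not part of any actual UBI proposal) showing how the net transfer looks once taxes are counted:

```python
# Everyone receives the same $13,000 grant, but the net transfer (grant minus
# taxes paid) turns sharply negative at high incomes. The 30% flat tax is an
# assumption made purely for illustration.
def net_transfer(income, grant=13_000, tax_rate=0.30):
    return grant - round(tax_rate * income)

for income in (0, 40_000, 100_000, 500_000):
    print(income, net_transfer(income))
# 0 13000
# 40000 1000
# 100000 -17000
# 500000 -137000
```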

It’s obviously not without fault, but I do think a UBI would be an improvement over our current system and I definitely recommend reading Murray’s book (it’s not that long) to anyone who wants to help the poor but believes we can do better than we do now.

 

Equality, Value, and Merit

A common argument against absolute equality is that individuals should be paid based on merit. Should somebody who works 80 hours a week earn the same amount as somebody who sits on their couch and watches TV all week? Even the most ardent supporter of redistribution would have a hard time answering yes. One of the alleged benefits of a free market economy is that it does a pretty good job allocating resources to those who work for them. Reading Hayek, however, I find it interesting that his defense of unequal outcomes explicitly denounces the idea of meritocracy. Value, not merit, is what should determine a person’s reward.

Some clarifications are in order. “Value” and “merit” are not well-defined concepts, so let’s take an example to see the distinction. Imagine two students studying for a math exam. One studies eight hours per day all week, but math has never been his strength and he ends up with a hard-earned B+ on the exam. For the other student, math has always come easily. He takes a quick look at his notes for a couple of hours the night before and breezes through with an easy A. We might say that the first student deserves a higher grade than the second. If we graded based on merit, we would give the higher grade to the student who worked the hardest. Of course, this grading system makes no sense when we consider that a grade is meant to represent a student’s knowledge of the material. Even though he didn’t work as hard, the second student knows math better and therefore deserves a higher grade.

The same arguments can be applied to an economic context. If two entrepreneurs each develop a product, a meritocratic society might suggest paying each based on how much work they put into its creation. However, this criterion ignores the fact that consumers might place different values on the two products. If we want to maximize the benefits to society, we don’t actually care whether a product was created by a team of people after two years of strenuous research and development or by a guy coming up with ideas in the shower. All we care about is the value of the two products to the consumer. In Hayek’s words, “it is neither desirable nor practicable that material rewards should be made generally to correspond to what men recognize as merit…we do not wish people to earn a maximum of merit but to achieve a maximum of usefulness at a minimum of pain and sacrifice and therefore a minimum of merit” (The Constitution of Liberty, 157, 160).

It might seem unfair that talented people tend to earn more than the less talented. The handsome actor already gets good looks and fame. How is it fair that he also gets a big paycheck? And it’s not fair. But that doesn’t mean it’s not desirable. Because without that paycheck, without that incentive, maybe he wouldn’t have become an actor at all, and the opportunity to create a product that millions would have enjoyed is gone. It’s not fair that Tom Brady gets paid so much to play a game, but the only reason he does is because so many love watching him play. The alternative might not be that he gets paid less and still plays, but that he doesn’t play at all because his incentives to work hard and become a great player are diminished.

Another problem with a meritocratic society is that merit is hard to measure. Going back to the math example, I said that one student studied more than the other. But maybe his studying was not as efficient. Maybe he was actually on Facebook half the time, or didn’t focus on the right problems. And there are other factors. Maybe the second student paid better attention in class or had worked harder in previous classes and therefore didn’t need to work as hard now. Even if we wanted to reward the students’ merit, doing so would be a challenge. Similarly, looking at two products tells us little about how much work and how much effort went into the creation. What we can see is how much people like each product (by looking at how much they pay for it).

One of the greatest benefits of a market economy is that it pushes people towards the tasks that other people actually want them to do. In Hayek’s words, “If in their pursuit of uncertain goals people are to use their own knowledge and capacities, they must be guided, not by what other people think they ought to do, but by the value others attach to the result at which they aim” (The Constitution of Liberty, 159). By rewarding value over merit we ensure that people can only earn money by offering something that others desire. Everybody acts in their own self-interest, but the market usually ensures that that interest also aligns with the interests of others. Potential earnings act as a signal that shows what society values and attempts to regulate the market will almost certainly mess with these signals.

With this perspective, it is difficult to find a reason to care about others’ wealth. Steve Jobs, Bill Gates, and Mark Zuckerberg only got rich by offering a service that other people valued. Their contribution to society is likely far greater than any monetary compensation they received. Encouraging others to continue in their footsteps, to innovate and invent, is more important to the welfare of society as a whole than any attempts to redistribute their existing wealth. In fact, attempts to accomplish the latter discourage the former. I disagree with Ayn Rand on many points, but I think the overall theme in Atlas Shrugged is about right. When society feels like it can take anything it wants from the producers, they might decide that it’s simply not worth it any more, leaving no wealth left to redistribute at all.

 

What’s Wrong With Modern Macro? Part 14 A Pretense of Knowledge in Macroeconomics

Part 14 in a series of posts on modern macroeconomics. This post concludes my criticisms of the current state of macroeconomic research by tying all of my previous posts to Hayek’s warning about the pretense of knowledge in economics.


Russ Roberts, host of the excellent EconTalk podcast, likes to say that you know macroeconomists have a sense of humor because they use decimal points. His implication is that for an economist to believe that they can predict growth or inflation or any other variable down to a tenth of a percentage point is absolutely ridiculous. And yet economists do it all the time. Whether it’s the CBO analysis of a policy change or the counterfactuals of an academic macroeconomics paper, quantitative analysis has become an almost necessary component of economic research. There’s nothing inherently wrong with this philosophy. Making accurate quantitative predictions would be an incredibly useful role for economics.

Except for one little problem: we’re absolutely terrible at it. Prediction in economics is really hard, and we are nowhere near the level of understanding that would allow our models to deliver accurate quantitative results. Everybody in the profession seems to realize this, which means hardly anybody believes many of the results their own theories produce.

Let me give two examples. The first, which I have already mentioned in previous posts, is Lucas’s result that the cost of business cycles in theoretical models is tiny – a welfare loss of around one tenth of one percent of consumption compared to an economy with no fluctuations. Another is the well-known puzzle in international economics that trade models have great difficulty producing large gains from trade. A seminal paper by Arkolakis, Costinot, and Rodriguez-Clare evaluates how our understanding of the gains from trade has improved with the theoretical advances of the last 10 years. Their conclusion: “so far, not much.”
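Lucas’s number falls out of a very simple calculation. Here is a minimal sketch of it, using the standard approximation that with CRRA utility the welfare cost of consumption fluctuations is about one half times risk aversion times the variance of consumption around trend; the parameter values below are illustrative assumptions rather than Lucas’s exact inputs:

```python
# Approximate welfare cost of business cycles in the Lucas spirit:
# cost ≈ 0.5 * gamma * sigma**2, expressed as a share of lifetime consumption,
# where gamma is relative risk aversion and sigma is the standard deviation of
# (log) consumption around its trend. Both values below are assumptions.
def welfare_cost(gamma, sigma):
    return 0.5 * gamma * sigma ** 2

print(welfare_cost(gamma=1.0, sigma=0.03))  # ≈ 0.00045, roughly 0.05% of consumption
print(welfare_cost(gamma=5.0, sigma=0.03))  # ≈ 0.00225, still well under 1%
```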

How large are the costs of business cycles? Essentially zero. How big are the gains from trade? Too small to matter much. That’s what our theories tell us. Anybody who has lived in this world the last 30 years should be able to immediately take away some useful information from these results: the theories stink. We just lived through the devastation of a large recession. We’ve seen globalization lift millions out of poverty. Nobody believes for a second that business cycles and trade don’t matter. In fact, I’m not sure anybody takes the quantitative results of any macro paper seriously. For some reason we keep writing them anyway.

In the speech that provided the inspiration for the title of this blog, Hayek warned economists of the dangers of the pretense of knowledge. Unlike the physical sciences, where controlled laboratory experiments are possible and therefore quantitative predictions might be feasible, economics is a complex science. He argues,

The difficulties which we encounter in [complex sciences] are not, as one might at first suspect, difficulties about formulating theories for the explanation of the observed events – although they cause also special difficulties about testing proposed explanations and therefore about eliminating bad theories. They are due to the chief problem which arises when we apply our theories to any particular situation in the real world. A theory of essentially complex phenomena must refer to a large number of particular facts; and to derive a prediction from it, or to test it, we have to ascertain all these particular facts. Once we succeeded in this there should be no particular difficulty about deriving testable predictions – with the help of modern computers it should be easy enough to insert these data into the appropriate blanks of the theoretical formulae and to derive a prediction. The real difficulty, to the solution of which science has little to contribute, and which is sometimes indeed insoluble, consists in the ascertainment of the particular facts
Hayek (1989) – The Pretence of Knowledge

In other words, it’s not that developing models to explain economic phenomena is especially challenging, but rather that there is no way to collect sufficient information to apply any theory quantitatively. Preferences, expectations, technology. Any good macroeconomic theory would need to include each of these features, but each is almost impossible to measure.

Instead of admitting ignorance, we make assumptions. Preferences are all the same. Expectations are all rational. Production technologies take only a few inputs and outputs. Fluctuations are driven by a single abstract technology shock. Everyone recognizes that any realistic representation of these features would require knowledge far beyond what is available to any single mind. Hayek saw that this difficulty placed clear restrictions on what economists could do. We can admit that every model needs simplification while also remembering that those simplifications constrain the ability of the model to connect to reality. The default position of modern macroeconomics instead seems to be to pretend the constraints don’t exist.

Many economists would probably agree with many of the points I have made in this series, but it seems that most believe the issues have already been solved. There are a lot of models out there and some of them do attempt to deal directly with some of the problems I have identified. There are models that try to reduce the importance of TFP as a driver of business cycles. There are models that don’t use the HP-Filter, models that have heterogeneous agents, models that introduce financial frictions and other realistic features absent from the baseline models. For any flaw in one model, there is almost certainly another model that attempts to solve it. But in solving that single problem, it likely introduces ten more. Other papers will deal with those problems, but maybe they forget about the original one. For each problem that arises, we just introduce a new model. And then we treat those issues as solved, even though they are solved by a set of models that potentially produce conflicting results, with no real way to tell which is more useful.

Almost every criticism I have written about in the last 13 posts of this series can be traced back to the same source. Macroeconomists try to do too much. They haven’t heeded Hayek’s plea for humility. Despite incredible simplifying assumptions, they take their models to real data and attempt to make predictions about a world that bears only a superficial resemblance to the model used to represent it. Trying to answer the big questions about macroeconomics with such a limited toolset is like trying to build a skyscraper with only a hammer.

Interesting Paper on Inequality and Fairness

As a followup to my recent post on inequality, I wanted to highlight some recent research by Christina Starmans, Mark Sheskin, and Paul Bloom on fairness and inequality. Based on a survey of lab experiments and evidence from the real world, the paper argues that people don’t actually care about unequal outcomes as long as they are perceived as fair.

They highlight several studies that show that in laboratory settings people (even children) are likely to distribute resources equally. However, in many of these settings, equality and fairness are indistinguishable. Since none of the participants did anything to deserve a larger portion, participants could simply be attempting to create a fair distribution rather than an equal one. And experiments that explicitly distinguish between fairness and equality do find that people care more about the former. For example, people were not unhappy with allocations that were determined randomly even if the outcome ended up being unequal as long as everybody began with an equal opportunity. Children who were asked to allocate erasers as a reward for cleaning their rooms were more likely to give the erasers to those who did a good job.

In reality people also seem to prefer an unequal distribution of income as long as it is perceived to be fair. In surveys, while people’s perception of the true income distribution is often highly skewed, their ideal distribution is not one of perfect equality. Of course, looking at these surveys does not necessarily tell us much about what the “best” income distribution would be, but rather the one people (think they) prefer. As I argued in my last post, I think too much weight has been placed on income or wealth inequality when really all that matters are differences in people’s happiness or utility. The evidence presented here does not go that far, but it does suggest that people realize that different behavior should lead to different rewards in some cases.

One reason that I think the debate has focused mostly on income or wealth inequality rather than on fairness or another measure of inequity is due to issues with measurement. Everybody has different ideas about what is fair so it’s easier to frame the question in terms of something that can be easily reported numerically. We may want to reconsider our acceptance of those statistics as a meaningful representation of a social problem. The whole paper is well worth reading and it opens up some interesting questions about human behavior. I will have at least one more post related to inequality coming in the next week or so.

What Kind of Inequality Matters?

Thomas Piketty’s book, Capital in the Twenty-First Century, a thorough analysis of the causes and effects of inequality, recently became an international best-seller. It’s not often that thousand page economic treatises attract popular attention, so clearly there’s something important to discuss here. Looking at some of the data on inequality, it’s not hard to see why many people are concerned. Here’s a chart showing the share of income held by the top 10% in the United States since 1910:

Notice where the two peaks occur – 1929 and 2009. I seem to recall something important happening in each of those years. Whether inequality was a symptom or a cause of the broader problems that led to the Great Depression and the Great Recession is an interesting question and definitely deserves scrutiny. For the purposes of this post, however, I want to address a simpler topic. Should we care about inequality on its own? And, more specifically, what kind of inequality should we care about?

For the first question, let’s do a simple thought experiment. You can choose to live in one of two societies. In Society A, everybody makes $50,000 per year no matter what their profession is. LeBron James and a janitor get paid the same amount. In Society B, average income is the same $50,000 per year, but it is now dispersed, so that some people earn less than average and some earn far more. Now assume that you are guaranteed to begin at average income (to avoid questions of risk aversion). Which society would you rather live in?

The answer to the hypothetical depends in part on whether you care about absolute or relative income. Does it matter if you are rich, or does it only matter if you are richer than others? In Society A everybody is on the same level, which might seem to be an appealing feature.

Except as soon as we start to think a bit harder, we realize that people in Society A aren’t equal at all. At least to some extent, differences in income do come from differences in effort. Some people work harder than others. Should people get paid the same regardless of effort?

Of course, this reasoning attacks a bit of a straw man. Hardly anybody would argue for full equality of income. But that doesn’t mean that there aren’t some situations where reducing income inequality could be helpful. I don’t believe in free will, which means that I think that where you are today is determined by circumstances you had no control over. But even with free will, it’s impossible to deny that some people are luckier than others. Some people are born into families with higher incomes or better connections. Some people are just smarter, or more talented. Two people can put in the same amount of effort and come out with wildly different outcomes. Isn’t there some justification for correcting these kinds of inequalities?

Now we need to bring in the second question: what kind of inequality matters? To this point, I have focused entirely on income inequality, but money is only as good as what you can buy with it. Somebody who earns $1,000,000 per year but saves $950,000 is no better off than someone who earns $50,000 per year (until they start spending those savings, of course). We also need to consider a dynamic component to inequality. The chart above shows only a snapshot of inequality at one point in time, but there is large variation in earnings over a person’s lifetime. So a better measure of the kind of inequality that actually matters would be total lifetime consumption inequality (due to measurement difficulties, the question of whether consumption and income inequality move together is still under debate – see a nice survey here).
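A tiny example of the dynamic point: two workers with identical lifetime earnings can look very unequal in any single-year snapshot simply because they are at different stages of their careers (the earnings numbers are made up for illustration):

```python
# Two workers on the same earnings path, observed at opposite career stages.
young_worker = [30_000, 90_000]   # assumed earnings early and late in a career
old_worker   = [90_000, 30_000]   # the same path, just further along today

print(abs(young_worker[0] - old_worker[0]))   # 60000: this year's snapshot looks very unequal
print(sum(young_worker), sum(old_worker))     # 120000 120000: lifetime earnings are identical
```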

But we’re still not quite there. Why do we consume anything? Presumably because it makes us happy, or, in the words of an economist, because it gives us utility. Simply giving people more stuff might not actually help them at all unless it’s stuff they actually want. So shouldn’t we actually care about total lifetime utility? And as soon as we jump into the world of utility, the problem gets much more difficult.

Consider an extremely wealthy person. Incredibly talented and smart, he excelled in school, founded a business, and became one of the most successful CEOs in the world. He has a beautiful house, ten expensive cars, flat-screen TVs, season tickets to the Patriots. He can buy anything he could ever want. Except he works all the time, hates his job, and has no time for his family or friends. Despite his money, despite his consumption, he is miserable.

Another individual earns far less. She isn’t poor, but she earns right around median income. She doesn’t have a luxurious life, but she can afford the basics. More importantly, she’s happy. She has a loving family, great friends, a job she likes. Would she be happier with more income? Probably. But she doesn’t need luxuries to live a good life.

How do we make this society more equal? Simply looking at income would suggest a transfer from the wealthy man to the average income woman. This transfer would of course reduce income inequality, but it would increase utility inequality. The woman is already pretty happy and the man is not. Taking money from him and giving it to her would only increase the happiness gap. Is this outcome desirable? I don’t think so.

Then maybe we should try to minimize utility inequality. But how? Taking money from the woman would probably reduce her utility, and eventually it would be as low as the man’s, but giving it to the man would do little to increase his utility unless he takes comfort in the fact that others are as miserable as he is. The woman’s happiness comes from pieces of her life that can’t be transferred to others. Despite being born with all the skills necessary to succeed, the man would likely view the woman as the more fortunate one.

In general, trying to equalize utility has some strange implications. Let me give a few more examples.

Two people work in the exact same job and get paid the same wage. Seems perfectly fair. But what if one of them enjoys working and the other hates it? In dollars per hour, they are equal. In utility per hour, one receives more than the other. Reducing utility inequality would require that people who enjoy their jobs be paid less for the same work.

Some people prefer living in cities while others would prefer to live in smaller towns. Houses in cities are usually much more expensive, which means to achieve the same utility, a city lover will have to pay far more. In this case, income equality greatly benefits people who hate cities. Utility equality would suggest transfers from people who love rural areas to those who love cities.

Consumption equality could also generate large utility inequality. If one person places a lot of meaning on material goods while another values other aspects of their life, they would need different levels of consumption in order to achieve the same utility. Should we give more to the materialist than the ascetic simply because giving to the latter wouldn’t help them anyway?

And even these examples ignore the largest problem with trying to achieve equality in utility – it’s difficult to measure and impossible to compare across individuals. I have trouble defining my own preferences and determining what makes me the happiest; I certainly don’t trust others to do that for me.

So utility equality is probably not an option even if it were desirable. But income equality almost certainly worsens the problem of utility inequality. The people who make a lot of money are much more likely to also be people who place a high value on money. Those who earn less are more likely to enjoy a simpler life. In fact, there is little evidence that the rich are any happier than the rest of us. Taking their money makes them even worse off while helping those who are already pretty happy despite their relatively low income. The happy get happier while the miserable get more miserable.

Notice that I have deliberately avoided using examples with truly poor people. I can certainly see an argument for redistributing income to the poorest. Nobody should have to live at subsistence levels if they are willing to work. But being concerned about poverty and being concerned about inequality are not the same. It is possible for a society to have zero poor people and still be incredibly unequal and also possible to be almost perfectly equal with everybody poor (as it was for most of the history of human existence).

Have we gotten any closer to answering the original question? What kind of inequality should we care about? If you’ve made it this far, it should be clear that there isn’t an easy answer. We often use the term “less fortunate” as a euphemism for the poor, and that usage refers almost exclusively to a lack of money. We view income as if it came from a lottery and then aim to use redistribution to correct for discrepancies. Why is that? Aren’t people who can be happy despite low income really the most fortunate? Isn’t money just one of many factors that matter for a person’s happiness? And aren’t many of these other factors difficult to measure and even more difficult to redistribute?

If we answer yes to the above questions, reducing the kind of inequality we care about becomes a much harder task. Can we really correct the deeper inequalities that arise due to people’s preferences and talents – some of which will lead to higher incomes and some not? Or should we accept that inequality is an essential part of society, accept that treating everyone equally necessarily produces inequality in outcomes, that differences in wealth don’t necessarily lead to differences in happiness, and that correcting differences in happiness is almost impossible?

What’s Wrong With Modern Macro? Part 13 No Other Game in Town

Part 13 in a series of posts on modern macroeconomics. Previous posts in this series have pointed out many problems with DSGE models. This post aims to show that these problems have either been (in my opinion wrongly) dismissed or ignored by most of the profession. 


“If you have an interesting and coherent story to tell, you can tell it in a DSGE model. If you cannot, your story is incoherent.”

The above quote comes from testimony given by prominent Minnesota macroeconomist V.V. Chari to the House of Representatives in 2010 as it attempted to determine how economists could have missed an event as large as the Great Recession. Chari argues that although macroeconomics does have room to improve, it has made substantial progress in the last 30 years and there is nothing fundamentally wrong with its current path.

Of course, not everybody has been so kind to macroeconomics. Even before the crisis, prominent macroeconomists had begun voicing some concerns about the current state of macroeconomic research. Here’s Robert Solow in 2003:

The original impulse to look for better or more explicit micro foundations was probably reasonable. It overlooked the fact that macroeconomics as practiced by Keynes and Pigou was full of informal microfoundations. (I mention Pigou to disabuse everyone of the notion that this is some specifically Keynesian thing.) Generalizations about aggregative consumption-saving patterns, investment patterns, money-holding patterns were always rationalized by plausible statements about individual–and, to some extent, market–behavior. But some formalization of the connection was a good idea. What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages.

How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up?
Solow (2003) – Dumb and Dumber in Macroeconomics

Olivier Blanchard in 2008:

There is, however, such a thing as too much convergence. To caricature, but only slightly: A macroeconomic article today often follows strict, haiku-like, rules: It starts from a general equilibrium structure, in which individuals maximize the expected present value of utility, firms maximize their value, and markets clear. Then, it introduces a twist, be it an imperfection or the closing of a particular set of markets, and works out the general equilibrium implications. It then performs a numerical simulation, based on calibration, showing that the model performs well. It ends with a welfare assessment.

Such articles can be great, and the best ones indeed are. But, more often than not, they suffer from some of the flaws I just discussed in the context of DSGEs: Introduction of an additional ingredient in a benchmark model already loaded with questionable assumptions. And little or no independent validation for the added ingredient.
Olivier Blanchard (2008) – The State of Macro

And more recently, the vitriolic critique of Paul Romer:

Would you want your child to be treated by a doctor who is more committed to his friend the anti-vaxer and his other friend the homeopath than to medical science? If not, why should you expect that people who want answers will keep paying attention to economists after they learn that we are more committed to friends than facts.
Romer (2016) – The Trouble with Macroeconomics

These aren’t empty critiques by no-name graduate student bloggers. Robert Solow and Paul Romer are two of the biggest names in growth theory in the last 50 years. Olivier Blanchard was the chief economist at the IMF. One would expect that these criticisms would cause the profession to at least strongly consider a re-evaluation of its methods. Looking at the recent trends in the literature as well as my own experience doing macroeconomic research, it hasn’t. Not even close. Instead, it seems to have doubled down on the DSGE paradigm, falling much closer to Chari’s point of view that “there is no other game in town.”

But it’s even worse than that. Taken at its broadest, sticking to DSGE models is not too restrictive. Dynamic simply means that the models have a forward looking component, which is obviously an important feature to include in a macroeconomic model. Stochastic means there should be some randomness, which again is probably a useful feature (although I do think deterministic models can be helpful as well – more on this later). General equilibrium is a little harder to swallow, but it still provides a good measure of flexibility.

Even within the DSGE framework, however, straying too far from the accepted doctrines of macroeconomics is out of the question. Want to write a paper that deviates from rational expectations? You better have a really good reason. Never mind that survey and experimental evidence shows large differences in how people form expectations, or the questionable assumption that everybody in the model knows the model and takes it as truth, or that the founder of rational expectations, John Muth, later said:

It is a little surprising that serious alternatives to rational expectations have never been proposed. My original paper was largely a reaction against very naive expectations hypotheses juxtaposed with highly rational decision-making behavior and seems to have been rather widely misinterpreted.
Quoted in Hoover (2013) – Rational Expectations: Retrospect and Prospect

Using an incredibly strong assumption like rational expectations is accepted without question. Any deviation requires explanation. Why? As far as I can tell, it’s just Kevin Malone economics: that’s the way it has always been done and nobody has any incentive to change it. And so researchers all get funneled into making tiny changes to existing frameworks – anything truly new is actively discouraged.

Now, of course, there are reasons to be wary of change. It would be a waste to completely ignore the path that brought us to this point and, despite their flaws, I’m sure there have been some important insights generated by the DSGE research program (although to be honest I’m having trouble thinking of any). Maybe it really is the best we can do. But wouldn’t it at least be worth devoting some time to alternatives? I would estimate that 99.99% of theoretical business cycle papers published in top 5 journals are based around DSGE models. Chari wasn’t exaggerating when he said there is no other game in town.

Wouldn’t it be nice if there were? Is there any reason other than tradition that every single macroeconomics paper needs to follow the exact same structure, to draw from the same restrictive set of assumptions? If even 10% of the time and resources devoted to tweaking existing models were instead spent on coming up with entirely new ways of doing macroeconomic research, I have no doubt that a viable alternative could be found. And there already are some (again, more on this later), but they exist only on the fringes of the profession. I think it’s about time for that to change.

What’s Wrong With Modern Macro? Part 12 Models and Theories

Part 12 in a series of posts on modern macroeconomics. This post draws inspiration from Axel Leijonhufvud’s distinction between models and theories in order to argue that the current procedure of performing macroeconomic research is flawed.


In a short 1997 article, Axel Leijonhufvud gives an interesting argument that economists too often fail to differentiate between models and theories. In his words,

For many years now, ‘model’ and ‘theory’ have been widely used as interchangeable terms in the profession. I want to plead the case that there is a useful distinction to be made. I propose to conceive of economic ‘theories’ as sets of beliefs about the economy and how it functions. They refer to the ‘real world’ – that curious expression that economists use when it occurs to them that there is one. ‘Models’ are formal but partial representations of theories. A model never encompasses the entire theory to which it refers.
Leijonhufvud (1997) – Models and Theories

The whole article is worth reading and it highlights a major problem with macroeconomic research: we don’t have a good way to determine when a theory is wrong. Modern economic research places a heavy emphasis on models. Focusing on mathematical models ensures that assumptions are laid out clearly and that the implications of those assumptions are internally consistent. However, this discipline also constrains models to be relatively simple. I might have some grand theory of how the economy works, but even if my theory is entirely accurate, there is little chance that it can be converted into a tractable set of mathematical equations.

The result of this discrepancy between model and theory makes it very difficult to judge the success or failure of a theory based on the success or failure of a model. As discussed in an earlier post, we begin from the premise that all models are wrong and therefore it shouldn’t be at all surprising when a model fails to match some feature of reality. When a model is disproven along some margin, there are two paths a researcher can take: reject the theory or modify the assumptions of the model to better fit the theory. Again quoting Leijonhufvud,

When a model fails, the conclusion nearest to hand is that some simplifying assumption or choice of empirical proxy-variable is to blame, not that one’s beliefs about the way the world works are wrong. So one looks around for a modified model that will be a better vehicle for the same theory.
Leijonhufvud (1997) – Models and Theories

A good example of this phenomenon comes from looking at the history of Keynesian theory over the last 50 years. When the Lucas critique essentially dismantled the Keynesian research program in the 1980s, economists who believed in Keynesian insights could have rejected that theory of the world. Instead, they adopted neoclassical models but tweaked them in such a way that produced results that were still consistent with Keynesian theory. New Keynesian models today include rational expectations and microfoundations. Many hours have been spent converting Keynes’s theory into fancy mathematical language, adding (sometimes questionable) frictions into the neoclassical framework to coax it into giving Keynesian results. But how does that help us understand the world? Have any of these advances increased our understanding of economics beyond what Keynes already knew 80 years ago? I have my doubts.

Macroeconomic research today works on the assumption that any good theory can be written as a mathematical model. Within that class of theories, the gold standard is the set of models tractable enough to give analytical solutions. Restricting the set of acceptable models in this way probably helps throw out a lot of bad theories. What I worry is that it also precludes many good ones. By imposing such tight constraints on what a good macroeconomic theory is supposed to look like, we also severely constrain our vision of a theoretical economy. Almost every macroeconomic paper draws from a tiny selection of functional forms for preferences and production because any deviation quickly adds too much complexity and leads to systems of equations that are impossible to solve.

But besides restricting the set of theories, speaking in models also often makes it difficult to understand the broader logic that drove the creation of the model. As Leijonhufvud puts it, “formalism in economics makes it possible to know precisely what someone is saying. But it often leaves us in doubt about what exactly he or she is talking about.” Macroeconomics has become very good at laying out a set of assumptions and deriving the implications of those assumptions. Where it often fails is in describing how we can compare that simplified world to the complex one we live in. What is the theory that underlies the assumptions that create an economic model? Where do the model and the theory differ and how might that affect the results? These questions seem to me to be incredibly important for conducting sound economic research. They are almost never asked.

What’s Wrong With Modern Macro? Part 11 Building on a Broken Foundation

Part 11 in a series of posts on modern macroeconomics. The discussion to this point has focused on the early contributions to DSGE modeling by Kydland and Prescott. Obviously, in the last 30 years, economic research has advanced beyond their work. In this post I will argue that it has advanced in the wrong direction.


In Part 3 of this series I outlined the Real Business Cycle (RBC) model that began macroeconomics on its current path. The rest of the posts in this series have offered a variety of reasons why the RBC framework fails to provide an adequate model of the economy. An easy response to this criticism is that it was never intended to be a final answer to macroeconomic modeling. Even if Kydland and Prescott’s model misses important features of a real economy, doesn’t it still provide a foundation to build upon? Since the 1980s, the primary agenda of macroeconomic research has been to build on this base, to relax some of the more unrealistic assumptions that drove the original RBC model and to add new assumptions to better match economic patterns seen in the data.

New Keynesian Models

Part of the appeal of the RBC model was its simplicity. Driving business cycles was a single shock: exogenous changes in TFP attributed to changes in technology. However, the real world is anything but simple. In many ways, the advances in the methods used by macroeconomists in the 1980s came only at the expense of a rich understanding of the complex interactions of a market economy. Compared to Keynes’s analysis 50 years earlier, the DSGE models of the 1980s seem a bit limited.

What Keynes lacked was clarity. Eighty years after the publication of the General Theory, economists still debate “what Keynes really meant.” For all of their faults, DSGE models at the very least guarantee clear exposition (assuming you understand the math) and internal consistency. New Keynesian (NK) models attempt to combine these advantages with some of the insights of Keynes.

The simplest distinction between an RBC model and an NK model is the assumptions made about the flexibility of prices. In early DSGE models, prices (including wages) were assumed to be fully flexible, constantly adjusting in order to ensure that all markets cleared. The problem (or benefit depending on your point of view) with perfectly flexible prices is that it makes it difficult for policy to have any impact. Combining rational expectations with flexible prices means monetary shocks cannot have any real effects. Prices and wages simply adjust immediately to return to the same allocation of real output.

To reintroduce a role for policy, New Keynesians instead assume that prices are sticky. Now when a monetary shock hits the economy, firms are unable to change their prices, so they respond to the higher demand by increasing real output. As research in the New Keynesian program developed, more and more was added to the simple RBC base. The first RBC model was entirely frictionless and included only one shock (to TFP). It could be summarized in just a few equations. A more recent macro model, the widely cited Smets and Wouters (2007), adds frictions like investment adjustment costs, variable capital utilization, habit formation, and inflation indexing, in addition to seven structural shocks, leading to dozens of equilibrium equations.

Building on a Broken Foundation

New Keynesian economists have their heart in the right place. It’s absolutely true that the RBC model produces results at odds with much of reality. Attempting to move the model closer to the data is certainly an admirable goal. But, as I have argued in the first 10 posts of this series, the RBC model fails not so much because it abstracts from the features we see in real economies, but because the assumptions it makes are not well suited to explain economic interactions in the first place.

New Keynesian models do almost nothing to address the issues I have raised. They still rely on aggregating firms and consumers into representative entities that offer only the illusion of microfoundations rather than truly deriving aggregate quantities from realistic human behavior. They still assume rational expectations, meaning every agent believes that the New Keynesian model holds and knows that every other agent also believes it holds when they make their decisions. And they still make quantitative policy proposals by assuming a planner that knows the entire economic environment.

One potential improvement of NK models over their RBC counterparts is less reliance on (potentially misspecified) technology shocks as the primary driver of business cycle fluctuations. The “improvement,” however, comes only by introducing other equally suspect shocks. A paper by V.V. Chari, Patrick Kehoe, and Ellen McGrattan argues that four of the “structural shocks” in the Smets-Wouters setup do not have a clear interpretation. In other words, although the shocks can help the model match the data, the reason why they are able to do so is not entirely clear. For example, one of the shocks in the model is a wage markup over the marginal productivity of a worker. The authors argue that this could be caused either by increased bargaining power (through unions, etc.), which would call for government intervention to break up unions, or by an increased preference for leisure, which is an efficient outcome and requires no policy. The Smets-Wouters model can say nothing about which interpretation is more accurate and is therefore unhelpful in prescribing policy to improve the economy.

Questionable Microfoundations

I mentioned above that the main goal of the New Keynesian model was to revisit the features Keynes talked about decades ago using modern methods. In today’s macro landscape, microfoundations have become essential. Keynes spoke in terms of broad aggregates, but without understanding how those aggregates are derived from optimizing behavior at a micro level, they run afoul of the Lucas critique. But although the NK model attempts to put in more realistic microfoundations, many of its assumptions seem at odds with the ways in which individuals actually act.

Take the assumption of sticky prices. It’s probably a valid assumption to include in an economic model. With no frictions, an optimizing firm would want to change its price every time new economic data became available. Obviously that doesn’t happen, as many goods stay at a constant price for months or years. So of course, if we want to match the data, we are going to need some reason for firms to hold prices constant. The most common way to achieve this result is called “Calvo pricing” in honor of a paper by Guillermo Calvo. And how do we ensure that some prices in the economy remain sticky? We simply assign each firm some probability that it will be unable to change its price. In other words, we assume exactly the result we want to get.
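A minimal sketch of that mechanism, with an assumed stickiness parameter (the 0.75 value below is a common quarterly calibration chosen for illustration, not something derived from firm behavior):

```python
import random

# Calvo pricing in one line of logic: each period a firm may reset its price only
# with probability (1 - theta); with probability theta the price simply stays stuck.
# theta = 0.75 implies an average price duration of 1 / (1 - theta) = 4 quarters.
theta = 0.75

def quarters_until_reset(theta):
    quarters = 1
    while random.random() < theta:   # the price stays stuck this period
        quarters += 1
    return quarters

draws = [quarters_until_reset(theta) for _ in range(100_000)]
print(sum(draws) / len(draws))       # ≈ 4.0, matching 1 / (1 - theta)
```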

Of course, economists have recognized that Calvo pricing isn’t really a satisfactory final answer, so they have worked to microfound Calvo’s idea through an optimization problem. The most common method, proposed by Rotemberg in 1982, is called the menu cost model. In this model of price setting, a firm must pay some cost every time it changes its prices. This cost ensures that a firm will only change its price when it is especially profitable to do so; small price changes every day no longer make sense. The term menu cost comes from the example of a firm needing to print new menus every time it changes one of its prices, but it can be applied more broadly to encompass every cost a firm might incur by changing a price. Price changes could have an adverse effect on consumers, causing some to leave the store. They could require more advanced computer systems, more complex interactions with suppliers, or new competitive games between firms.
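A stylized sketch of that logic, collapsing every such cost into a single fixed “menu cost” and using invented numbers: the firm tolerates small gaps between its current and optimal price and only pays to adjust when the gap gets large.

```python
# Illustrative menu-cost rule: reset the price only when the profit loss from
# keeping the old price (assumed here to be quadratic in the price gap) exceeds
# the fixed cost of changing it. All parameter values are invented.
def resets_price(current_price, optimal_price, menu_cost=50.0, loss_per_squared_gap=10.0):
    gap = optimal_price - current_price
    loss_from_not_adjusting = loss_per_squared_gap * gap ** 2
    return loss_from_not_adjusting > menu_cost

print(resets_price(10.0, 10.5))   # False: a small gap is not worth printing new "menus"
print(resets_price(10.0, 14.0))   # True: a large gap justifies paying the fixed cost
```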

But by wrapping all of these into one “menu cost,” am I really describing a firm’s problem? A “true” microfoundation would explicitly model each of these features at a firm level, with different speeds of price adjustment and different reasons causing firms to hold prices steady. What do we gain by adding optimization to our models if that optimization has little to do with the problem a real firm would face? Are we any better off with “fake” microfoundations than we were with statistical aggregates? I can’t help but think that the “microfoundations” in modern macro models are simply fancy ways of finagling optimization problems until we get the result we want. Interesting mathematical exercise? Maybe. Improving our understanding of the economy? Unclear.

Other additions to the NK model that purport to move the model closer to reality are also suspect. As Olivier Blanchard describes in his 2008 essay, “The State of Macro:”

Reconciling the theory with the data has led to a lot of unconvincing reverse engineering. External habit formation—that is a specification of utility where utility depends not on consumption, but on consumption relative to lagged aggregate consumption—has been introduced to explain the slow adjustment of consumption. Convex costs of changing investment, rather than the more standard and more plausible convex costs of investment, have been introduced to explain the rich dynamics of investment. Backward indexation of prices, an assumption which, as far as I know, is simply factually wrong, has been introduced to explain the dynamics of inflation. And, because, once they are introduced, these assumptions can then be blamed on others, they have often become standard, passed on from model to model with little discussion

In other words, we want to move the model closer to the data, but we do so by offering features that bear little resemblance to actual human behavior. And if the models we write do not actually describe individual behavior at a micro level, can we really still call them “microfounded”?

And They Still Don’t Match the Data

In Part 10 of this series, I discussed the idea that we might not want our models to match the data. That view is not shared by most of the rest of the profession, however, so let’s judge these models on their own terms. Maybe the microfoundations are unrealistic and the shocks misspecified, but, as Friedman argued, who cares about assumptions as long as we get a decent result? New Keynesian models also fail in this regard.

Forecasting in general is very difficult for any economic model, but one would expect that after 40 years of toying with these models, getting closer and closer to an accurate explanation of macroeconomic phenomena, we would be able to make somewhat decent predictions. What good is explaining the past if it can’t help inform our understanding of the future? Here’s what the forecasts of GDP growth look like using a fully featured Smets-Wouters NK model:

Source: Gurkaynak et al (2013)

Forecast horizons are quarterly and we want to focus on the DSGE prediction (light blue) against the data (red). One quarter ahead, the model actually does a decent job (probably because of the persistence of growth over time), but its predictive power is quickly lost as we expand the horizon. The table below shows how poor this fit actually is.

Source: Edge and Gurkaynak (2010)

The R^2 values, which represent the amount of variation in the data accounted for by the model, are tiny (a value of 1 would be a perfect match to the data). Even more concerning, the authors of the paper compare a DSGE forecast to other forecasting methods and find it cannot improve on a reduced-form VAR forecast and is barely an improvement over a random walk forecast that assumes that all fluctuations are completely unpredictable. Forty years of progress in the DSGE research program and the best we can do is add unrealistic frictions and make poor predictions. In the essay linked above, Blanchard concludes “the state of macro is good.” I can’t imagine what makes him believe that.

Study Finds That Nobody Changes Their Mind After Reading Fake News

You'll Never Believe What Trump Said About It

The title, in case you didn’t already guess, is fake news. There was no study. But think about your reaction when you read it. Raise your hand if you said “wait a minute, I always thought fake news was a huge deal but I guess this study proved me wrong. I’ll just change my mind without thinking about it at all.” Anyone? Yeah I didn’t think so. And if you aren’t convinced by a headline on a reputable publication such as this one (OK maybe not so much), are you really buying the fake headlines that the Pope backed Trump or that Hillary actually didn’t win the popular vote?

Recently there has been an uproar surrounding these fake headlines. Germany wants Facebook to pay $500,000 for every fake news story that shows up. California (of course) wants to pass a law that will make sure every high school teaches its students how to spot fake news stories. I wish those stories were themselves fake news, but they appear to be all too real.

Now there probably are some people who do read these fake headlines and don’t do their research. Maybe they’ll store it somewhere in the back of their mind and use it as evidence to support their positions in debates with their friends. But I suspect that the only people who believe a fake headline are ones who were already inclined to believe it before they read it. No study has been done, but I’ll make the claim anyway: Nobody changes their mind because of fake news.

(One qualification to the above point is that it may break down if real news were censored. Here I am thinking about a case where the government restricts the media so that propaganda becomes the only source of information. Obviously that would be a major problem)

Perhaps more concerning is that people also don’t seem to change their mind because of real news either. They don’t let the facts guide their positions, but instead seek out the facts that support the positions they already held. Is believing a fake news story any worse than only believing the stories that confirm your preconceived inclinations?

In other words, the problem is not fake news. The problem is confirmation bias. Everyone’s guilty of it. I certainly am. How could you not be? With the internet at your fingertips, evidence supporting nearly any argument is freely available. And I don’t just mean op-eds or random blog posts. Even finding academic research to support almost anything has become incredibly easy.

Let’s say you want to take a stand on whether the government should provide stimulus to get out of a recession. Is government spending an effective way to restore growth? You want to let the facts guide you so you turn to the empirical literature. Maybe you start by looking at the work of Robert Barro, a Harvard scholar who has dedicated a significant portion of his research to the size of the fiscal multiplier. Based on his findings, he has argued that using government spending to combat a recession is “voodoo economics.” But then you see that Christina Romer, an equally respected economist, is much more optimistic about the effects of government spending. And then you realize that you could pick just about any number for the spending multiplier and find some paper that supports it.

So you’re left with two options. You can either spend a lifetime digging into these dense academic papers, learning the methods they use, weighing the pros and cons of each of their empirical strategies, and coming to a well-reasoned conclusion about which seems the most likely to be accurate. Or you can fall back on ideology. If you’re conservative, you share Barro’s findings all over your Facebook feed. Your conservative friends see the headline and think “I knew it all along, those Obama deficits were no good,” while the liberals come along and say, “You believe Barro? His findings have been debunked. The stimulus saved the economy.” And your noble fact finding mission ends in people digging in their heels even further.

That’s just one small topic in one field. There’s simply no way to have a qualified, fact-driven opinion on every topic. To take a position, you need to have a frame to view the world through. You need to be biased. And this reality means that it takes very little to convince us of things that we already want to believe. Changing your mind, even in the face of what could be considered contradictory evidence, becomes incredibly hard.

I don’t have a solution, but I do have a suggestion. Stop pretending to be so smart. On every issue, no matter what you believe, you’re very likely to either be on the wrong side or have a bad argument for being on the right side. What do the facts say, you ask? It would only be a slight exaggeration to say that they can show pretty much anything you want. I’ve spent most of my time the last 5 or so years trying to learn economics. Above all else, I’ve learned two things in that time. The first is that I’m pretty confident I have no idea how the economy works. The second is something I am even more confident about: you don’t know how it works either.

Please Don’t Audit the Fed


Rand Paul, following in his father’s footsteps, has re-introduced a plan to audit the Fed. Trump supports the plan and even Bernie Sanders voted for it the last time the bill was put up before Congress. I have no idea why. There might be an argument for ending the Fed entirely. Larry White gives a good summary of the argument in this video. I’ve written about the American Free Banking System, which worked reasonably well in the absence of a central bank (although the evidence is mixed). And maybe ending the Fed is the ultimate goal of Fed audit supporters and this bill is just a symbolic victory. But it really does essentially nothing useful and could potentially have detrimental effects.

In the press release linked above, Thomas Massie says “Behind closed doors, the Fed crafts monetary policy that will continue to devalue our currency, slow economic growth, and make life harder for the poor and middle class. It is time to force the Federal Reserve to operate by the same standards of transparency and accountability to the taxpayers that we should demand of all government agencies.” Even besides the fact that there is no serious economic analysis I know of that says the Fed makes life harder for the poor and middle class, this statement is complete nonsense.

Does the Fed need to be more transparent? I have a hard time seeing how it possibly could be. The Fed already posts on its website more information than anyone other than an academic economist could possibly want to know. Transcripts from their meetings, their economic forecasts, justifications for interest rate changes – it’s all there in broad daylight for anybody to read. As David Wessel points out in an excellent Q&A on auditing the Fed, the Government Accountability Office (GAO) already knows everything important about most of the assets held by the Fed.

The only possible change that could come as a result of auditing the Fed is more influence by congress over Federal Reserve decisions. That’s terrifying. Whatever your opinion of the Fed, it’s impossible to deny that it’s run by incredibly smart people who have dedicated their lives to understanding monetary policy. That doesn’t make them infallible. I’m all for taking power away from experts, for decentralizing and allowing markets to control money. But if we’re going to allow a group of individuals to decide the policy, at least let them be people who have some idea what they’re talking about. You might not love Janet Yellen, or Ben Bernanke, or Alan Greenspan, but I can’t imagine anyone would prefer monetary policy to be run by congress. Think about the arguments over raising the debt ceiling. Do we want that every time the Fed tries to make a decision? I certainly don’t.

I can absolutely criticize monetary policy. The Fed has consistently come in below its 2% inflation target for about 10 years now, even during years when unemployment was far from its natural rate. Maybe an NGDP target would be an improvement over the current dual mandate. And maybe we don’t need a central bank at all. I’m not opposed to monetary reform. But I can’t get behind a bill that only appears to make conducting monetary policy more political.