All ownership derives from occupation and violence. When we consider the natural components of goods, apart from the labour components they contain, and when we follow the legal title back, we must necessarily arrive at a point where this title originated in the appropriation of goods accessible to all. Before that we may encounter a forcible expropriation from a predecessor whose ownership we can in its turn trace to earlier appropriation or robbery. That all rights derive from violence, all ownership from appropriation or robbery, we may freely admit to those who oppose ownership on considerations of natural law. But this offers not the slightest proof that the abolition of ownership is necessary, advisable, or morally justified.
Ludwig von Mises, Socialism, 1922
Part 2 in a series of posts on modern macroeconomics. Part 1 covered Keynesian economics, which dominated macroeconomic thinking for around thirty years following World War II. This post will deal with the reasons for the demise of the Keynesian consensus and introduce some of the key components of modern macro.
The Death of Keynesian Economics
Although Keynes had contemporary critics (most notably Hayek, if you haven’t seen the Keynes vs Hayek rap videos, stop reading now and go watch them here and here), these criticisms generally remained outside of the mainstream. However, a more powerful challenger to Keynesian economics arose in the 1950s and 60s: Milton Friedman. Keynesian theory offered policymakers a set of tools that they could use to reduce unemployment. A key empirical and theoretical result was the existence of a “Phillips Curve,” which posited a tradeoff between unemployment and inflation. Keeping unemployment under control simply meant dealing with slightly higher levels of inflation.
Friedman (along with Edmund Phelps), argued that this tradeoff was an illusion. Instead, he developed the Natural Rate Hypothesis. In his own words, the natural rate of unemployment is
the level that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labour and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies, and labor availabilities, the costs of mobility, and so on.
Milton Friedman, “The Role of Monetary Policy,” 1968
If you don’t speak economics, the above essentially means that the rate of unemployment is determined by fundamental economic factors. Governments and central banks cannot do anything to influence this natural rate. Trying to exploit the Phillips curve relationship by increasing inflation would fail because eventually workers would simply factor the inflation into their decisions and restore the previous rate of employment at a higher level of prices. In the long run, printing money could do nothing to improve unemployment.
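Friedman's logic can be sketched numerically with an expectations-augmented Phillips curve. This is a minimal illustration with made-up parameters (a natural rate of 5%, a slope of 0.5, and adaptive expectations), not a calibration of any real economy: unemployment dips below the natural rate only while actual inflation exceeds expected inflation, and returns to it once workers catch on.

```python
# Sketch of the natural rate hypothesis (illustrative parameters only):
#   u_t = u_nat - a * (pi_t - pi_expected_t)
# with expected inflation adapting toward actual inflation each period.

def simulate_unemployment(pi_target, periods, u_nat=5.0, a=0.5, adjust=0.5):
    """Hold inflation at pi_target and let expectations catch up adaptively."""
    pi_expected = 0.0
    path = []
    for _ in range(periods):
        # Unemployment falls only when inflation surprises workers
        u = u_nat - a * (pi_target - pi_expected)
        path.append(u)
        # Workers gradually revise expectations toward actual inflation
        pi_expected += adjust * (pi_target - pi_expected)
    return path

path = simulate_unemployment(pi_target=4.0, periods=20)
# Unemployment dips at first but climbs back to the natural rate of 5%
# as the inflation surprise fades: printing money buys only a temporary dip.
```

The short-run dip and long-run return to 5% is exactly the "illusion" Friedman and Phelps described: only unanticipated inflation moves unemployment.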
Friedman’s theories couldn’t have come at a better time. In the 1970s, the United States experienced stagflation: high levels of both inflation and unemployment, an impossibility in the Keynesian models but easily explained by Friedman’s theory. The first blow to Keynesian economics had been dealt. It would not survive the fight that was still to come.
The Lucas Critique
While Friedman may have provided the first blow, Robert Lucas landed the knockout punch on Keynesian economics. In a 1976 article, Lucas noted that standard economic models of the time assumed people were excessively naive. The rules that were supposed to describe their behavior were invariant to policy changes. Even when they knew about a policy in advance, they were unable to use the information to improve their forecasts, ensuring that those forecasts would be wrong. In Lucas’s words, they made “correctibly incorrect” forecasts.
The Lucas Critique was devastating for the Keynesian modeling paradigm of the time. By estimating the relationships between aggregate variables based on past data, these models could not hope to capture the changes in individual actions that occur when the economy changes. In reality, people form their own theories about the economy (however simple they may be) and use those theories to form expectations about the future. Keynesian models could not allow for this possibility.
At the heart of the problem with Keynesian models prior to the Lucas critique was their failure to incorporate individual decisions. In microeconomics, economic models almost always begin with a utility maximization problem for an individual or a profit maximization problem for a firm. Keynesian economics attempted to skip these features and jump straight to explaining output, inflation, and other aggregate features of the economy. The Lucas critique demonstrated that this strategy produced some dubious results.
The obvious answer to the Lucas critique was to try to explicitly build up a macroeconomic model from microeconomic fundamentals. Individual consumers and firms returned once again to macroeconomics. Part 3 will explore some specific microfounded models in a bit more detail. But first, I need to explain one other idea that is essential to Lucas’s analysis and to almost all of modern macroeconomics: rational expectations.
Think about a very simple economic story where a firm has to decide how much of a good to produce before they learn the price (such a model is called a “cobweb model”). Clearly, in order to make this decision a firm needs to form expectations about what the price will be (if they expect a higher price they will want to produce more). In most models before the 1960s, firms were assumed to form these expectations by looking at the past (called adaptive expectations). The problem with this assumption is that it creates predictable forecast errors.
For example, assume that all firms simply believe that today’s price will be the same as yesterday’s. Then when yesterday’s price was high, firms produce a lot, pushing down today’s price. Tomorrow, firms expect a low price and so produce very little, which pushes the price back up. A smart businessman would quickly realize that his expectations are always wrong and try to find a new way to predict future prices.
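The cobweb story above can be simulated in a few lines. This is a minimal sketch with assumed linear demand and supply parameters (none of the numbers come from the post): firms set quantity based on yesterday's price, and the resulting price path oscillates around equilibrium, overshooting in alternating directions, which is exactly the predictable forecast error a smart businessman would exploit.

```python
# Minimal cobweb-model sketch (linear demand and supply, illustrative values).
# Demand:  p = a - b * q
# Supply:  q_t = c + d * E[p_t], with adaptive expectations E[p_t] = p_{t-1}

def simulate_cobweb(p0, periods, a=10.0, b=1.0, c=0.0, d=0.8):
    """Return the price path when firms naively expect last period's price."""
    prices = [p0]
    for _ in range(periods):
        expected = prices[-1]      # adaptive expectations: yesterday's price
        quantity = c + d * expected  # firms commit to output before seeing price
        price = a - b * quantity     # market-clearing price given that quantity
        prices.append(price)
    return prices

path = simulate_cobweb(p0=8.0, periods=6)
# A high price yesterday -> large supply today -> low price today, and so on:
# the forecast error flips sign every period, a systematically wrong rule.
```

With these parameters the oscillations dampen toward the equilibrium price (here 50/9 ≈ 5.56), but at every step the naive forecast is on the wrong side of it.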
A similar analysis led John Muth to introduce the idea of rational expectations in a 1961 paper. Essentially, Muth argued that if economic theory predicted a difference between expectation and result, the people in the model should be able to do the same. As he describes, “if the prediction of the theory were substantially better than the expectations of the firms, then there would be opportunities for ‘the insider’ to profit from the knowledge.” Since only the best firms would survive, in the end expectations will be “essentially the same as the predictions of the relevant economic theory.”
Lucas took Muth’s idea and applied it to microfounded macroeconomic models. Agents in these new models were aware of the structure of the economy and therefore could not be fooled by policy changes. When a new policy is announced, these agents could anticipate the economic changes that would occur and adjust their own behavior. Microfoundations and rational expectations formed the foundation of a new kind of macroeconomics. Part 3 will discuss the development of Dynamic Stochastic General Equilibrium (DSGE) models, which have dominated macroeconomics for the last 30 years (and then in part 4 I promise I will actually get to what’s wrong with modern macro).
“If you think that there has never been a better time to be alive — that humanity has never been safer, healthier, more prosperous or less unequal — then you’re in the minority. But that is what the evidence incontrovertibly shows.”
If all had to wait for better things until they could be provided for all, that day would in many instances never come. Even the poorest today owe their relative material well-being to the results of past inequality.
F.A. Hayek, The Constitution of Liberty, p.98
Democrats want to expand the role of government. Republicans want to shrink it. At least, that’s what their rhetoric says. The story becomes a bit harder to believe when looking at government spending statistics. Consider two presidents as an example. President A increased spending by $357 billion over the first seven years of his presidency. Over the same period, President B increased spending by around half as much, $160 billion. Who is President A? The legendary champion of small government, Ronald Reagan. President B? None other than the evil Kenyan dictator, Barack Obama himself.
Let’s look at a graph of total spending per capita over time (data from the BEA). I don’t see any clear party breaks. The one big slowdown in spending in the 1990s coincides with Clinton (a Democrat).
Breaking the data down by president makes the point even clearer. The table below shows how much spending per capita increased over each president’s tenure.
Note that Obama’s numbers are skewed by stimulus spending. Measuring from 2009 Q2 drops the increase to $153.
Overall, Republicans have held office for 36 years since 1953 and increased government spending per capita by $5310 during that time ($148 per year on average). Democrats were in power for 27 years and increased spending per capita by $3167 ($117 per year). Not a single president in either party has actually reduced the size of government by this measure. A natural question is whether control of the Senate or House is more important than the president. I didn’t calculate the numbers, but I doubt it would help the Republicans, who had control of the Senate during the expansions in spending under both Reagan and Bush.
Bill McBride at Calculated Risk keeps a tally of public and private sector jobs added by president. By those numbers, Obama is the only president since Carter to decrease the total number of public sector jobs. Again, there is no clear relationship between party affiliation and number of jobs added.
On taxes, the picture looks strikingly different. Performing the same exercise shows that Republicans reduced taxes by $23 per capita per year, while Democrats increased taxes by $278 per capita per year. So maybe you could argue that the Republicans have kept half of their small government promise. But both parties clearly like to spend. At least the Democrats seem to care about paying for it.
It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.
Alfred North Whitehead, An Introduction to Mathematics, 1911
Part 1 in a series of posts on modern macroeconomics. This post focuses on Keynesian economics in order to set the stage for my explanation of modern macro, which will begin in part 2.
If you’ve never taken a macroeconomics class, you almost certainly have no idea what macroeconomists do. Even if you have an undergraduate degree in economics, your odds of understanding modern macro probably don’t improve much (they didn’t for me, at least; I had no idea what I was getting into when I entered grad school). The gap between what is taught in undergraduate macroeconomics classes and the research that is actually done by professional macroeconomists is perhaps larger than in any other field. Therefore, for those of you who made the excellent choice not to subject yourself to the horrors of a first year graduate macroeconomics sequence, I will attempt to explain in plain English (as much as possible) what modern macro is and why I think it could be better.
But before getting to modern macro itself, it is important to understand what came before. Keep in mind throughout these posts that the pretense of knowledge is quite strong here. For a much better exposition that is still somewhat readable for anyone with a basic economic background, Michael De Vroey has a comprehensive book on the history of macroeconomics. I’m working through it now and it’s very good. I highly recommend it to anyone who is interested in what I say in this series of posts.
Although Keynes was not the first to think about business cycles, unemployment, and other macroeconomic topics, it wouldn’t be too much of an exaggeration to say that macroeconomics as a field didn’t truly appear until Keynes published his General Theory in 1936. I admit I have not read the original book (but it’s on my list). My summary here will therefore be based on my undergraduate macro courses, which I think capture the spirit (but probably not the nuance) of Keynes.
Keynesian economics begins by breaking aggregate spending (GDP) into four pieces. Private spending consists of consumption (spending by households on goods and services) and investment (spending by firms on capital). Government spending on goods and services makes up the rest of domestic spending. Finally, net exports (exports minus imports) is added to account for foreign expenditures. In a Keynesian equilibrium, spending is equal to income. Consumption is assumed to be a fraction of total income, which means that any increase in spending (like an increase in government spending) will cause an increase in consumption as well.
An important implication of this setup is that increases in spending increase total income by more than the initial increase (called the multiplier effect). Assume that the government decides to build a new road that costs $1 million. This increase in expenditure immediately increases GDP by $1 million, but it also adds $1 million to the income of the people involved in building the road. Let’s say that all of these people spend 3/4 of their income and save the rest. Then consumption also increases by $750,000, which then becomes other people’s incomes, adding another $562,500, and the process continues. Some algebra shows that the initial increase of $1 million leads to an increase in GDP of $4 million. Similar results occur if the initial change came from investment or changes in taxes.
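The "some algebra" here is a geometric series: each round of re-spending adds a fraction MPC of the previous round, so the total effect is the initial spending times 1/(1 − MPC). A quick sketch confirms the post's numbers (the function name is my own, not standard terminology):

```python
# Keynesian spending multiplier from the road-building example:
# each round, recipients spend a fraction MPC (here 3/4) of their new income.

def total_income_change(initial_spending, mpc, rounds=1000):
    """Sum the rounds of re-spending: initial * (1 + mpc + mpc^2 + ...)."""
    total = 0.0
    injection = float(initial_spending)
    for _ in range(rounds):
        total += injection      # this round's spending becomes income
        injection *= mpc        # a fraction MPC of it is re-spent next round
    return total

boost = total_income_change(1_000_000, mpc=0.75)
# The series converges to initial / (1 - mpc) = $1,000,000 * 4 = $4,000,000,
# matching the $4 million figure in the example above.
```

The closed form 1/(1 − MPC) is why a higher propensity to consume implies a larger multiplier: at MPC = 0.9 the same $1 million road would raise GDP by $10 million.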
The multiplier effect also works in the other direction. If businesses start to feel pessimistic about the future, they might cut back on investment. Their beliefs then become self-fulfilling as the reduction in investment causes a reduction in consumption and aggregate spending. Although the productive resources in the economy have not changed, output falls and some of these resources become underutilized. A recession occurs not because of a change in economic fundamentals, but because people’s perceptions changed for some unknown reason – Keynes’s famous “animal spirits.” Through this mechanism, workers may not be able to find a job even if they would be willing to work at the prevailing wage rate, a phenomenon known as involuntary unemployment. In most theories prior to Keynes, involuntary unemployment was impossible because the wage rate would simply adjust to clear the market.
Keynes’s theory also opened the door for government intervention in the economy. If investment falls and causes unemployment, the government can replace the lost spending by increasing its own expenditure. By increasing spending during recessions and decreasing it during booms, the government can theoretically smooth the business cycle.
The above description is Keynes at its most basic. I haven’t said anything about monetary policy or interest rates yet, but both of these were essential to Keynes’s analysis. Unfortunately, although The General Theory was a monumental achievement for its time and probably the most rigorous analysis of the economy that had been written, it is not exactly the most readable or even coherent theory. To capture Keynes’s ideas in a more tractable framework, J.R. Hicks and other economists developed the IS-LM model.
I don’t want to give a full derivation of the IS-LM model here, but the basic idea is to model the relationship between interest rates and income. The IS (Investment-Savings) curve plots all of the points where the goods market is in equilibrium. Here we assume that investment depends negatively on interest rates (if interest rates are high, firms would rather put their money in a bank than invest in new projects). A higher interest rate then lowers investment and decreases total income through the same multiplier effect outlined above. Therefore we end up with a negative relationship between interest rates and income.
The LM (Liquidity Preference-Money) curve plots all of the points where the money market is in equilibrium. Here we assume that the money supply is fixed. Money demand depends negatively on interest rates (since a higher interest rate means you would rather keep money in the bank than in your wallet) and positively on income (more cash is needed to buy stuff). Together these imply that a higher level of income results in a lower interest rate required to clear the money market. An equilibrium in the IS-LM model comes when both the money market and the goods market are in equilibrium (the point where the two lines cross).
The above probably doesn’t make much sense if you haven’t seen it before. All you really need to know is that an increase in government spending or investment shifts the IS curve right, which increases both income and interest rates. If the central bank increases the money supply, the LM curve shifts right, increasing income and decreasing interest rates. Policymakers then have two powerful options to combat economic downturns.
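For anyone who wants to see those two comparative statics concretely, here is a toy linear IS-LM system with made-up coefficients (nothing here is calibrated to data). Solving the two equations simultaneously and nudging either autonomous spending or the money supply reproduces exactly the shifts described above:

```python
# Toy linear IS-LM model (illustrative, made-up coefficients):
#   IS:  Y = A - b * r     (A bundles autonomous spending, including G)
#   LM:  M = k * Y - h * r (money demand rises with income, falls with r)

def islm_equilibrium(A, M, b=20.0, k=0.5, h=10.0):
    """Solve the two linear equations for income Y and the interest rate r."""
    Y = (h * A + b * M) / (h + b * k)
    r = (k * Y - M) / h
    return Y, r

Y0, r0 = islm_equilibrium(A=100.0, M=40.0)   # baseline equilibrium
Y1, r1 = islm_equilibrium(A=110.0, M=40.0)   # fiscal expansion: IS shifts right
Y2, r2 = islm_equilibrium(A=100.0, M=50.0)   # monetary expansion: LM shifts right
# Fiscal expansion raises both income and the interest rate;
# monetary expansion raises income but lowers the interest rate.
```

With these particular numbers the baseline solves to Y = 90, r = 0.5, and the two experiments move Y and r in the directions the text describes; the qualitative pattern holds for any positive b, k, h.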
In the decades following Keynes and Hicks, the IS-LM model grew to include hundreds or thousands of equations that economists attempted to estimate econometrically, but the basic features remained in place. However, in the 1970s, the Keynesian model came under attack due to both empirical and theoretical failures. Part 2 will deal with these failures and the attempts to solve them.
I have a lot to say on this topic. Over the next couple weeks I’ll have a bunch of posts dealing with it.
Individual man is the product of a long line of zoological evolution which has shaped his physiological inheritance. He is born the offspring and the heir of his ancestors, and the precipitate and sediment of all that his forefathers experienced are his biological patrimony. When he is born, he does not enter the world in general as such, but a definite environment. The innate and inherited biological qualities and all that life has worked upon him make a man what he is at any instant of his pilgrimage. They are his fate and destiny. His will is not “free” in the metaphysical sense of this term. It is determined by his background and all the influences to which he himself and his ancestors were exposed.
Ludwig von Mises, Human Action
Note: Although this post is my longest so far, 1500 words is not nearly enough to form a full argument on a topic as big as free will. I’m also not a philosopher so this post is very much outside my area of expertise. I don’t expect the words here to convince anybody, but they are what I believe and I hope they give some idea of the philosophy that drives much of my thinking on economics and politics.
I used to believe in free will. In fact, I hardly considered the idea that there was any other possibility. Success, I thought, was built entirely on making good choices, failure on making poor ones. And perhaps it was because I had been relatively successful that I was so averse to believing those choices came from anything other than my own free will. I chose to work hard, chose to stay out of trouble, chose to be respectful to others. Those choices led to good grades, good relationships, and a good life. Most importantly, those choices were mine.
Nobody can say I’ve never changed my mind.
“I don’t believe in free will.” I don’t remember how the conversation got to that point, but I do remember my feeling of disbelief when I heard a friend say those words at dinner one night in the UMass dining hall. My initial thought was to throw my fork at him, to flip over my plate of food, to do something so unpredictable that he would be forced to admit that the only explanation was that I did it of my own free will. Of course, I didn’t do any of those things. I did spend the next hour arguing with him and several hours after that furiously googling to find evidence that free will was real. What I found instead was that Einstein believed “everything is determined, the beginning as well as the end, by forces over which we have no control” and that neuroscience has offered evidence that free will is nothing but an illusion. The debate over free will vs determinism is complex and has not been settled, but in reading the various arguments, I consistently found myself drawn more to those that opposed free will. It was time to admit I was wrong.
Think about anything you’ve ever done, any choice you’ve ever made. Now try to think about the reason that you made the decision in the way that you did. I can’t think of any action I’ve ever taken that wasn’t fully shaped by my past experiences. For me, free will means that if I could repeat the exact same situation twice, I would make a different choice each time. But why would that be? The choices I make are the ones I believe to be optimal (broadly defined) at any given point in time and what I believe to be optimal can only come from what I have learned in the past. I was lucky enough to be born into a stable family in the richest country in the world. If I were an orphan growing up in inner-city Boston, would I be where I am today? Unlikely. If I were born in Kenya? Not a chance. The opposite thought experiment is also revealing. If somebody else lived through my life in exactly the same way I have until this point, wouldn’t they just be me? Why would they make any choice differently from the ones I make?
Take away all the people you’ve met in your life. Take away all of the books you’ve ever read, all the movies you’ve ever seen, all the websites you’ve ever visited. Take away your genetics. You didn’t have control over any of those things, so they can’t be the result of your free will. But without them, what is left? Are you anything more than everything you have ever experienced? I’m not so sure.
Imagine that you could see your future, not only the one that actually occurs, but every possible future based on every possible action you could ever take. In this case, it should be clear that there is only one set of choices that make sense, the ones that lead to the best life (again defining best in the broadest possible way – I am not saying that people have to be “rational” in the usual economic sense of the word). Adding uncertainty complicates the problem, but it doesn’t change the basic principle. Every decision has to have a reason for being made, regardless of whether that reason is based on a complicated calculation, a simple heuristic, or pure instinct. If I knew everything about you, from your memories to your genetics, from your predictions of the future to your emotional state, I claim that I would have enough information to predict exactly what you would do next because I would be able to observe exactly the thought process that led you to make the decision.
Although determinism seems like a scary idea – your path is already set before you make your choice – the alternative is actually worse. There is only one way out I can see, one hope for “free will,” and that’s pure randomness. Again consider the situation where you live the same life in the exact same way up to a decision point. There are only two options. Either you make the same decision each time, which is then by definition a decision that could be predicted given full information, or you make a different decision given the same circumstances, which means there must be randomness involved. It can’t be randomness like flipping a coin – knowing the speed at which your thumb moves, the air resistance, the weight of the coin, and every other factor means the result is fully determined before the coin lands. Quantum mechanics appears to offer the possibility of true randomness, where only the probabilities of an event can ever be predicted, and some have used this phenomenon as an argument against determinism. But decisions being driven by pure randomness does not really support free will either, at least not in the way most people would think about it. In this version of the story, your path may not be fully determined, but now it’s completely random. Either way, your choices have nothing to do with you.
Now you might be a bit confused. The previous argument is starting to sound something like “you didn’t build that,” a rallying cry for Democrats like Obama and Elizabeth Warren to justify taxing the rich. And there is some logic to that connection. If you aren’t the source of your own actions, why should you be responsible for their consequences? Sure, most rich people worked pretty hard to make their money, but even that hard work only came because they had good influences pushing them in the right direction. Other individuals and institutions (yes, even the government) deserve much of the credit for their success and it therefore seems natural to redistribute some of the rewards. So how do I reconcile determinism with libertarianism? Why is my blog dedicated to Hayek rather than Karl Marx?
It’s all about incentives.
If people’s choices are entirely determined by outside forces, then the institutions and laws that define the environment in which they live become especially important. Without free will, taxing the rich is no longer a question of fairness, but without the reward, why should an entrepreneur take the risks required to start a new business? Why should a manager put in the work required to make that business run well? Why should a worker do any more than the bare minimum to avoid getting fired? Conceding that all people had help in achieving success is not an argument against allowing them to keep all of the benefits of that success, because those benefits are themselves one of the most important factors in pushing them to make good decisions.
In other words, I’m not a libertarian because it’s right. I’m a libertarian because it works. Free will doesn’t exist, but acting as if it does ensures that incentives push people to act in ways most would consider desirable.
So why should we care? If we are better off pretending free will exists even if it doesn’t, why bother with this discussion at all? I think it has to do with perspective. Next time somebody does something you disagree with, remember that they come from a very different place than you do. They didn’t have your parents, your teachers, or your friends and therefore they could never have developed the same set of values you have. It doesn’t excuse immoral actions, but it does shift the blame. Rather than hating people who do bad things, rather than striving for vengeance or harsh punishments, the goal should always be to change the environment that caused those actions in the first place.
But more importantly, rejecting free will can give you a better perspective on your own life. Every person’s life will be filled with success and failure, and it’s easy to take these results personally, to admire your own achievements and blame yourself for your flaws. These feelings are important – they drive you to become a better person. But I’ve found that recognizing the influence others have had on your life can remind you to never get too high or low. When you succeed, stay humble. And when you fail, remember that it’s not your fault.