Chapter 5. Intellectual Blinkers and Unexpected Spillovers

Author(s): Tamim Bayoumi
Published Date: October 2017

The 2008–12 North Atlantic financial crisis was so destructive in part because it was so unexpected, and hence the recipients were unprepared to absorb the blow. In early 2008, most economists and policymakers still felt that the excesses becoming apparent in US subprime mortgages were a minor financial hiccup that would soon be overcome. A few months later, when the extent of the problems in US financial markets became apparent after the rescue of the investment bank Bear Stearns, European policymakers remained confident that the fallout would be limited to the United States. This erroneous belief that the two regions were not closely linked was tellingly illustrated when the European Central Bank (ECB) hiked rates in July 2008 on fears of rising inflation, consciously moving in the opposite direction to the easing by the US Federal Reserve on concerns over decelerating output. The ECB was in for a rude awakening.

This lack of preparation mattered as policymakers faced unexpected challenges, resulting in responses that were improvised, piecemeal, and included a good number of mistakes. In the United States, this included the fateful decision to allow the US investment bank Lehman Brothers to go bankrupt despite the fact that it stood outside the procedures available to insolvent regulated banks for limiting the resulting financial fallout. It was the chaos from the sudden closing of Lehman Brothers that led to a generalized market panic. In the Euro area, the lack of preparation for a crisis was evident in the rules that severely limited the support that could be given to troubled banks or governments. The absence of such support led to destructive cycles in which questions about the solvency of banks strained government finances even as questions about government finances put pressure on banks. The resulting upward spiral in borrowing costs almost destroyed the currency union.

Why did nearly all policymakers and economists miss the warning signals of a future financial crisis?1 The answer lay in the growing belief that financial and economic shocks were limited in scope and temporary in nature, supported by a view that, even in the unlikely event that there was a large shock, astute policy responses would be able to contain the resulting economic fallout. Three separately derived beliefs—the efficient markets hypothesis, the great moderation, and benign neglect—intertwined to create the intellectual blind spots that led policymakers to the Panglossian belief that the unsustainable boom of the early 2000s was a new normal that reflected a permanent improvement in economic understanding, information, and policies.

The outcome was that central banks, finance ministries, and other key economic institutions became increasingly inward-looking. An outsized belief in the self-healing properties of the economy led policymakers to create a system that appeared efficient but was not robust. There is a reason that human beings have the failsafe of a second kidney despite the fact that one is sufficient most of the time. If one kidney fails, then the second one can function on its own. The intellectual overconfidence that preceded the North Atlantic crisis led to a fragile system that was incapable of responding effectively to the shock waves generated by the bankruptcy of a second-order US investment bank. The North Atlantic lacked a second kidney when the first one failed.

* * *

How the Efficient Markets Hypothesis Undermined Bank Regulation

The efficient markets hypothesis is a powerful concept that caused policymakers to underestimate financial risks. Economic theories are valuable because they bring clarity to a complex world, allowing people to organize and make sense of incoming information. However, if they result in overly simplified mental structures then major mistakes can be made. The efficient markets hypothesis lulled economists and policymakers into thinking that financial markets were self-correcting and hence unlikely to be an independent source of major economic disruption despite a plethora of historical evidence to the contrary.2 It was the victory of a convenient theory over historical experience.

The efficient markets hypothesis essentially states that financial prices aggregate all available information, and hence that there is no investment strategy that can consistently beat the market. Put differently, if it is known to investors that (say) shares in General Motors are undervalued and will go up soon, then investors will bid up the price immediately until it reaches this fair value. While the theory and its tests come in softer and harder forms, the idea is that no individual investor or policy institution can beat the wisdom of crowds. This has implications for short- and long-term behavior. The short-term prediction is that day-to-day movements in prices of stocks and bonds are unpredictable except to the extent that they reflect new information. Clearly, if General Motors announces profits that are higher than investors expect then the price of its shares will increase. However, it is equally likely that the profit announcement will disappoint and shares will go down. Hence, before the announcement, the direction of the share price is unpredictable—it is as likely to rise as to fall.

This is often called the random walk theory, as the same properties are exhibited by a random walk in which, while taking a step forward, a person—usually assumed to be drunk—is as likely to lurch to the left as to the right. Because the left-right movement is unpredictable, the best guess as to where the drunk will be at the next step is straight ahead. While there are some modest deviations from the prediction that markets are as likely to surprise on the upside as on the downside, as a whole it stands up pretty well. For example, it is extremely difficult to find mutual funds that consistently beat the short-term returns seen in the overall market. Asset prices do indeed seem to be essentially unpredictable from day to day and month to month.
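
The random-walk prediction is easy to check in a simulation. The sketch below (my illustration, not from the text; the volatility and seed are arbitrary) generates returns that are pure noise and confirms that yesterday’s return says nothing about today’s, so the best forecast of tomorrow’s price is simply today’s price:

    # A minimal random-walk simulation: daily returns are pure "news",
    # so past returns carry no information about future ones.
    import numpy as np

    rng = np.random.default_rng(seed=42)
    returns = rng.normal(loc=0.0, scale=0.01, size=10_000)  # unpredictable news
    prices = 100 * np.exp(np.cumsum(returns))               # price follows a random walk

    # The correlation between consecutive returns is essentially zero...
    autocorr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
    print(f"lag-1 autocorrelation of returns: {autocorr:.4f}")

    # ...so the best guess of tomorrow's price is today's price.
    print(f"today: {prices[-1]:.2f} -> best forecast for tomorrow: {prices[-1]:.2f}")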

The more powerful and controversial element of the efficient markets hypothesis is the prediction that markets also accurately foretell asset values over longer periods. This implies that the market is essentially omniscient, in that not only is the price of General Motors unpredictable tomorrow and next week but that it is unpredictable at any point in the future. There is no point in trying to outguess the market as prices reflect all of the available information about General Motors. The share price in the middle of a recession, when investors seem to have lost heart and General Motors appears cheaply valued on measures such as the ratio of the price of a share to profits, is actually an accurate assessment of the current situation. A more sophisticated version of this argument, used before the crisis by the US Federal Reserve, was that market valuations can on occasion deviate from their long-term fundamentals, but that such outliers are so difficult to identify that it is best to assume financial markets are always right.

The view that markets are good predictors of the future rests on the assumption, commonly made by economists, that the likelihood of different future paths for the economy can be calculated. In other words, that investors can assess what is likely to happen. By contrast, at least one well-known academic and policymaker—Mervyn King, the former governor of the Bank of England—has persuasively argued that in the face of “unknown unknowns” the future may not be calculable because the uncertainty is fundamental and does not have well-defined probabilities.3 For example, even fifteen years ago it may have been impossible to foresee the changes to our lives coming from smartphones, so that it would have been impossible to “predict” the future that actually transpired. In this world, rather than trying to calculate the most likely future, investors may rationally lock into (relatively) synchronized narratives about the path of the economy that can remain stable for long periods but can also change rapidly in the face of unanticipated events. Such narratives have many similarities to Keynes’s famous likening of the stock market to a game in which contestants try to guess which female image most other contestants will pick as the most attractive. From the point of view of the efficient markets hypothesis, the important implication is that investors cannot be omniscient since it is (literally) impossible to accurately assess the future.

In the run-up to the crisis, the conviction that market prices were (almost) always right even in the long run led to a gradual downgrading of the importance of financial risks and regulation. As financial markets became more sophisticated and complex, it became common to assume that they were also becoming better at assessing risks. This attitude led the US Federal Reserve to believe that investors were better at monitoring the risks being taken by investment banks than were government regulators. As Alan Greenspan stated in his memoirs (sent to press in mid-2007 just before the financial warning signs started to appear): “Markets have become too huge, complex, and fast-moving to be subject to twentieth-century regulation … oversight of these transactions is essentially by means of individual-market-participant counterparty surveillance.”4 This view led to the disastrous 1996 decision by the Basel Committee to allow banks to use their own internal risk models to calculate the capital buffers required to offset risks in investment banking. It was also a powerful factor in the decision by the United Kingdom and Ireland to adopt light-touch regulation. If financial market prices could not be usefully questioned, then regulation was not needed.

The mental block on the risks posed by financial markets was also a property of the economic models used by policymakers. Financial markets are messy, different across countries, and ever-changing in focus, making them unsuitable candidates for universal models. By contrast, the macroeconomic models that were used to assess policies, while incorporating complex assumptions about the behavior of consumers and producers, almost uniformly exhibited extremely basic financial sectors built on simplifying assumptions such as that all financial assets were perfect substitutes for each other. They also embodied the efficient markets hypothesis, in that all changes in asset prices reflected “rational expectations” of the future.

The result was that policymakers failed to focus on the consequences of the huge expansion of financial activity and private debt over the 2000s. Financial regulation was focused on ensuring that individual institutions were sound (called micro-prudential regulation) with little regard for risks to the system as a whole. Easy financing conditions were seen as a result of changes in the behavior of households and producers, and the role of lax regulation in the expansion of bank lending was largely ignored. This lack of attention to financial regulation and the risks associated with bank leverage also manifested in an uncritical belief in the benefits of free international capital flows, so that the rapid expansion of core European banks into the United States and the Euro area periphery was seen as a benign development. During the North Atlantic crisis, the losses inflicted on poorly capitalized core European banks and their withdrawal from the US and Euro area periphery amplified and internationalized the initial shock. This important intellectual dynamic is discussed in more detail in the next chapter.

The strong belief in the efficient markets hypothesis among macroeconomists was particularly striking as it was out of step with concurrent trends in financial economics. Within academic finance, models of asset pricing based only on efficient markets had been found wanting. Empirical work had increasingly added variables that reflected market inefficiencies, such as the observation that smaller and relatively less liquid firms attracted higher returns. More generally, skeptics of the efficient markets hypothesis such as Professor Robert Shiller of Yale University argued that consistently following simple strategies such as investing in firms with low share prices compared to their earnings could achieve long-term returns modestly above the average without taking on more risk (the trading philosophy used by Warren Buffett). Against this, however, was always the rejoinder that such results could be a statistical fluke and that, if these strategies really worked, the proponents should already be rich.

The lack of attention to financial risks was also out of step with the historical role played by central banks. While the original central banks, for example the Bank of England and the Banque de France, had initially been set up to handle government finances, by the latter half of the nineteenth century it had become clear that there was also a need for the central bank to maintain financial stability. Walter Bagehot, the celebrated editor of The Economist magazine and father of modern central banking, wrote after the 1866 financial crisis in England that the Bank of England held, should hold, and should be responsible for holding “the sole banking reserve of the country”.5 In 1890, when Baring Brothers was threatened with collapse, the Bank of England organized a rescue in the form of a guarantee fund to which more than £17 million was pledged, mostly from other banks. This demonstrated the responsibility the Bank of England had come to feel for the stability of the banking system. The link to financial stability was even clearer for the Federal Reserve, which was formed in 1913 in large part as a response to the financial panic of 1907, whose widespread economic losses would have been worse but for a personal intervention by John Pierpont Morgan that stabilized markets. Indeed, it is symptomatic of the importance of financial instability in nineteenth-century economics that the common name for what are now called recessions was panics.

The role of the central bank in supporting financial markets was codified in the phrase “lender of last resort”. It gradually became common practice for central banks to lend at a penalty rate to private banks that were unable to secure funding elsewhere, providing breathing room to arrange more organized support. If needed, the central bank would act as a last port of call. In the pre-crisis boom, such facilities covered most of the financial sector in bank-dominated Europe. In the United States, by contrast, they covered only regulated banks that accepted federally insured deposits. Importantly, they did not cover the independent investment banks and the rest of the shadow banking system that grew rapidly in the 1990s and 2000s. This explains why a financial institution of the size and centrality of Lehman Brothers was not able to use the resolution procedures for regulated banks that allowed insolvent institutions to keep functioning for a while in order to limit the financial shock waves from an unexpected cessation of activity. It was the sudden disruption in markets following the abrupt closure of Lehman Brothers that sparked a global market panic.

The belief in the efficient markets hypothesis also led to an under-estimation of the risks from the pre-crisis rapid economic and financial expansion in the Euro area periphery. Once the members of the Euro area had been agreed, the borrowing rates of the member governments converged downwards towards those of Germany. For example, at the time of the introduction of the Euro in January 1999, the Portuguese government was borrowing for ten years at an interest rate of 3.9 percent compared to 3.7 percent for the German government. Three years earlier, well after the Maastricht Treaty had been signed but before the members of the currency union were known, the gap had been over 3 percentage points. A similar process happened with Italy and Spain, as well as Greece when she joined the Euro area in 2001.

The lending boom that accompanied these dramatically lower borrowing costs was ascribed to economic convergence within the monetary union. Economic convergence is the process by which incomes in poorer countries catch up with those in richer ones. This occurs because poor countries have less capital—machines and the like—per worker, so that the benefit of adding an extra machine is greater in a poorer country than in a richer one. This higher return means that the poor country is able to attract more investment, which allows its income to gradually converge to that of the richer country. The general view at the time was that this process had been accelerated by the introduction of the Euro, as the single currency had made it easier and cheaper for less wealthy members of the monetary union to borrow from their richer brethren.
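
The convergence logic can be made concrete with a standard textbook production function (my formalization, not the author’s; the exponent is the conventional one-third). With output per worker y produced from capital per worker k:

    y = k^{\alpha}, \qquad \mathrm{MPK} = \frac{dy}{dk} = \alpha\, k^{\alpha - 1}, \qquad 0 < \alpha < 1.

Because \alpha < 1, the marginal product of capital (MPK) falls as k rises: with \alpha = 1/3, a country with one quarter of its neighbor’s capital per worker earns roughly 4^{2/3} ≈ 2.5 times more on an extra machine, which is what attracts the investment that gradually closes the income gap.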

Careful analysts were aware of some of the oddities of the Euro area convergence process but viewed them as self-correcting.6 For example, the additional borrowing was often used to buy houses rather than machines, so that productivity was not improving in the periphery despite more borrowing. In addition, higher inflation in the periphery was eroding the competitiveness of the exports needed to service the costs of higher foreign debt. However, none of these warning signs were seen as particularly urgent. The Euro area was viewed as a success in which lower borrowing costs and deeper financial markets had allowed greater investment by the periphery that would generate gradual convergence. In fact, little or no fundamental convergence seems to have been achieved over the boom of the 2000s.

In sum, the efficient markets hypothesis reinforced the view that financial markets were largely self-regulating, blinding policymakers to the importance of careful financial market supervision. If assets were always correctly priced, then there was no need for such oversight. This point of view, for example, led the Federal Reserve to advocate using internal models to calculate risk buffers, to leave the US investment banks under the light supervision of the Securities and Exchange Commission, and to soft-pedal consumer protection for mortgage borrowers. More generally, the result was that policymakers on both sides of the Atlantic missed the growing financial and economic imbalances, including in the Euro area periphery. As a result, they had to scramble to provide adequate support to the financial sector when the crisis hit. Similar blinkers were leading to an underestimation of macroeconomic risks, but through a different process.

* * *

The Great Moderation and Overestimation of Monetary Policy’s Effectiveness

If the efficient markets hypothesis was a theory, the “great moderation” was an observation. More precisely, the phrase came from the observation that the volatility of US output had fallen markedly after the mid-1980s.7 The shift was sufficiently large and rapid that it stands out clearly in a graph, as growth becomes noticeably less jagged after 1985 (Figure 40). What was (and remains) so striking is the speed with which growth shifted from high to low volatility. The instability of 1960–85 suddenly switches to relative stability. Formal tests across a range of measures confirmed the impression that there was a sudden improvement in economic stability. By contrast, there was no noticeable increase in growth, explaining why this observation is dubbed the “great moderation” rather than (say) “the great acceleration”. It was the inferences from the great moderation that led policymakers to overestimate their ability to respond to macroeconomic shocks.

Figure 40. US output growth stabilized after the mid-1980s.

Source: US National Accounts.

More stable output implied a smoother business cycle, the regular process by which extended periods of economic expansion are interrupted by short recessions in which output drops. As growth became less volatile, it became less likely that the severe shocks required to create a recession would occur. Similarly, even if a recession did occur it was likely to be shallow. Indeed, these predictions appeared to be coming true. In the twenty years from 1985 to 2005 the US experienced two mild recessions, in comparison to the four generally deeper recessions in the previous twenty years.

As the reality of lower volatility came to be accepted, the debate moved to explaining this moderation. Three main explanations were put forward: structural changes in the US economy, good luck, or better monetary policy. Investigators generally placed some weight on structural changes coming from the adoption of “just in time” delivery as a result of the information technology revolution. The timing worked, as such techniques were introduced in the early 1980s. In addition, the fall in output volatility was accompanied by a reduction in the variability of production relative to sales, consistent with the view that firms were able to use technology to better synchronize production with the demand for their goods. By contrast, the roles of the switch toward production of less volatile services (think restaurant meals versus cars) and of deeper financial markets that made it easier for firms to ride out bad times by borrowing were generally discounted. These were gradual trends, so why did output volatility fall so rapidly in the mid-1980s? In addition, the fall in volatility included sectors that would not have benefited from either trend.8

Good fortune was also acknowledged to play a role. Analysis indicated that most of the reduction in output volatility came from smaller shocks rather than from changes in behavior over the business cycle. Further support for the role of good luck came from the observation that the reduction in output variability seemed to occur equally over the short-, medium-, and long-run. This was what would be expected if good luck was the driving force, as the effects would be uniform regardless of the length of the interval being examined.9

As time went on, however, it became conventional wisdom to attribute the bulk of the great moderation to better monetary policy.10 The timing worked as the rapid reduction of US output volatility occurred soon after Paul Volcker was appointed Chairman of the Federal Reserve and abruptly diverged from the relaxed approach to inflation taken by the Federal Reserve in the 1970s. More generally, the synchronous fall in inflation and output volatility occurred at different times in different countries. Since tighter monetary policy was the driver of lower inflation, this provided strong evidence for a causal link between tighter monetary policy and the great moderation. In response to the observation that there was no change in behavior over the business cycle, proponents of the tighter monetary policy explanation argued that the earlier work looking at the business cycle had not included measures of inflation expectations. They argued that it was the anticipation of low and stable future inflation that was the main conduit through which tighter monetary policy had lowered inflation and reduced output fluctuations.

Under this view the surprise was less that a different monetary policy altered the business cycle than that it could simultaneously improve outcomes for growth and inflation. It was already well established that the conduct of monetary policy could affect the business cycle. Indeed, a growing body of evidence measured how much the Federal Reserve and other central banks hiked or lowered policy rates in response to higher or lower inflation and growth. The associated theories concluded that if a central bank focused on reducing inflation volatility then this would be offset by an increase in output volatility.

The concurrent improvement in growth and inflation variability over the great moderation was ascribed to the fact that earlier monetary policies had been fundamentally flawed. In the 1970s, central banks had made the mistake of ignoring the impact of changes in expectations of future inflation on wages.11 As a result, the Federal Reserve and other central banks had reacted weakly to increases in inflation as they believed that they could permanently run a hotter economy with higher inflation and lower unemployment. In reality, however, this proved to be a chimera as it assumed that workers’ expectations of future inflation remained unchanged. As inflation rose during the 1970s, workers anticipated that inflation would remain high and upped their pay demands beyond what the Federal Reserve predicted. The Federal Reserve thus presided over an upward spiral in wages and prices. When Paul Volcker took over the helm of the Fed in 1979, his strong anti-inflationary stance broke this spiral as workers realized that inflation would fall back to previous levels.

The lesson that policymakers took from this experience was to be skeptical of apparently stable empirical relationships, such as the trade-off between inflation and output that the Federal Reserve had tried to exploit. This was because such “reduced form” relationships that relied on data rather than on economic theory might change unexpectedly since they embedded implicit assumptions such as that inflation expectations were stable. This observation, made forcefully by Robert E. Lucas in 1976 and hence dubbed the Lucas critique, put the onus on developing more complex models in which relationships between (say) a household’s income and its spending were derived from sound theory in which households sought as much satisfaction as possible from a given income stream.

The resulting “dynamic stochastic general equilibrium models” (mercifully dubbed DSGE models for short) became an increasingly important part of how academics and more academically inclined policymakers thought about the economy. The models were, and remain, an impressive intellectual feat, combining deep theory with empirical evidence to create workable models of the economy. While more traditional empirical models based on estimated relationships with less theoretical support continued to be used, the smaller DSGE models were increasingly the preferred choice for policy analysis in the United States and Europe. The increased sophistication of the theory-based models led to a growing view that monetary policy was becoming more scientific, characterized by predictable responses to changes in inflation and economic slack that stabilized current activity by providing confidence that the economy would remain on an even keel. This is not to say that central bank committees literally used these models or the mechanical rules that they assumed policymakers followed. Rather, predictability and consistency were seen as increasingly important virtues for monetary policy.

The dark side of DSGE models, however, was that they tended to make policymakers complacent about risks to the stability of the economy. This partly reflected the simplifications required to put together so many parts of theory—on the behavior of households, firms, financial markets, and governments—into a single model. Tractability often required specific behavioral assumptions to make the edifice work. More basically, however, these models almost uniformly assumed that households and firms were extremely far-sighted. They not only peered into the future, they peered into the far, far future. So, for example, most DSGE models assumed that tax cuts that boosted take-home pay would not result in higher spending because lower taxes today would have to be paid for by higher taxes in the future. Understanding this, individuals would choose to save the entire tax windfall in order to have the money to pay the higher future taxes. This hyper-rational response, which assumed that everyone in the model understood how the economy worked and how policymakers reacted, and hence could rationally predict where the economy was headed, was clearly out of step with reality. As discussed in the section on the efficient markets hypothesis, it is not clear that in a world of “unknown unknowns” such a predictable future even exists. If it does not, then the DSGE apparatus may be built on quicksand as people decide to use rules of thumb to navigate a complex and unpredictable world.
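
The tax-cut example follows from a simple two-period budget constraint, shown here as a minimal sketch under the models’ assumptions of far-sightedness and easy borrowing at interest rate r:

    c_1 + \frac{c_2}{1+r} = (y_1 - t_1) + \frac{y_2 - t_2}{1+r}.

If the government cuts taxes today by \Delta and, to repay its borrowing, raises them tomorrow by (1+r)\Delta, lifetime resources change by \Delta - (1+r)\Delta/(1+r) = 0. Consumption is therefore unchanged, and the entire windfall is saved.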

The flip side to the assumption that far-sightedness made tax cuts ineffective was that it made monetary policy extremely powerful. Monetary policy changes that were anticipated to occur even several years in the future fed back strongly onto current spending and activity. The implication was that as long as the central bank promised to respond gradually and sensibly to developments, it was extremely difficult to generate large booms or recessions even in the face of major shocks. A specific example in which a relatively standard DSGE model was used to analyze the impact of the Euro area crisis may help illustrate this point.12 In the model, the promise of easy future monetary policy created a sufficiently robust recovery that the European Central Bank started raising its policy rate above zero after a year or so. At the time that this paper was written, the ECB had kept policy rates at zero for six years. The stark contrast between the robust recovery in the simulation and the slow one in reality illustrates how DSGE models helped to mislead monetary policymakers into thinking that they had more influence over the economy than they actually did.

This belief in the efficacy of monetary policy was reinforced by the limited economic costs of the collapse of the technology bubble in global stock markets in 2001. The US Federal Reserve, in particular, ascribed the limited impact on the US economy to its swift monetary response. Consequently, rather than viewing the collapse in prices of overvalued equities as a warning sign of the risks from financial instability, the Fed came to believe that financial risks were limited. To quote from a speech given by Vice-Chairman Donald Kohn in 2006:

The health of the US financial system remained solid after the collapse of the high-tech boom … Moreover, the financial sectors of most other developed economies also weathered the worldwide drop in corporate equity values fairly well … I do agree that market corrections can have profoundly adverse consequences if they lead to deflation … but it does not follow that conventional monetary policy cannot adequately deal with the threat of deflation by expeditiously mopping up after the bubble collapses (emphasis added, double negative in the original).13

Symptomatic of this overconfidence in the effectiveness of monetary policy was the Fed’s attitude to the boom in house prices. In early 2006, in response to an article by the European Central Bank suggesting that there were benefits from monetary policy responding aggressively to perceived house price bubbles, Vice-Chairman Kohn set an extremely high bar for any additional monetary policy response to financial risks over and above the impact on inflation:

To wrap up this critique, I summarize as follows: If we can identify bubbles quickly and accurately, are reasonably confident that a tighter monetary policy would materially check their expansion, and believe that severe market corrections have significant non-linear adverse effects on the economy, then extra action may well be merited. But if even one of these tough conditions is not met, then extra action would lead to worse macroeconomic performance over time than that achievable with conventional policies that deal expeditiously with the effects of unwinding bubbles when they occur (emphasis added).14

The message was clear. The Fed thought that it could cope with a major downturn in house prices on its own and without the need to respond preemptively to potential bubbles.

The North Atlantic crisis demonstrated that this assessment was wrong. The limited impact of the collapse of the tech bubble in 2001 reflected the nature of the financial shock rather than the relative sophistication of financial markets. In the global tech bubble investors mainly bought shares with their own money so that, while painful, the resulting losses could be absorbed by the buyers. Things got much more complicated in 2008 because home buyers had typically only put down a small part of the purchase price of the house and borrowed the remainder using a bank mortgage. This “leverage”, where initial seed money (or, on occasion, no such money) was augmented by bank loans, meant that if the borrower was unable to repay the loan, the problems cascaded back to the bank. The knock-on effects of leverage to undercapitalized banks amplified and widened the losses from the housing collapse to the entire North Atlantic economy.
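
The difference between the two episodes comes down to balance-sheet arithmetic. The sketch below (illustrative numbers of my own, not the book’s) contrasts an unlevered share purchase with a house bought with a 10 percent down payment:

    # How leverage shifts and amplifies losses when asset prices fall.

    def split_loss(price: float, down_payment: float, price_fall: float):
        """Return (owner_loss, lender_loss) after a price fall on a levered purchase."""
        loan = price - down_payment
        value_after = price * (1 - price_fall)
        equity_after = value_after - loan
        owner_loss = down_payment - max(equity_after, 0.0)  # equity is wiped out first
        lender_loss = max(loan - value_after, 0.0)          # any shortfall cascades to the bank
        return owner_loss, lender_loss

    # Tech-bubble case: shares bought with own money; a 20% fall is absorbed by the buyer.
    print(split_loss(price=100.0, down_payment=100.0, price_fall=0.20))  # (20.0, 0.0)

    # Housing case: 10% down; the same 20% fall wipes out the owner's equity
    # and pushes the remaining loss onto the lender.
    print(split_loss(price=100.0, down_payment=10.0, price_fall=0.20))   # (10.0, 10.0)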

A second feature of DSGE models that led policymakers astray was that they assumed that excess spending would be reflected in inflation and slack. For all of the complexities of the DSGE models, the regular waxing and waning of spending and activity over the business cycle was basically ascribed to the slow response of inflation to changes in economic slack. As a result, all that was needed to stabilize activity was for monetary policy to respond in an appropriate manner to changes in current and future inflation.

Unfortunately, the imbalances that were building up in the United States and the Euro area periphery over the 2000s were not seen primarily in inflation. The booms primarily involved excessive borrowing, much of which was used to buy houses. The resulting increase in house prices, however, had little impact on the measures of consumer price inflation that the central banks focused on. In the United States, the consumer price index used rents rather than new house prices to estimate the cost of housing, as this is (correctly) seen as a more direct measure of the price of shelter. As rents did not take off in the same way that house prices did, the impact of the housing bubble on consumer price inflation was muted. In the European Union, house prices had no direct impact on consumer price inflation as, in the absence of a uniform way of measuring dwelling costs across the member countries, the consumer price index used by the ECB excluded any measure of housing costs. In addition, much of the additional spending on goods was satisfied by higher imports. This manifested itself in rapid increases in trade deficits rather than in higher domestic inflation. Central banks underestimated the risks from growing financial imbalances because they were focusing on inflation and slack rather than on the excess borrowing behind those imbalances. They were looking under the wrong lamppost.

As a result of this intellectual myopia, policymakers were forced to improvise when activity remained in the doldrums in the wake of the North Atlantic crisis even after monetary policy rates had been lowered to zero. The great moderation convinced central banks that they could stabilize the economy on their own. As a result, macroeconomics largely devolved into the analysis of conventional monetary policy, while the analysis of fiscal, financial, and structural policies, as well as of unconventional monetary policies, was given much less attention, coming well down the academic and policy totem pole. Monetary analysis begat more monetary analysis, to the detriment of a wider view of macroeconomics. When the crisis came, this framework proved wanting, a deficiency compounded by a loss of interest in international policy cooperation.

* * *

How Benign Neglect Undermined International Policy Cooperation

While the efficient markets hypothesis was a theory and the great moderation was an observation, benign neglect was a framework, illustrating yet another way to get things completely wrong. Benign neglect was not a phrase initially coined by economists, in contrast to the efficient markets hypothesis and the great moderation. It was first used in a January 1970 memo to President Richard Nixon from Daniel Patrick Moynihan, later a US senator, who was then serving as Nixon’s counselor for urban affairs. It reflected a rejection of the free-spending approach of the Great Society program as a way of solving US racial problems. In Moynihan’s somewhat cynical view, what was needed was organic progress without rhetoric.

In the realm of international economics, benign neglect refers to the view that countries should look after their own internal affairs and that the benefits from cooperating with other countries are too small to be worth the trouble. In particular, exchange rates across countries and the resulting trade balances are better determined by private markets. In many ways, it takes Adam Smith’s insight that markets and competition can create an efficient economy and extends it from individuals to countries. It had its intellectual roots in the resurgent belief in market economics in the 1980s associated with President Reagan and Prime Minister Thatcher.

Benign neglect represented a major shift from the thinking of the earlier postwar period. With the memory of the international instability of the 1930s Great Depression in mind, economic cooperation and the avoidance of negative spillovers from exchange rate devaluations were central to the design of the Bretton Woods exchange rate system. In this system, described in more detail in the next chapter, countries kept their exchange rates fixed against the US dollar, which itself was fixed against gold, and (at least in theory) any changes in parity against the dollar needed to be discussed with the international community via the International Monetary Fund.

The system was supported by an intellectual framework in which countries could use a combination of fiscal and monetary policy to achieve full employment and a desirable trade balance. The key insight was that, in response to excess economic slack, either a looser monetary policy or a larger government deficit could be used to achieve full employment (“internal balance”), but that the two policies had contrasting effects on the trade balance. Cutting short-term interest rates to stimulate activity via monetary policy would tend to lower the exchange rate, increase exports, and create a larger trade surplus. By contrast, running a larger government deficit for the same purpose would raise interest rates, appreciate the exchange rate, reduce exports, and lower the trade balance. A judicious use of the two instruments allowed policymakers to keep the domestic economy at full employment while also achieving a desired trade balance. The two macroeconomic policies (monetary and fiscal) could combine to achieve the two objectives (full employment and a desirable trade balance).
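
The two-instruments, two-targets logic can be put into a stylized numerical sketch (the response coefficients below are invented for illustration; only their signs follow the argument above):

    # Two instruments, two targets: the output gap and the trade balance both
    # respond to the policy rate (r) and the fiscal deficit (d). With two
    # independent instruments, both targets can be hit at once.
    import numpy as np

    # Assumed (illustrative) linear responses, signs following the text:
    #   output_gap    = -1.0*r + 0.8*d   (rate hikes cool activity; deficits stimulate it)
    #   trade_balance = -0.5*r - 0.6*d   (rate cuts weaken the currency and raise exports;
    #                                     deficits appreciate it and lower the trade balance)
    A = np.array([[-1.0, 0.8],
                  [-0.5, -0.6]])
    targets = np.array([0.0, 1.0])  # full employment and a trade surplus of 1% of GDP

    r, d = np.linalg.solve(A, targets)
    print(f"rate change: {r:+.2f}, deficit change: {d:+.2f}")  # -0.80 and -1.00

With these made-up coefficients, hitting both targets calls for a modest rate cut combined with a smaller deficit, easier money paired with tighter fiscal policy, exactly the mix described above.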

Support for this integrated policy framework started to fray soon after the break-up of the Bretton Woods system in the early 1970s as belief in the active use of fiscal policy waned. This came in large part from a backlash against the increased size of government and the rapid rise in government debt in the 1960s and 1970s. The focus of fiscal policymakers switched from fine-tuning the government deficit so as to achieve full employment to longer-term objectives such as reducing the size of government and the associated tax bill. This was accompanied by a revival of the “Ricardian equivalence” hypothesis in economics, which held that if people could lend and borrow easily, tax cuts or hikes would have small effects on spending, as households and firms would simply save the money from (say) a tax cut to pay the resulting higher future taxes. As discussed earlier, this type of thinking was embodied in DSGE models.

Questions about the efficacy of fiscal stimulus were accompanied by an erosion in the belief that policymakers needed to worry about the trade balance. Coming immediately after the financial turmoil of the 1930s, the Bretton Woods system had been sympathetic to government-imposed constraints on the transfer of money across borders. As time went on, however, these constraints started to be lifted and money flowed more freely, particularly across rich and financially advanced economies. More open international financial flows made it easier for countries to borrow to finance trade deficits and lend the money required to maintain trade surpluses. As access to international markets became more routine, concerns about the risks from trade deficits began to wither. In particular, the growing US trade deficit remained easily financed.

Another element driving the popularity of “benign neglect” was the growing conviction that spillovers between countries were small. The limited role of financial shocks in policy models left trade as the main economic link across economies. Trade is a substantial part of economic activity—in the Euro area, for example, exports and imports each comprise almost 40 percent of output, although the figure is a more modest 15 percent in the United States. Even in Europe, however, apart from some specific close trade relationships (such as Dutch exports to Germany), trade between any given pair of countries is much smaller. For example, even if exports to another country comprise a hefty 5 percent of output, a fall in activity in the foreign country would cause a domestic output loss of only some 5 cents in the dollar. This is not large enough to transmit significant shocks, which weakened the perceived need for messy and complex international negotiations.
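
One way to make the “5 cents in the dollar” arithmetic explicit (my formalization, assuming export demand moves one-for-one with foreign activity):

    \frac{\Delta Y}{Y} \approx s_x \cdot \frac{\Delta Y^{*}}{Y^{*}}, \qquad s_x = 0.05,

so even a steep 10 percent fall in the partner’s activity lowers home output by only about 0.5 percent before any multiplier effects, too small to transmit a major shock.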

The lack of attention to financial market spillovers left many of the risks to the international system undiagnosed. For example, financial globalization was seen as beneficially linking international borrowers and lenders rather than creating potentially dangerous financial booms and busts across countries. As a result, policymakers in the Euro area core spent little time worrying about the rapid increase in international exposures of their large banks. The lack of attention to financial spillovers also explains why the European Central Bank decided that it could safely ignore the growing problems in US financial markets and hike rates in July 2008.

Benign neglect gained in popularity even though an initial experiment by the Reagan Administration had to be reversed. Soon after the 1980 election, the new Reagan Administration adopted an aggressively laissez faire approach to international economic policy. However, the steep appreciation of the dollar between 1980 and 1985, buttressed by an expansionary fiscal policy and tight monetary conditions, triggered protectionist pressures in the US Congress that forced a change in course. This resulted in what many regard as an apogee of international policy coordination, the September 1985 Plaza Agreement, in which the five largest advanced economies agreed to intervene to lower the value of the dollar.15 While the dollar had already started to gradually depreciate over the spring and summer of 1985, the Plaza Agreement led to a much more rapid fall.

Indeed, by early 1987 the dollar was back around its 1980 value and seemed destined to continue depreciating. In response, the same five countries plus Canada negotiated the Louvre Accord of February 1987 to stabilize exchange rates. Unlike Plaza, the Louvre Accord included explicit macroeconomic policy commitments including tax reforms by the German government, monetary and fiscal stimulus by the Japanese government, and a trimming of the US fiscal deficit. However, these relatively loose policy commitments started to create tensions among the signatories. These tensions overflowed when US policymakers did little to oppose renewed dollar depreciation after the stock market crash later in 1987. Support for benign neglect rose as this discord ended the last significant attempt at international policy coordination before the North Atlantic crisis.

The decline in importance of economic cooperation after 1987 is vividly illustrated by the communiqués of successive G7 leaders’ summits.16 The communiqués summarized the views expressed in the most important forum for discussing international economic cooperation. The 1987 Venice G7 summit, held a few months after the Louvre Accord, produced an Economic Declaration of thirty-five paragraphs, of which the first twenty were devoted to the global economy. Paragraph 3 stated: “Since Tokyo, the Summit countries have intensified their economic policy coordination with a view to ensuring internal consistency of domestic policies and their international compatibility” (emphasis added). The three noneconomic declarations were all much shorter (the longest was eight paragraphs). By contrast, at the 2008 Hokkaido Toyako G7 summit, held in the hiatus between the rescue of the US investment bank Bear Stearns and the collapse of Lehman Brothers, only ten of the seventy-two paragraphs in the leaders’ declaration were devoted to broad economic issues, and the only policy commitment was to “promoting a smooth adjustment of global imbalances through sound macroeconomic management and structural policies in our countries as well as in emerging economies and oil producing countries”. As these quotes make abundantly clear, over the intervening twenty years the leaders of the seven most powerful rich nations had essentially given up on organized economic cooperation.

Benign neglect can also be seen in the lack of interest in updating the international monetary system in response to changing realities. For example, the resources of the International Monetary Fund (IMF)—the cornerstone of the international financial safety net—did not keep up with the increase in world trade and (in particular) the expansion of global financial flows. In addition, despite some attempts to recognize the growing economic clout of poorer emerging markets, the most important global economic clubs remained the province of rich nations. The Hokkaido Toyako G7 summit did include a joint statement by the leaders of Brazil, China, India, Mexico, and South Africa, and there were regular meetings of the finance ministers of the G20, a more representative group that included the major emerging markets, among them China. But the main forums for international macroeconomic and financial dialogue remained the province of rich countries, be it the G7, the Basel Committee on Banking Supervision, or the Financial Stability Forum.

Trade was a major exception to this pattern but also illustrated the limitations of universal membership. In 1995, the loosely organized General Agreement on Tariffs and Trade (the GATT) was replaced by the World Trade Organization (WTO), whose wide membership was further enhanced by the accession of China in 2001 and Russia in 2012. The WTO proved a useful venue for dispute settlement, one of the most formalized parts of global economic cooperation. However, the complexities of a near-universal membership were exposed in the tortuous negotiations of successive rounds of international trade liberalization. This culminated in the seemingly never-ending wrangling over the 2001 “Doha” round. While the Doha negotiations were advertised as being about supporting developing countries, they foundered on disagreements over the relative roles of advanced economies and emerging markets in improving the trading system.

The loss of interest in policy cooperation was particularly unfortunate as it played out against the rapid growth in the importance of emerging markets in the global economy. As measured by the IMF, the size of the European Union, home to four of the seven members of the G7, contracted from 30 percent of world output to 20 percent between 1980 and 2007, even as the clout of emerging Asia, unrepresented in the G7, expanded from less than 10 percent of world output to over 20 percent. In particular, the spectacular rise of China as a global economic force was treated as a threat by the western nations, with the United States focusing on countering its fixed exchange rate and rising trade surplus. Indeed, in 2007, with strong US backing, the IMF approved a new framework for surveillance that was seen by virtually all outside commentators as an anti-China move.

Benign neglect also had an impact on European policymakers. Enforcement of the fiscal rules embodied in the Stability and Growth Pact had already started on the back foot, as the major economies allowed a lax approach to the choice of the initial members of the European Monetary Union. However, the lack of attention to cross-country spillovers fostered by benign neglect further eroded the desire to strictly police the rules. The pact’s cap on government deficits of 3 percent of output and its ceiling on government debt of 60 percent of output gradually moved from upper limits to something more like targets. The European Commission was given few powers to enforce the rules. These deficiencies were highlighted in 2009, when the Greek government’s admission of much higher deficits and debt than had earlier been reported led to a Euro area fiscal crisis.

On the eve of the North Atlantic crisis, global economic governance remained the province of rich economies. The global safety net had been allowed to wither while emerging markets felt that their legitimate interest in playing a more important role had been largely ignored and China, the most important member of this group, saw the system as rigged against it. Meanwhile, the Euro area did not really have the kind of solid surveillance that might have spotted increasing fiscal malfeasance. Across the North Atlantic, the main consequence of benign neglect was that policymakers missed the implications of increasing external financing of the US and Euro area periphery housing booms.

* * *

How Economic Models Distorted Policymaking

The properties of economic models have come up repeatedly in this account of how prevailing macroeconomic views lulled policymakers into missing growing risks to the North Atlantic economy. This is intentional, as in many ways these models reflected the conventional orthodoxy of the time among policymakers, academics, and pundits.17 This is not to say that policymakers and commentators necessarily understood the detailed structure of these models, or relied heavily on the results they produced. The point is rather that these models embodied and reinforced views about the economy that created intellectual blinkers, which both allowed the imbalances that generated the North Atlantic crisis to build and produced inflexible policy structures that amplified its costs.

Part of the problem was that the DSGE models that were used extensively by economists had only a limited relationship to the underlying data. As has been emphasized by others, this deficiency was obscured by the use of complex estimation techniques. The end result was that these models tended to reflect the perspectives of their creators—so, for example, monetary policy turned out to be extremely powerful in the models built by central banks but less so in those built by academics.18 More generally, DSGE models reflected the characteristics of the underlying economic models on which they were built, including an emphasis on sophisticated and hyper-rational responses to anticipated future events. This contrasts with more eclectic approaches that assume that people develop simple stories and rules of thumb to guide them through the complexities of the modern economy. As discussed earlier, such behavior may be rational if “unknown unknowns” make accurate calculations of the future impossible.

An important consequence was the growing belief that the economy was largely self-correcting and that policymakers could safely specialize in their own macroeconomic compartments. Monetary policymakers focused on keeping inflation and economic growth steady, fiscal policymakers on keeping debt sustainable, while financial stability and underlying growth were largely taken for granted and the need for international cooperation heavily discounted. The narrow focus of policymakers provides another perspective on how the growing financial risks and imbalances of the 2000s were missed, why the bursting of the financial asset bubbles later in the decade came as such a surprise, and why the policy response was so muddled.

Another consequence was that fiscal policy came to be seen as largely ineffective, shifting the focus of policy to central banks. The consensus in the Bretton Woods era that both monetary policy and fiscal policy had a role to play in stabilizing the economy was increasingly replaced by a view that monetary policy should be the main instrument to stabilize economic fluctuations. Fiscal policy might assist monetary policy by providing “automatic stabilizers”, the process by which, during a downturn in activity, government spending automatically increases while tax revenues decrease, thereby pumping more money into the economy. Active use of fiscal policy, however, was seen as largely ineffective and, to the extent that it was effective, potentially dangerous, given a political process in which fiscal support took a long time to plan and was often difficult to reverse.

In addition, predictability became a cardinal virtue for monetary policymakers since this reduced financial market uncertainty. The possible impact of predictability in encouraging more risk-taking in the financial system was recognized but largely discounted. While there was some debate about the risks from the “Greenspan Put”, whereby the Fed’s prompt easing in response to negative news might encourage investors to think they would always be bailed out and hence start taking larger risks, such concerns about the incentives of investors were not analyzed in depth as they did not fit into the mental structure associated with macroeconomic models.

The design of European Monetary Union in the early 1990s provides a good example of how the intellectual weaknesses affected the policy architecture. As discussed in the last chapter, the plan for monetary union was produced by the Delors Committee that was dominated by central bankers, and hence closely followed the prevailing orthodoxy in macroeconomics. Monetary policy was to be run by an independent central bank whose only mandate was to stabilize inflation and whose actions were free from political interference. Day-to-day financial regulation was left in the hands of decentralized national supervisors so that it would not interfere with the central bank’s independence. Fiscal policy was also national, and flexibility was crimped by rules aimed at lowering debt and constraining deficits. This structure, which looked so good on paper, proved completely inadequate in the face of the financial and fiscal crisis that enveloped Europe from 2009 to 2012.

More generally, DSGE models provided powerful subliminal messages that fed the belief that government intervention and regulation were generally harmful. It has already been noted that most of these models found that cutting taxes or increasing transfers (such as welfare payments) had no impact on private spending. Rather, taxes led to inefficiencies by distorting prices, while direct government spending on roads and the like was generally modeled as having no value. The implicit message was that government should be limited, in line with the prevailing intellectual climate after the Thatcher/Reagan revolution in the early 1980s. To take another example, the representative household and firm structure that was predominant in DSGE models was blind to issues of inequality. Since all households acted in the same manner, there was no role for welfare payments to the poor to have a different impact than tax cuts to the rich.

The crisis has led to an acknowledgement of the importance of financial risks, but policymakers are still struggling to put these risks into an effective overarching framework. It is striking, for example, that the DSGE models currently used in policy analysis have most of the blind spots of their predecessors. To be fair, some more complex versions of these models have made progress in incorporating financial markets.19 However, even in these cases it is not clear that the true essence of financial markets has been captured. By their nature, these models generally involve smooth transitions—the central bank raises rates and the rest of the economy responds gradually to the new incentives. By contrast, financial markets are unpredictable and volatile. This is because the traders who dominate transactions are always looking for new ways to see the future that will give them an edge over the competition. This forward-looking focus leaves financial markets in a constant state of flux.

A parallel with physics may be useful here. For almost a century physics has been dominated by two theories that have yet to be reconciled. On the large scale, the world is characterized by Einstein’s theory of general relativity in which changes in one variable create smooth transitions elsewhere. At the micro scale, the world is characterized by quantum mechanics in which particles jump from one state to another in a jerky fashion as they gain or lose quanta. Both theories are excellent descriptions of their particular areas, but attempts to unify them have fallen foul of fundamental differences in their structure.20

In many ways, standard macroeconomics, with its predictable and smooth changes in behavior, can be seen as similar to relativity while financial markets, which can change abruptly from calm to panic, are more like quantum mechanics. Just as physics has learned to live with these two views of the world, it may be helpful for economists and policymakers to do the same and accept that there are insights from conventional macroeconomics, and that there are different but equally valid insights from the analysis of financial markets, so that a balanced policy requires giving both aspects due attention.

This has obvious implications for the structure of policymaking. For example, central bank committees should be populated by a mixture of macroeconomists and those with financial expertise, while the chair should remain neutrally placed between the two camps and assess the merits of both points of view. It also implies a much greater focus on testing how robust the economic system is to large, unexpected financial shocks and the potential role of monetary, fiscal, financial, and structural policies in limiting vulnerabilities and responding to unanticipated events. The adage “hope for the best but plan for the worst” comes to mind.

Macroeconomic theory and practice have come a long way since the early 1980s. Most of these changes have been beneficial—for example, the lower volatility of output seen during the great moderation appears to be continuing after the crisis, suggesting that better monetary policy has indeed provided lasting benefits to the economy. But there is a very real issue as to how the North Atlantic crisis could have come as such a surprise to the vast majority of observers and policymakers—the rare exceptions being those who saw financial imbalances as a clear and present danger either because of their background in emerging markets (such as Professor Nouriel Roubini of New York University) or their belief that markets were often irrational and needed to be closely supervised (such as Bill White of the Bank for International Settlements, Professor Robert Shiller of Yale, and IMF Economic Counsellor Raghuram Rajan).

The inadequate intellectual apparatus was not simply unfortunate in the sense that it allowed the crisis to build. It also meant that policymakers needed to respond hurriedly to unexpected challenges within badly designed structures. This led to policy missteps such as the bankruptcy of Lehman Brothers and the premature tightening of fiscal policies in the Euro area. Most worrying of all, many of the implicit beliefs contained in the great moderation, efficient markets hypothesis, and benign neglect remain important components of conventional macroeconomic models and thinking. As discussed in a later chapter, a more radical overhaul of macroeconomics is overdue that accepts the fundamental role of policy cooperation, uncertainty, rules of thumb, and inclusiveness.
