The Use and Abuse of Taylor Rules: How Precisely Can We Estimate Them?

Contributor Notes

Author(s) E-Mail Address: acarare@imf.org and rtchaidze@imf.org

Abstract

This paper draws attention to inconsistencies in estimating simple monetary policy rules and their implications for policy advice. We simulate a macroeconomic model with a backward reaction function similar to Taylor (1993). We estimate different versions of a policy rule, using these simulated data. Under certain circumstances, estimations document an illusionary presence of a lagged interest rate, or of forward-looking behavior. Our results are consistent with the fact that several authors found very different versions of monetary policy rules, all fitting the U.S. data well. We also survey the literature, providing a list of issues complicating practical use of Taylor rules.

I. Introduction

In recent years, evaluation of simple policy rules has become one of the most common exercises in the economic literature, especially since the publication of John Taylor’s paper in 1993.2 Taylor demonstrated that a simple reaction function, later known as the Taylor rule, with a policy instrument (a short-term interest rate) responding to movements in fundamental variables (inflation and output gap), follows closely the observed path of the U.S. federal funds rate in the late 1980s and early 1990s.

This literature on simple policy rules, which is especially vast for the United States, has yet to settle on empirical benchmark rules that could be used for policy recommendations. As of now, different authors have found different monetary policy rules that all fit the data well but correspond to different interpretations of the same underlying monetary policy decision-making process, as illustrated in Table 1 and detailed in the Appendix.

Table 1. Short List of Proposed Rules for the U.S. Data

[table not reproduced]

These rules describe the same country (the United States) and the same period (the late 1980s and the 1990s), and are supported by good data fit, figures, and quotes from speeches by policymakers that explain the U.S. monetary policy setting as these rules suggest. Nevertheless, they differ markedly from one another in terms of functional form (use of output or unemployment gaps, inclusion of a lagged interest rate and/or unemployment growth), the timing of the fundamentals (contemporaneous or expected), and the values of the response coefficients.

We believe that this happens because current methods do not allow researchers to distinguish properly between these proposed rules and, hence, may lead them to wrong conclusions about the nature of monetary policy.

The contribution of this paper to the literature is twofold. First, we survey the literature, with a focus not only on how these rules are evaluated and used for policy recommendations, but more important, on how these rules are abused. Second, using Monte Carlo simulations, we demonstrate how estimating the same data produces different policy rules, as in Table 1.

By “abuses of policy rules” we mean giving policy advice or making projections based on benchmark rules that were either selected for the wrong reasons or incorrectly estimated. We provide a list of theoretical and empirical problems documented in the literature, emphasizing the practical difficulties one would experience in using Taylor rules. In this regard, our paper could be thought of as a counterpart to Svensson’s 2003 paper, pointedly titled “What Is Wrong with Taylor Rules?”

In order to demonstrate how estimating the same data produces different policy rules, we simulate inflation, output gap, and short-term interest rate series, using a simple macroeconomic model in which the short-term interest rate depends only on lagged values of the output gap and inflation. The data obtained are used to estimate alternative policy rules commonly found in the literature. We do find a forward-looking rule, i.e., one driven by expectations of the fundamentals rather than by their lags, as well as a rule with interest rate smoothing, i.e., a rule that includes the lagged interest rate among the explanatory variables. We also find rules that respond to other variables, such as output gap growth or the inflation differential.

Evaluating monetary policy reaction functions is very much like searching for a black cat in a dark room, without even knowing for sure whether the cat is there. In our Monte Carlo simulation exercise we look for a black cat in a brightly lit room, knowing that it is there, and yet we cannot distinguish it from a dog or an elephant. Our findings are consistent with Orphanides (2001), who demonstrates that changing the horizon of the fundamentals in the rule may change the estimated coefficients, and with Rudebusch (2002), who argues that a rule with a serially correlated policy shock is observationally equivalent to a rule with no policy shock but with interest rate smoothing.

Our results have significant implications, as estimated rules often form a basis for policy advice. A policymaker incorrectly advised to smooth the interest rate path may end up accommodating an inflationary shock instead of fighting it. Private agents who incorrectly believe a certain type of Taylor rule to be the guide for monetary policy conduct will misjudge future interest rates and make incorrect investment decisions.

Overall, rather than criticizing the simple policy rules literature, we aim to point out a range of issues that should be taken into account if Taylor rules are to be used as a basis for policy advice. As Svensson (2003, p. 429) notes, “no central bank has so far made a commitment to a simple instrument rule like the Taylor rule or variants thereof. Neither has any central bank announced a particular instrument rule as a guideline.” Nevertheless, a vast industry of academics, journalists, advisors, consultants, and even practitioners of an academic nature argues about central bank behavior using Taylor rules as a basis.3 We suggest that many of the sophisticated rules found in the literature may be statistical misrepresentations of policy rules that in reality are less sophisticated but more appealing in practical terms.

The paper is organized as follows. Section II surveys the existing literature, describing the Taylor rule, its modifications, and its uses as well as its abuses. An eager reader, however, could jump straight to Section III, which describes the empirical exercise: the simulation procedure and the results. Section IV concludes.

II. Taylor Rules: A Literature Survey

A. The Taylor Rule and Its Modifications

The best-known simple instrument rule is the Taylor rule, where the instrument – the nominal short-term interest rate – responds only to inflation and to the output gap. Taylor (1993) suggested this rule as an explanation of the monetary policy setting during the early years of Alan Greenspan’s chairmanship of the Board of Governors of the U.S. Federal Reserve System, hereafter “the Fed” (1987–92). Since the rule described a complicated process in very simple terms and fitted the data very well, it quickly became very popular. We start by describing the original Taylor rule (1993) and then present the modifications it has since undergone.

The Taylor rule (1993) is defined as

$$i_t = r^* + \pi_t + C_\pi(\pi_t - \pi^*) + C_y y_t = C + (1 + C_\pi)\pi_t + C_y y_t \qquad (1)$$

where $i_t$ is the short-term nominal interest rate in period $t$; $r^*$ is the real interest rate target; $\pi_t - \pi^*$ is the “inflation gap,” the difference between actual inflation $\pi_t$ and the inflation target $\pi^*$; $y_t = \log Y_t - \log Y_t^*$ is the output gap, where $Y_t$ is real GDP and $Y_t^*$ is potential output;4 and the coefficients $C_\pi$ and $C_y$ are positive. In the original Taylor (1993) formulation, $C_\pi$ and $C_y$ were both 0.5, the inflation and real interest rate targets were 2 percent each, and hence the constant $C$ was equal to 1.
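As a purely illustrative calculation (the numerical values here are ours, chosen for convenience): with the original coefficients, $C = r^* - C_\pi \pi^* = 2 - 0.5 \times 2 = 1$, so if inflation is running at 3 percent and the output gap is 1 percent, rule (1) prescribes

$$i_t = 1 + 1.5 \times 3 + 0.5 \times 1 = 6 \text{ percent},$$

that is, a real rate of 3 percent, one percentage point above the 2 percent target because both the inflation gap and the output gap are positive.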

The original Taylor rule has undergone various modifications as researchers have tried to make it more realistic or appropriate. In this paper we will limit our presentation to those modifications suited for closed economies and rules not based on asset prices,5 since these are the ones most commonly used. The discussion and conclusions of the paper, however, are expected to apply to all types of simple monetary policy rules. We document six of these modifications, as well as basic theoretical explanations for the modifications.

One modification to the original rule has been to incorporate forward-looking behavior in order to counteract the seeming shortsightedness of policymakers, making the short-term interest rate a function of central bank expectations of output gap and inflation rather than their contemporaneous values.6

An alternative modification has been to introduce lags of inflation and output gap. It has been pointed out in the literature that because it is not possible to know the actual output gap and inflation at the time of setting the interest rate, using lags would make the timing more realistic (McCallum, 1999a).7

Interest rate-smoothing behavior (including a lagged short-term interest rate among the fundamentals) is the single most popular modification of the Taylor rule. Clarida, Gali, and Gertler (1999) note that although the necessity of including an interest rate-smoothing term has not yet been proven theoretically, it seems rather intuitive for several reasons.8

As simple as it is, the Taylor rule cannot possibly take into account all the factors affecting the economy. Policymakers are known to react not only to movements in the output gap and inflation, but also to movements in the exchange rate, the stock market, political developments, and so on. One way to capture this is to introduce a new variable, a so-called policy shock, reflecting the judgmental element of the policymaking process.

Some authors, such as Taylor (1999) and Orphanides and Williams (2003), suggest using the unemployment gap instead of the output gap to improve the fit of the data. This modification reflects Okun’s law (1962), which links the output gap and the unemployment gap. This type of rule tends to perform quite well in terms of stabilizing economic fluctuations, at least when the natural rates of interest and unemployment are accurately measured.9

Finally, it has been suggested that the rule respond to rates of growth of unemployment, or of the output gap, to account for measurement errors in the real-time estimates of the natural rate of unemployment and/or potential output (McCallum, 1999a, and Orphanides and Williams, 2003).
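To make the functional forms concrete, the sketch below writes out the basic rule and three of the modifications as simple functions. This is our illustration only, with placeholder coefficient values; it is not a specification taken from any of the papers cited.

```python
# Illustrative sketch of common Taylor-rule variants (coefficients are placeholders).

def taylor_basic(pi, y, c=1.0, c_pi=0.5, c_y=0.5):
    """Original Taylor (1993) form: respond to current inflation and the output gap."""
    return c + (1 + c_pi) * pi + c_y * y

def taylor_lagged(pi_lag, y_lag, c=1.0, c_pi=0.5, c_y=0.5):
    """Lag-based form: respond to the latest observed fundamentals."""
    return c + (1 + c_pi) * pi_lag + c_y * y_lag

def taylor_forward(pi_expected, y_expected, c=1.0, c_pi=0.5, c_y=0.5):
    """Forward-looking form: respond to expected fundamentals."""
    return c + (1 + c_pi) * pi_expected + c_y * y_expected

def taylor_smoothing(i_lag, pi, y, rho=0.7, c=1.0, c_pi=0.5, c_y=0.5):
    """Interest-rate-smoothing form: partial adjustment toward the Taylor target."""
    target = c + (1 + c_pi) * pi + c_y * y
    return rho * i_lag + (1 - rho) * target
```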

B. Uses and Abuses

Taylor rules have been widely used in theoretical and empirical papers, with the latter examining the rules both from descriptive and prescriptive points of view.

The focus of research in theoretical papers has been on whether simple rules solve the time inconsistency bias (McCallum, 1999a); whether they are optimal (McCallum, 1999a; Svensson, 2003; Woodford, 2001; etc.);10 and on how they perform in different macroeconomic models (Taylor, 1999; Isard, Laxton and Eliasson, 1999).11

As for the empirical papers, those with a descriptive point of view include analysis of various specifications and estimations of the Taylor rule (Clarida, Gali and Gertler, 1998; Kozicki, 1999; Judd and Rudebusch 1998; etc.). These studies examine particular historical episodes and address two questions: to what extent are simple instrument rules good empirical descriptions of central bank behavior; and what is the average response of the policy instrument to movements in various fundamentals?12

Empirical papers with a prescriptive point of view suggest what the interest rate should be (McCallum, 1999a and 1999b; Bryant, Hooper, and Mann, 1993; Taylor, 1999), or how it should be set. Commonly, these suggestions are based on rules that are either the outcome of theoretical papers or the result of estimating “good/successful” periods of monetary policy. The potential abuses in prescriptive papers are mainly related to the choice of the benchmark rules, whether based on theory or empirical evidence. In the two following subsections, we document and provide a brief description of the problems that might arise when choosing such rules.

C. Theoretical Choice of a Benchmark Rule

Policy advice based on rules from theoretical models comes from rules simulated or derived in a model or class of models considered representative of the economy. There are potential problems with this approach as documented in the literature and surveyed below.

  • Svensson (2003) and Woodford (2001) warn that commitment to simple rules may not always be optimal, as a simple policy rule may be a solution too simple for a task as complex as that of a central bank.13

  • Simple policy rules may not be robust across different models. Due to uncertainty about the true model of the economy and/or potential output levels, the most recent theoretical efforts have concentrated on suggesting a set of robust simple rules that could be used as a basis for policy advice, as in Giannoni and Woodford (2003a and 2003b), Svensson and Woodford (2004), Walsh (2004), etc. Isard, Laxton and Eliasson (1999) show that several classes of Taylor type rules perform very poorly in moderately nonlinear models.

  • Several recent papers show that, when the central bank follows Taylor type rules in sticky price models of the type that fit the U.S. data well, the price level may not be determined, and there could be several paths for the instrument and multiple equilibria, all coming from the same model with the same rule (Benhabib, Schmitt-Grohe, and Uribe, 2001; Carlstrom and Fuerst, 2001; etc.).

  • How policymakers should respond to the presence of measurement errors is a question with no firm answer yet. While some researchers advocate a more cautious approach, with smaller response coefficients (Orphanides, 2001), others advocate a more aggressive approach (i.e., with larger coefficients) to policymaking (Onatski and Stock, 2002). Finally, some studies have argued in favor of “certainty equivalence,” which implies no changes in policymakers’ behavior and response coefficients (Swanson, 2004).

  • Most theoretical papers talk about inflation in rather generic terms. Thus, when it comes to policy prescriptions, it is not clear what particular measure should be used − Consumer Price Index (CPI), core CPI, CPI less food and energy, GDP deflator, etc. Even after a particular index is chosen, there are more choices to make: annual or quarterly; if annual, is it the average of quarterly numbers or a growth rate over the four quarters? Is the growth in CPI calculated as a log difference or a ratio? Even though the differences between these various calculations could be minimal in a case of low and stable inflation, one should be aware of these caveats. Similar issues arise when it comes to measuring the output gap.

  • Any formula-based recommendation is bound to ignore the judgmental element, which reflects policymakers’ account of other developments not reflected in the output gap or inflation behavior.

D. Empirical Choice of a Benchmark Rule

Policy advice based on rules from empirical papers comes, usually, from estimating a period that is considered “good” or “successful” in combating inflation, promoting output growth, or both. Like the theoretical approach, this empirical approach brings with it several problems.

  • Rogoff (2003) notes that it is not clear how much credit policymakers deserve for the exceptionally good performance of many economies in the last 15 years or so. He notes that the achievement of price stability globally may be due not only to good policymaking but also to the favorable macroeconomic environment. The main cause that he identifies is globalization, which through increased competition has put a downward pressure on prices.

  • Stock and Watson (2003) also argue that improvements in the conduct of monetary policy after 1979 are only partially responsible for reducing the variance of output during business cycle fluctuations. This could have been caused by “improved ability of individuals and firms to smooth shocks because of innovation and deregulation in financial markets” (Stock and Watson, 2003, p. 46). They also note that during this period, macroeconomic shocks were “unusually quiescent” (p. 46).

  • Even if one finds a Taylor rule empirically, it does not imply that the rule is the basis of monetary policy decision making. The empirical relationship found may be a reflection of something else – a long-term relationship among the nominal interest rate, inflation, and the output gap,14 or a completely different kind of monetary policy.15

  • Also, as Svensson and Woodford (2004, p. 24) note, “Any policy rule implies a ‘reaction function,’ that specifies the central bank’s instrument as a function of predetermined endogenous or exogenous variables observable to the central bank at the time that it sets the instrument.” They warn that this “implied reaction function” should not, in general, be confused with the policy rule itself (see also footnote 12).

  • When making policy prescriptions, can one really impose the implied response coefficients and targets of one economic or policy regime on another, without accounting for changes in the structure of the economy? Greenspan in particular has warned about this abuse on several occasions, including in his January 2004 speech to the American Economic Association (AEA) meetings: “Such rules suffer from much of the same fixed-coefficient difficulties we have with our large-scale models.”

  • Even though there may be no changes in the economy, there may be changes in the attitude of policymakers. Such changes could be reflected in shifting targets for the real interest rate or inflation (which, in terms of Taylor rules, translate into a different constant), or in the weights that policymakers assign to inflation variance and output gap variance (which translate into different inflation and output gap response coefficients).

  • Coefficients might not be estimated with a very high degree of precision, and standard errors could be quite large. Once the size of the confidence intervals is taken into account, the policy recommendations on how the instrument should be set could become blurred.

  • While coefficients may be estimated for very particular measures of inflation and/or the output gap (for example, CPI less food and energy and Hodrick-Prescott (HP) detrended log output), it is the values only that get “remembered.” When policy recommendations are made, these coefficients may be coupled with different measures (for example, GDP deflator and linearly detrended log output, which in general results in larger values for the output gap than the HP detrending), without taking into account that the coefficients would have been different had these alternative measures been used for the estimation.

  • As mentioned above, any formula-based recommendation is bound to ignore the judgmental element, which is an important factor behind policy decisions.

  • Finally, can we actually estimate the rules properly? As the next section argues, the answer is “not really,” at least not with the methods commonly used today.

E. Estimating Taylor Rules

The rules are usually estimated using either ordinary least squares (OLS), if they are backward looking (see, for example, Orphanides, 2001), or instrumental variables and the generalized method of moments (GMM), if they are forward looking (see, for example, Clarida, Gali, and Gertler, 2000). It is not obvious that the following econometric problems are addressed properly, or always taken into consideration:

  • The most obvious econometric question is how to deal with the high serial correlation of the variables. The common recipe is to use Newey-West standard errors, which are robust to heteroskedasticity and serial correlation, and instrumental variables to account for the forward-looking terms. What is worth noting, however, is that while papers estimating Taylor rules commonly treat interest rates as stationary series, most term structure and money demand papers treat interest rates of various maturities as I(1) series,16 which would call for different econometric techniques.

  • The estimates are not very robust to differences in assumptions or estimation techniques. Jondeau, Le Bihan, and Galles (2004) show that, over the baseline period 1979–2000, alternative estimates of the Fed’s reaction function using several GMM estimators17 and a maximum likelihood estimator yield substantially different parameter estimates. Estimation results may also not be robust with respect to sample periods, to different sets of instrumental variables, or to the order of lags (when lags of variables are used as instruments).

  • In addition, estimation of Taylor rules very often requires inputs from separate estimation exercises, such as an evaluation of the output gap or the NAIRU. These procedures are subject to the same kinds of problems, and, hence, the uncertainty around the coefficients compounds.

  • As in other empirical papers, making policy recommendations based on rules estimated from a short sample is not advised. This caveat applies especially to countries that have short periods of stable data.

  • The alternative use of long samples often ignores the possibility of changes in the parameters of the rule—response coefficients or real interest rate or inflation targets. For example, one should distinguish between the monetary regime of the Fed during Paul Volcker’s chairmanship and that during Greenspan’s chairmanship. While in both periods the Fed was committed to price stability, it is doubtful that the inflation targets were the same.18 A former Fed Governor, Janet Yellen (Federal Reserve Board, 1995, pp. 43–4), confirms this implicitly when she says that the Taylor rule seems to be a good description of the Fed’s behavior since 1986, but not of its behavior from 1979, when Volcker was appointed chairman, to 1986.19

  • A rather important but still commonly overlooked caveat has been given by Orphanides (2001, p. 964). He finds that real-time policy recommendations differ considerably from those obtained with ex post revised data, and that estimated policy reaction functions based on such data provide misleading descriptions of historical policy and obscure the behavior suggested by information available to the Fed in real time.20

  • The illusionary effects of a stronger or weaker response to movements in certain fundamentals that arise from misspecification of their horizon are documented by Orphanides (2001). He shows that a policy reaction function that is forward looking but includes forecasts of less than four quarters ahead yields higher estimates for the lag of the federal funds rate and for the output gap, but a lower estimate for inflation, compared with the specification with forecasts four quarters ahead.

  • Another illusionary effect, which is caused by monetary policy inertia, is documented by Rudebusch (2002). He argues that a policy rule with interest rate smoothing is difficult to distinguish from a rule with serially correlated policy shocks.21 While in the former persistent deviations from the output gap and inflation response occur because policymakers are deliberately slow to react, in the latter these deviations reflect policymakers’ response to other persistent influences. Rudebusch proposes to distinguish between the two by analyzing the interest rate term structure.

III. Simulations

The empirical part of our paper illustrates, via Monte Carlo simulations, how illusions involving certain types of Taylor rules may arise. We show how easy it is to confuse a particular policy setting with one that is theoretically very different. Our simulations demonstrate a substantial degree of statistical illusion, yielding an impression of a monetary policy more sophisticated than it is assumed to be.

A. The Model

The model used in our simulations is described in Rudebusch and Svensson (1999). It consists of two equations: a Phillips curve, where quarterly inflation is determined by its own four lags and a lag of the output gap; and an IS curve, where the quarterly output gap is determined by its own two lags and the lagged annual real interest rate. Here we use the Rudebusch (2001) version of the model:

$$\pi_t = 0.70\,\pi_{t-1} - 0.10\,\pi_{t-2} + 0.28\,\pi_{t-3} + 0.12\,\pi_{t-4} + 0.14\, y_{t-1} + \varepsilon_t \qquad (2)$$
$$y_t = 1.16\, y_{t-1} - 0.25\, y_{t-2} - 0.10\,(\bar{i}_{t-1} - \bar{\pi}_{t-1}) + \eta_t \qquad (3)$$

where $\pi$ denotes the inflation rate, $y$ the output gap (deviation of output from potential), and $i$ the short-term nominal interest rate. All quarterly variables are annualized, a bar denotes the annual variable (the average of the quarterly data), and $\sigma_\varepsilon = 1.007$ and $\sigma_\eta = 0.822$.

This model is completely backward looking, implicitly assuming adaptive expectations, and has become something of a standard tool in policy analysis (see Romer, 2001). Its use in the literature and in this paper is motivated by the following reasons, as stated in Rudebusch (2001, p. 205): “[…] although [the model’s] simple structure facilitates the production of benchmark results, this model also appears to roughly capture the views about the dynamics of the [U.S.] economy held by some monetary policymakers.” Also, “the model can be interpreted as a restricted VAR, and appears to be stable over various sub-samples” (p. 205). Last but not least, the model is a standard used in the literature to analyze the performance of Taylor rules because, as Taylor (1999) notes, although it has nonrational expectations, it is empirically more accurate. The literature suggests that alternative forward-looking frameworks do not fit the observed data well unless some agents are to some degree backward looking (e.g., Ball, 2000; and Roberts, 1998).

The model assumes that a policymaker can affect inflation only with a two-period lag, as monetary policy affects the output gap with a one-period lag, and the output gap in turn affects inflation with a one-period lag. The lagged inflation coefficients in the Phillips curve are restricted so that their sum equals 1; the results, however, are very similar even without imposing this restriction. Finally, note that the model implies a steady state where y* = 0 and i* = π*.

To close the model, we assume that the policymaker sets the quarterly interest rate according to a Taylor rule as follows:

$$i_t = (1 + C_\pi)\,\bar{\pi}_{t-1} + C_y\, y_{t-1} = 1.5\,\bar{\pi}_{t-1} + 0.5\, y_{t-1} \qquad (4)$$

The rule is very similar to that proposed in Taylor (1993). The only difference is that the interest rate responds to the quarterly lags of the fundamentals, rather than to their contemporaneous values, as the lags reflect the latest information that a policymaker can actually observe (McCallum, 1999a). Also, the rule is consistent with the zero steady state (y* = i* = π* = 0) implied by the model.

This rule can be characterized as “naïve to the fourth degree.” First, the monetary policy setting is explained by only two fundamentals – the output gap and inflation. Second, the monetary policy setting is backward looking, with the short-term interest rate reacting only to the latest observed values of the fundamentals. Third, the rule assumes that a policymaker ignores possible data measurement errors. And finally, the rule is completely mechanical, with no judgmental element. Nevertheless, we use this type of rule because it makes the results very tractable. The rule also makes sense intuitively.22
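For concreteness, a minimal sketch (in Python) of how equations (2)–(4) can be simulated is shown below. This is our illustration of the setup, not the authors’ code; details such as the random seed are assumptions, while the first four quarters are held at the zero steady state as described in the next section.

```python
import numpy as np

def simulate(T=1000, seed=0, sigma_eps=1.007, sigma_eta=0.822):
    """Simulate the model (2)-(3) closed with the lag-based Taylor rule (4)."""
    rng = np.random.default_rng(seed)
    pi = np.zeros(T)          # annualized quarterly inflation
    y = np.zeros(T)           # output gap
    i = np.zeros(T)           # short-term nominal interest rate
    eps = rng.normal(0.0, sigma_eps, T)
    eta = rng.normal(0.0, sigma_eta, T)

    for t in range(4, T):
        pi_bar_lag = pi[t-4:t].mean()   # annual inflation over quarters t-4..t-1
        i_bar_lag = i[t-4:t].mean()     # annual interest rate over quarters t-4..t-1
        # Phillips curve (2)
        pi[t] = (0.70*pi[t-1] - 0.10*pi[t-2] + 0.28*pi[t-3]
                 + 0.12*pi[t-4] + 0.14*y[t-1] + eps[t])
        # IS curve (3)
        y[t] = 1.16*y[t-1] - 0.25*y[t-2] - 0.10*(i_bar_lag - pi_bar_lag) + eta[t]
        # Lag-based Taylor rule (4)
        i[t] = 1.5*pi_bar_lag + 0.5*y[t-1]

    return pi, y, i
```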

To extend our analysis, we also conduct separate simulations with a Taylor rule augmented by a serially correlated policy shock:

$$i_t = (1 + C_\pi)\,\bar{\pi}_{t-1} + C_y\, y_{t-1} + \varepsilon_t \qquad (5)$$
$$\varepsilon_t = \lambda\,\varepsilon_{t-1} + \zeta_t; \qquad \sigma_\zeta = 0.5 \qquad (6)$$
$$i_t = 1.5\,\bar{\pi}_{t-1} + 0.5\, y_{t-1} + 0.92\,\varepsilon_{t-1} + \zeta_t \qquad (7)$$

Serially correlated policy shocks may arise for various reasons. They could reflect serially correlated disturbances beyond those captured by movements of lagged inflation and/or the output gap, such as credit crunches, oil and commodity price shocks, exchange rate movements, and stock market developments.

These shocks could also represent a measurement error, the difference between what the policymakers thought about the state of the economy in real time and what researchers know ex post. Such errors will be serially correlated if, for example, errors arise because of changes in productivity trends. In the early 1970s, productivity growth slowed down − a development that was not immediately understood and which led to differences between the actual growth of potential output and what was perceived to be such. A similar development occurred in the mid–1990s, when productivity growth increased, leading potential output to grow faster than expected. Both these events helped to cause the emergence of what ex post could look like a serially correlated policy shock (see also footnote 9).
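Extending the sketch above, the shock-augmented rule (5)–(7) only requires adding an AR(1) disturbance to the policy equation. Again, this is an illustrative sketch rather than the authors’ code; the persistence (0.92) and σ_ζ = 0.5 are taken from (6)–(7).

```python
import numpy as np

def simulate_with_policy_shock(T=1000, seed=0, lam=0.92, sigma_zeta=0.5):
    """Same model as before, but the rule includes a serially correlated policy shock."""
    rng = np.random.default_rng(seed)
    pi, y, i = np.zeros(T), np.zeros(T), np.zeros(T)
    e = np.zeros(T)                                  # AR(1) policy shock, equation (6)
    eps = rng.normal(0.0, 1.007, T)
    eta = rng.normal(0.0, 0.822, T)
    zeta = rng.normal(0.0, sigma_zeta, T)

    for t in range(4, T):
        pi_bar_lag = pi[t-4:t].mean()
        i_bar_lag = i[t-4:t].mean()
        pi[t] = (0.70*pi[t-1] - 0.10*pi[t-2] + 0.28*pi[t-3]
                 + 0.12*pi[t-4] + 0.14*y[t-1] + eps[t])
        y[t] = 1.16*y[t-1] - 0.25*y[t-2] - 0.10*(i_bar_lag - pi_bar_lag) + eta[t]
        e[t] = lam*e[t-1] + zeta[t]                  # equation (6)
        i[t] = 1.5*pi_bar_lag + 0.5*y[t-1] + e[t]    # equations (5) and (7)
    return pi, y, i
```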

B. Procedure

We simulate 1,000 periods and use the last 970 for estimation. Simulations are run 500 times, and each scenario assumes that, for the first four periods, the economy is at the zero steady state (y* = i* = π* = 0).

We use the data obtained in order to see whether our estimations allow us to identify the policy rule as it is, or whether they document an illusionary presence of more sophisticated versions of the Taylor rule – which are forecast based, have embedded interest rate smoothing, or respond to the growth rates of the fundamentals.

For each of the 500 simulations, we estimate different rules and then report the averages of the estimated coefficients, standard errors, adjusted R-squared, Durbin-Watson (DW) statistics, and sums of squared residuals (SSR) across the 500 scenarios.23

The rules are estimated using GMM robust to serial correlation,24 with the usual suspects (lagged values of the interest rate, inflation, and the output gap) as instruments,25 as in Clarida, Gali, and Gertler (2000). We follow standard practice and estimate the interest rate rule as a single equation.

As mentioned above, we simulate two different cases – one where interest rates are set in the absence of any policy shocks and the other where interest rates are affected by a serially correlated policy shock. In the second case, we estimate the rules by first excluding an autoregressive component and then including it.

In each of the cases, we first estimate the true rule, i.e., a rule based on lagged values of inflation and the output gap. Then, we estimate rules with a misspecified horizon, where the interest rate depends on expected inflation and the output gap. Next, we include the lagged interest rate among the regressors, and, finally, we add variables such as output gap growth and the inflation differential.
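As a simplified stand-in for the estimations just described (which use GMM with lagged instruments), the sketch below fits a forward-looking specification by least squares with Newey-West (HAC) standard errors, using realized leads in place of expectations. It reuses the simulate function sketched earlier, and the argument k plays the role of the horizon in the tables that follow; this is our illustration, not the paper’s actual estimation code.

```python
import numpy as np
import statsmodels.api as sm

def estimate_rule(pi, y, i, k=4, burn=30):
    """Estimate i_t = a*pibar_{t+k} + b*y_{t+k}, a rough stand-in for the GMM step.

    pibar_t is annual inflation (average of quarters t-3..t). Realized leads proxy
    for expectations; standard errors are Newey-West (HAC).
    """
    T = len(i)
    pibar = np.array([pi[max(t-3, 0):t+1].mean() for t in range(T)])
    rows = range(burn, T - max(k, 0) - 1)
    X = np.column_stack([pibar[[t + k for t in rows]], y[[t + k for t in rows]]])
    dep = i[list(rows)]
    return sm.OLS(dep, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})

# Example: estimate at horizon k = -1 (the true timing) and k = 4 (one year ahead).
pi, y, i = simulate()
print(estimate_rule(pi, y, i, k=-1).params)   # close to (1.5, 0.5)
print(estimate_rule(pi, y, i, k=4).params)    # a plausible-looking "forward-looking" rule
```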

C. Results: No Policy Shock Scenario

When evaluating the rules with correctly specified timing of the fundamentals, we correctly identify the coefficients of the rule, assigning zero values to every irrelevant variable included, such as the lagged interest rate, the inflation differential, and output gap growth. We obtain an adjusted R-squared of 1.00, an SSR of 0.0, and a DW statistic of 2.00.

However, when rules with incorrectly specified timing of the fundamentals are evaluated, we obtain incorrect but plausible-looking assessments. Table 2 shows the results of estimating a rule with a simple functional form when the true rule is based on lags.

Table 2. Simple Functional Form (True Rule Based on Lagged Fundamentals)

[table not reproduced]

The results in Table 2 should be read as follows. Each row corresponds to the horizon k of the fundamentals. The true rule has a horizon of -1, since the rule used in the simulations had the interest rate as a function of lagged inflation and the lagged output gap. When the horizon is 4, instead of estimating a rule based on lagged output and inflation, we estimate a rule based on expected fundamentals four quarters ahead.

The general form for the rule is as follows, where k is the indicated horizon in the table:

$$i_t = (1 + C_\pi)\, E_t\bar{\pi}_{t+k} + C_y\, E_t y_{t+k}$$

The averages of the coefficients are reported in the first row for each horizon, and the standard errors in the row below; the remaining columns report the main statistics for the specification.

All the rules produce a very high adjusted R-squared, while the DW statistic deteriorates sharply as the horizon of the variables increases. Now notice that the rule reported in the last row of Table 2 is a description of a forward-looking (one-year-ahead) policy, where a policymaker eyes inflation, ignores movements in the expected output gap (the output gap coefficient is statistically insignificant), and takes into account other events (collected in a judgmental policy shock ζ). This rule, therefore, represents a sort of inflation-targeting rule:

$$i_t = \underset{(0.04)}{1.49}\, E_t\bar{\pi}_{t+4} - \underset{(0.07)}{0.10}\, E_t y_{t+4} + \zeta_t; \qquad \bar{R}^2 = 0.87 \qquad (8)$$

We will use this example to illustrate how this “reduced-form” effect works. Note that forecast inflation and the forecast output gap can be represented as sums of the respective leads and forecast error terms. These leads can, in turn, be approximated to differing degrees by the lags – inflation better, the output gap worse. Substituting these into the rule allows an almost precise recovery of the true rule:

$$\bar{\pi}_{t+4} = 0.96\,\bar{\pi}_{t-1} + 0.33\, y_{t-1} + \xi_1; \qquad \bar{R}^2 = 0.89$$
$$y_{t+4} = -0.17\,\bar{\pi}_{t-1} + 0.46\, y_{t-1} + \xi_2; \qquad \bar{R}^2 = 0.31$$
$$i_t = 1.49\, E_t\bar{\pi}_{t+4} - 0.10\, E_t y_{t+4} + \zeta_t
    = 1.49\,\bar{\pi}_{t+4} - 0.10\, y_{t+4} + \bigl(\zeta_t + 1.49\,(E_t\bar{\pi}_{t+4} - \bar{\pi}_{t+4}) + 0.10\,(y_{t+4} - E_t y_{t+4})\bigr)
    = 1.45\,\bar{\pi}_{t-1} + 0.45\, y_{t-1} + \tilde{\zeta}_t$$

where the last term represents the sum of the “original” policy error ζ, the forecast errors (i.e., the differences between the leads and their expectations), and the residuals ξ from approximating the leads of inflation and the output gap by their lags.26
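A quick arithmetic check of this substitution, using the rounded coefficients above (our illustration):

```python
# Combine the forward-looking rule (8) with the lag approximations of the leads.
a_pi, a_y = 1.49, -0.10          # coefficients on expected inflation and output gap in (8)
pi_on_lags = (0.96, 0.33)        # pibar_{t+4} ≈ 0.96*pibar_{t-1} + 0.33*y_{t-1}
y_on_lags = (-0.17, 0.46)        # y_{t+4}    ≈ -0.17*pibar_{t-1} + 0.46*y_{t-1}

coef_pi_lag = a_pi*pi_on_lags[0] + a_y*y_on_lags[0]   # ≈ 1.45, vs. the true 1.50
coef_y_lag = a_pi*pi_on_lags[1] + a_y*y_on_lags[1]    # ≈ 0.45, vs. the true 0.50
print(round(coef_pi_lag, 2), round(coef_y_lag, 2))
```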

Next, using the same data we generated earlier, we estimate a very popular version of the rule, in which the policymaker smoothes the interest rate path, having in mind the interest rate target prescribed by the original Taylor rule but avoiding big jumps in the value of the instrument. The estimated rule therefore has the following form:

$$i_t = C_i\, i_{t-1} + (1 - C_i)\bigl((1 + C_\pi)\, E_t\bar{\pi}_{t+k} + C_y\, E_t y_{t+k}\bigr)$$

Again, when the timing of the fundamentals is specified correctly, the rule is identified precisely, as it should be. However, once forecasts are used, there is an illusion of significant smoothing. Once more, the value of the inflation coefficient looks very reasonable, while the output gap coefficient declines, albeit staying positive. Thus, a not very careful researcher might claim the following policy setting, as in the last row of Table 3:

$$i_t = \underset{(0.04)}{0.53}\, i_{t-1} + 0.47\,\bigl(\underset{(0.05)}{1.64}\, E_t\bar{\pi}_{t+4} + \underset{(0.12)}{0.34}\, E_t y_{t+4}\bigr) + \zeta_t; \qquad \bar{R}^2 = 0.96 \qquad (9)$$
Table 3. Smoothing Functional Form (True Rule Based on Lagged Fundamentals)

[table not reproduced]

Such a setting fits well with our understanding of monetary policy: it is forward looking, is sufficiently active in responding to inflation and the output gap, moves the instrument cautiously, and has a judgmental element in it. A good fit would allow the production of some nice charts and some reasonable historical evidence.

As with the “inflation-targeting” rule (8), we illustrate the mechanics of the illusion by replacing the forecasts with approximations of the leads of inflation and the output gap by their lags, after which we again almost arrive at the true specification:

$$\bar{\pi}_{t+4} = 1.72\,\bar{\pi}_{t-1} + 0.51\, y_{t-1} - 0.51\, i_{t-1} + \xi_1; \qquad \bar{R}^2 = 0.90$$
$$y_{t+4} = 0.61\,\bar{\pi}_{t-1} + 0.65\, y_{t-1} - 0.52\, i_{t-1} + \xi_2; \qquad \bar{R}^2 = 0.33$$
$$i_t = 0.53\, i_{t-1} + 0.47\,\bigl(1.64\, E_t\bar{\pi}_{t+4} + 0.34\, E_t y_{t+4}\bigr) + \zeta_t
    = 0.05\, i_{t-1} + 1.42\,\bar{\pi}_{t-1} + 0.50\, y_{t-1} + \tilde{\zeta}_t.$$
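The same arithmetic applied to this case, using the coefficients as reported above, shows why the lagged interest rate nearly drops out (our illustration):

$$0.53 + 0.47\bigl(1.64\times(-0.51) + 0.34\times(-0.52)\bigr) \approx 0.05, \qquad
0.47\bigl(1.64\times 1.72 + 0.34\times 0.61\bigr) \approx 1.42, \qquad
0.47\bigl(1.64\times 0.51 + 0.34\times 0.65\bigr) \approx 0.50.$$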

In the literature, the coefficient on the lagged interest rate is usually estimated to be large, about 0.7–0.8 (see Rudebusch, 2002), and quite a few papers have been written trying to explain this overcautiousness on the part of central bankers. Our results suggest that such caution could be just a statistical illusion, caused by misspecification of the rule, in particular by incorrect specification of the timing of the fundamentals.

In addition to the rules in Tables 2 and 3, we could suggest others that fit our simulated data just as well. Similar to Table 1, we compile a list of rules that describe the same data but imply very different interpretations of the conduct of monetary policy. The first three rules below come from line 1 of Table 2 and the last lines of Tables 2 and 3. The remaining rules come from additional estimations.

$$i_t = 1.50\,\bar{\pi}_{t-1} + 0.50\, y_{t-1}; \qquad \bar{R}^2 = 1.00$$
$$i_t = \underset{(0.04)}{1.49}\, E_t\bar{\pi}_{t+4} - \underset{(0.07)}{0.10}\, E_t y_{t+4} + \zeta_t; \qquad \bar{R}^2 = 0.87$$
$$i_t = \underset{(0.04)}{0.53}\, i_{t-1} + 0.47\,\bigl(\underset{(0.05)}{1.64}\, E_t\bar{\pi}_{t+4} + \underset{(0.12)}{0.34}\, E_t y_{t+4}\bigr) + \zeta_t; \qquad \bar{R}^2 = 0.96$$
$$i_t = \underset{(0.01)}{1.47}\,\pi_t + \underset{(0.01)}{0.37}\, y_t - \underset{(0.06)}{0.50}\,(y_t - y_{t-1}) + \zeta_t; \qquad \bar{R}^2 = 0.99$$
$$i_t = \underset{(0.00)}{1.52}\,\pi_t + \underset{(0.01)}{0.54}\, y_t - \underset{(0.60)}{1.50}\,(E_t\pi_t - E_t\pi_{t-1}) + \zeta_t; \qquad \bar{R}^2 = 0.99$$
$$i_t = \underset{(0.02)}{1.57}\, E_t\pi_{t+2} + \underset{(0.03)}{0.43}\, E_t y_{t+2} - \underset{(0.13)}{2.14}\,(E_t\pi_{t+2} - E_t\pi_{t+1}) + \zeta_t; \qquad \bar{R}^2 = 0.94.$$

The three new rules also produce a good fit and have inflation coefficients close to the true value (within the 1.47–1.57 range), but assume that policymakers follow closely not only the levels, but also the growth rates of the fundamentals (expected or contemporaneous).

Most of these rules produce very low DW statistics. This signals positive serial correlation in the error terms, and a thorough econometric analysis would thus likely reject most of them. Nevertheless, it seems that many researchers in the field tend to report results consistent with their a priori beliefs rather than treating such results with due skepticism.

D. Results: Serially Correlated Policy Shock Scenario

What we documented in the previous subsection was a reduced-form effect similar to the one Orphanides (2001) discusses. The true rule depends on lagged inflation and the lagged output gap. The illusion arises because these lagged fundamentals, together with the lagged interest rate, can closely approximate the expected fundamentals.

In this subsection, we describe a different – “high-persistence” – mechanism, which leads to similar illusionary effects. We use data obtained by simulating the model with the Taylor rule that includes an autoregressive policy shock (equations (5)–(7) above).

First, we estimate the rules without including an autoregressive (AR) component and present the results in Table 4. They are very similar to those reported in Table 2. Misspecifying the horizon of the fundamentals still yields reasonable results, both in terms of fit and coefficients, though not in terms of DW statistics.

Table 4. Simple Functional Form, Omitted AR(1) Component (True Rule Based on Lagged Fundamentals and Serially Correlated Policy Shock)

[table not reproduced]

The results in Table 5, however, demonstrate that a researcher who does not realize that the policymaker is responding to serially correlated disturbances, and who estimates a rule based only on lagged inflation, the output gap, and the interest rate, will find evidence of interest rate smoothing. The illusion of smoothing arises already at the k = -1 horizon, driven by the serially correlated policy shock, as indicated by the low DW statistics.

Table 5. Smoothing Functional Form, Omitted AR(1) Component (True Rule Based on Lagged Fundamentals and Serially Correlated Policy Shock)

[table not reproduced]

Once the horizon of the fundamentals is misspecified, the coefficient on the lagged interest rate becomes even larger than in both the lagged specification of the rule and the results reported in Table 3; this happens because the reduced-form and high-persistence effects are now both at work.

The last specification is worth flagging as it is very similar to that reported in the literature, with a large coefficient on the lagged interest rate (as mentioned above, the suggested range is 0.7–0.8; see Rudebusch, 2002), coefficients on the inflation and the output gap similar to those originally suggested by Taylor, and a very good fit:

$$i_t = \underset{(0.06)}{0.67}\, i_{t-1} + 0.33\,\bigl(\underset{(0.06)}{1.67}\, E_t\bar{\pi}_{t+4} + \underset{(0.16)}{0.48}\, E_t y_{t+4}\bigr) + \zeta_t \qquad (10)$$

Our simulation exercise suggests that such results may be illusionary, as explained by the misspecification of the horizon of the fundamentals included in the rule and by the omission of the autoregressive policy shock, which could reflect a serially correlated measurement error.27

Inclusion of the AR(1) component allows a precise identification of the rule, as long as the horizon is specified correctly (see Table 6). Misspecification of the horizon significantly reduces coefficients, though adjusted R-squared and DW statistics remain high.

Table 6. Simple Functional Form, Included AR(1) Component (True Rule Based on Lagged Fundamentals and Serially Correlated Policy Shock)

[table not reproduced]

Inclusion of both the AR(1) component and the lagged interest rate produces results without any clear patterns and with rather large dispersions across simulated scenarios. The explanation would be a correlation between the implied interest rate differential and the AR(1) component. The only exception is, again, the case with a correctly specified horizon, which allows for exact identification of the rule.

To illustrate the complications that arise when one tries to distinguish between serial correlation of the policy error and interest rate smoothing, we conduct a test similar to that described in Rudebusch (2002). We estimate a nested equation that allows for testing two hypotheses: one of a serially correlated (SC) policy shock and the other of interest rate smoothing (SM):

$$i_t = \rho_1\, i_{t-1} + (1 + C_\pi)\,\bar{\pi}_{t+k} + C_y\, y_{t+k} - \rho_2\bigl((1 + C_\pi)\,\bar{\pi}_{t+k-1} + C_y\, y_{t+k-1}\bigr) + \omega_t \qquad (11)$$
$$H_1^{SC}:\ \rho_1 = \rho_2 = \lambda \qquad\qquad H_1^{SM}:\ \rho_2 = 0,\ \rho_1 > 0.$$

If the rule includes a smoothing component but not a serially correlated policy shock, then the estimation should yield a zero coefficient ρ2 on the “nest” variable. If the rule includes a serially correlated policy shock but not a smoothing component, then the nest variable equals the lagged interest rate net of the lagged policy shock; hence, ρ1 and ρ2 should be equal (and equal to the autoregressive parameter of the policy shock) in order to cancel the effect of the lagged interest rate.
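As an illustration of how such a test could be run on the simulated data, the sketch below fits the nested equation (11) by nonlinear least squares, using realized leads in place of expectations and reusing the shock-augmented simulation sketched earlier. Formal hypothesis tests (the p-values reported in Table 7) would require additional machinery and are omitted here; this is a sketch under those assumptions, not the paper’s procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def nested_residuals(theta, i, pibar, y, k, burn=30):
    """Residuals of the nested equation (11); theta = (rho1, rho2, c_pi, c_y)."""
    rho1, rho2, c_pi, c_y = theta
    ts = np.arange(burn, len(i) - max(k, 0) - 1)
    target = lambda s: (1 + c_pi)*pibar[s] + c_y*y[s]   # Taylor-rule target at date s
    return i[ts] - rho1*i[ts-1] - target(ts + k) + rho2*target(ts + k - 1)

# pi, y, i from the shock-augmented simulation; pibar is annual (four-quarter) inflation.
pi, y, i = simulate_with_policy_shock()
pibar = np.array([pi[max(t-3, 0):t+1].mean() for t in range(len(pi))])

# Under the serially-correlated-shock hypothesis rho1 and rho2 should be roughly equal
# (near lambda = 0.92); under the smoothing hypothesis rho2 should be near zero.
fit = least_squares(nested_residuals, x0=[0.5, 0.5, 0.5, 0.5], args=(i, pibar, y, -1))
print(fit.x)   # (rho1, rho2, c_pi, c_y)
```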

The idea is to run this test at different horizons and see whether misspecification of the horizon leads to different conclusions. The results are presented in Table 7, which also reports p-values for the two hypotheses. They are similar to those discussed above. When the horizon is specified correctly (k = -1), the hypothesis H1SM is rejected but the hypothesis H1SC is not, as should be the case. At the zero horizon, both hypotheses are rejected, while at higher horizons neither is.

Table 7. Testing Smoothing Versus Serially Correlated Policy Shock (True Rule Based on Lagged Fundamentals and Serially Correlated Policy Shock)

[table not reproduced]

This, again, shows that, with a misspecified horizon, the two patterns cannot be statistically distinguished. In particular, for forward-looking specifications, neither of the hypotheses – serial correlation or smoothing behavior – is rejected.28

IV. Conclusion

The vast literature on simple monetary policy rules still has to settle on empirical benchmark rules that can be used for policy recommendations, and on simple theoretical benchmark rules robust to model and output gap misspecifications. It is particularly important to take stock by surveying how Taylor rules are estimated and used for policy recommendations in the current literature (in closed economies, with a focus on empirical U.S. models), and by showing the problems involved in estimating these rules.

We start by documenting potential abuses of Taylor rules. Abuse occurs when these rules (and simple policy rules in general) are used as a basis for projections, or as a guide for policymakers, even though they are misspecified, incorrectly estimated, or not optimal. We provide a list of factors found in the literature that could contribute to such mistakes, ranging from straightforward empirical difficulties, such as short samples or serial correlation of the variables, to problems of a theoretical nature, such as the emergence of multiple equilibria under Taylor rules or their nonoptimality.

Even more important, we document a particular estimation problem – a misleading impression of certain types of behavior – by simulating a simple model paired with a lag-based Taylor rule and using the data obtained to estimate the monetary policy rule. The estimation results point to several other types of rules suggested in the literature, including a forward-looking rule and/or a rule with interest rate smoothing. These results demonstrate that there may be a high degree of statistical illusion, which creates the impression of a monetary policy more sophisticated than it actually is.

Our results are consistent with Orphanides (2001), who demonstrates that changing the horizon of the fundamentals in the rule changes the estimated coefficients; they are also consistent with Rudebusch (2002), who argues that a serially correlated policy shock may generate an illusion of interest rate smoothing.

The results of this paper and the evidence from the literature suggest that conclusions that are “too big” may be drawn from evidence that is “too little.” That could have negative consequences.

The most straightforward example is a situation where a policymaker is incorrectly advised to smooth the nominal interest rate path, and therefore not to react to new developments too strongly. Faced with an inflationary shock, such a policymaker will end up accommodating inflation, thereby unnecessarily destabilizing the economy.

Another example is a possible miscommunication between a central bank and the private sector. If the latter builds its expectations on a particular version of a Taylor rule that is erroneously believed to be the central bank’s policy, it may misjudge the future path of interest rates and make wrong investment decisions.

The least obvious example is a misallocation of central bank resources in developing economies. Implementation of an expectation-based Taylor rule with interest rate smoothing and no judgment implies a rather different strategy for a central bank striving to build its capacity than implementation of a lag-based rule with no smoothing and a substantial amount of judgment.

Although we limit the survey and the model simulations to the United States, the conclusions of this paper apply widely: these types of misestimation are not unique to U.S. data, merely the best documented for them. In other countries, especially emerging market and developing countries, it is even harder to specify simple policy rules correctly, owing to major structural breaks, the implementation of “stop-and-go” policies, and the lack of consistent data. Moreover, since in these countries the exchange rate channel is one of the strongest mechanisms of monetary policy transmission, the rule becomes more complicated and, hence, the potential for misestimation is even higher.

The goal of this paper is not to criticize Taylor rules, but their careless use. Taylor rules are very useful for summarizing monetary policy in rather simple terms. This simplicity, however, limits the scope of policy advice one can make based on these rules. In particular, Taylor rules are not likely to be very useful in dramatic circumstances, when, for example, asset bubbles burst, exchange rate volatility rises, or capital flows reverse.

As Svensson (2003) points out, there is a substantial gap between the research on policy rules and the actual policymaking process. Policymakers (as well as many researchers) believe that simple rules should not be followed mechanically (Taylor, 1993 and 1999), but rather used as guidelines. Fed Chairman Alan Greenspan emphasized this point in several of his speeches, in particular, at the American Economic Association meetings in January 2004, noting that “prescriptions of formal rules can, in fact, serve as helpful adjuncts to policy, as many of the proponents of these rules have suggested. But at crucial points, like those in our recent policy history … simple rules will be inadequate as either descriptions or prescriptions for policy.” Taylor (1999, p. 14) himself states that “when I proposed a specific simple policy rule in 1992, I suggested that the rule be used as a guideline along with several other policy rules.”

Guidelines would imply non-trivial deviations of actual policy from the Taylor rule prescriptions in unusual circumstances (for example, stock market crashes and political developments). Confusingly enough, many academic papers provide seemingly convincing evidence of policymakers closely and successfully following sophisticated policy rules instead of using them as mere guidelines. In this paper, we argue that such examples may be merely a statistical illusion, thus supporting the case for Taylor rules being more of a guideline rather than an explicit framework. In particular, we claim that these sophisticated versions of a Taylor rule documented in the literature could be a statistical misrepresentation driven by incorrect specification of the horizon of the fundamentals and by omission of the serially correlated policy shocks or measurement errors.

This type of problem is not unique to monetary economics. Writing on microeconomics and industrial organization, Klemperer (2003) notes in the abstract of his paper “Using and Abusing Economic Theory” that “economic theory is often abused in practical policy-making. There is frequently excessive focus on sophisticated theory at the expense of elementary theory; too much economic knowledge can sometimes be a dangerous thing.”

As Blinder (1997, p. 17) notes, “central banking in practice is as much art as science.” He points out that science helps in practicing this art. In a field as essential as monetary economics, it is also important to ensure that science does not drift too far away from the art.

APPENDIX I

Description of the Taylor Rules Specified in Table 1

1. Taylor (1993)
$$i_t = 1.00 + 1.50\,\pi_t + 0.50\, y_t$$

The 1987:Q1–1992:Q3 sample is covered. Output gap is constructed as deviations of log output from its linear trend. To estimate the trend, the 1984:Q1–1993:Q2 sample is used. Inflation is measured as the annual growth rate of the GDP deflator. This rule is hypothetical, not estimated.

2. Clarida, Gali and Gertler (2000). Page 157, Table II
$$i_t = 0.79\, i_{t-1} + 0.21\,\bigl(r^* - 4.12 + 2.15\, E_t\pi_{t+1} + 0.93\, E_t y_{t+1}\bigr)$$

The 1979:Q3–1996:Q4 sample is covered. Output gap is based on the estimates of the Congressional Budget Office (CBO). The paper also used estimates constructed as deviations of log output from its quadratic trend. Inflation is measured using the annualized rate of change of the GDP deflator between two subsequent quarters. The rule is estimated using GMM.

3. Orphanides (2001). Page 980, Table 6
$$i_t = 0.66\, i_{t-1} + 0.34\,\bigl(1.80 + 1.64\, E_t\pi_{t+4} + 0.97\, E_t y_{t+4}\bigr)$$

The 1987:Q1–1993:Q4 sample is covered. Real-time data are used − actual Fed forecasts for inflation (GDP deflator) and output gap calculated at the moment of decision taking (from the so-called Greenbook). The rule is estimated with OLS.

4. Ball and Tchaidze (2002). Page 111, Table II
$$i_t = 1.47 + 1.54\,\pi_t - 1.67\,(u_t - u_t^*).$$

The 1987:Q4–1995:Q4 sample is covered. Inflation is measured using annual growth of the GDP deflator. Time series for NAIRU are calculated based on real time estimations available in the economic literature. The rule is estimated using OLS.

5. Orphanides and Williams (2003). Page 46, Table 5 (Generalized rule optimized for s=0)
$$i_t = 0.72\, i_{t-1} + 0.28\,\bigl(r^* + 1.26\,\pi_t - 1.83\,(u_t - u^*) - 2.39\,(u_t - u_{t-1})\bigr)$$

This rule is optimized in a forward-looking model estimated for the U.S. economy, for the 1969–2002 period. The model assumes that policymakers have perfect information about the natural rate of interest and unemployment.

This rule is different from the others: the first four attempt to describe actual policy, whereas this one is derived by optimization within a model. However, we decided to include it because of its interesting functional form. Moreover, since many observers believe (or used to believe) that actual Fed behavior was optimal, these rules could be somewhat comparable.

References

  • Auray, Stephane, and Patrick Feve, 2003, “Money Growth and Interest Rate Rules: Is There an Observational Equivalence?IDEI Working Paper (Toulouse France: Institut d’Economie Industrielle).

    • Search Google Scholar
    • Export Citation
  • Auray, Stephane, and Patrick Feve, 2004, “Do Models with Exogenous Money Supply Produce a Taylor Rule Like Behavior?IDEI Working Paper (Toulouse France: Institut d’Economie Industrielle).

    • Search Google Scholar
    • Export Citation
  • Baba, Yoshihisa, David F. Hendry, and Ross M. Starr, 1992, “The Demand for M1 in the USA, 1960–1988,” Review of Economic Studies, Vol. 59 (January), pp. 2561.

    • Search Google Scholar
    • Export Citation
  • Ball, Laurence, 2000, “Near-Rationality and Inflation in Two Monetary Regimes,” NBER Working Paper No. 7988 (Cambridge, Massachusetts: National Bureau of Economic Research).

    • Search Google Scholar
    • Export Citation
  • Ball, Laurence, and Robert Tchaidze, 2002, “The Fed and the New Economy,” American Economic Review, Papers and Proceedings, Vol. 92 (May), pp. 10814.

    • Search Google Scholar
    • Export Citation
  • Benhabib, Jess, Stephanie Schmitt-Grohe, and Martin Uribe, 2001, “Monetary Policy and Multiple Equilibria,” American Economic Review, Vol. 91 (March), pp. 16786.

    • Search Google Scholar
    • Export Citation
  • Blinder, Alan S., 1986, “More on the Speed of Adjustment in Inventory Models,” Journal of Money, Credit, and Banking, Vol. 18 (August), pp. 35565.

    • Search Google Scholar
    • Export Citation
  • Blinder, Alan S., 1997, “What Central Bankers Could Learn from Academics and—Vice Versa,” Journal of Economic Perspectives, Vol. 11, No. 2, pp. 319.

    • Search Google Scholar
    • Export Citation
  • Bryant, Ralph C., Peter Hooper, and Catherine L. Mann, eds., 1993, Evaluating Policy Regime: New Research in Empirical Economics (Washington: Brookings Institution).

    • Search Google Scholar
    • Export Citation
  • Carlstrom, Charles T., and Timothy S. Fuerst, 2001, “Timing and Real Indeterminacy in Monetary Models,” Journal of Monetary Economics, Vol. 47, No. 2, pp. 28598.

    • Search Google Scholar
    • Export Citation
  • Chadha, Jagit S., Lucio Sarno, and Giorgio Valente, 2004, “Monetary Policy Rules, Asset Prices and Exchange Rates,” Staff Papers, International Monetary Fund, Vol. 51, No. 3, pp. 52952.

    • Search Google Scholar
    • Export Citation
  • Clarida, Richard, Jordi Galí, and Mark Gertler, 1998, “Monetary Policy Rules in Practice: Some International Evidence,” European Economic Review, Vol. 42 (June), pp. 1033-67.

    • Search Google Scholar
    • Export Citation
  • Clarida, Richard, Jordi Galí, and Mark Gertler, 1999, “The Science of Monetary Policy: A New Keynesian Perspective,” Journal of Economic Literature, Vol. 37, No. 4, pp. 1661707.

    • Search Google Scholar
    • Export Citation
  • Clarida, Richard, Jordi Galí, and Mark Gertler, 2000, “Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory,” Quarterly Journal of Economics, Vol. 115 (February), pp. 14780.

    • Search Google Scholar
    • Export Citation
  • English, William B., William R. Nelson, and Brian P. Sack, 2003, “Interpreting the Significance of the Lagged Interest Rate in Estimated Monetary Policy Rules,” Contributions to Macroeconomics, Vol. 3, No. 1, Article 5, Berkeley Electronic Press.

    • Search Google Scholar
    • Export Citation
  • Federal Reserve Board, 1995, Federal Open Market Committee Transcripts, FOMC Meeting, Jan. 31–Feb. 1. Available via Internet: http://www.federalreserve.gov/fomc/transcripts

  • Gerlach-Kristen, Petra, 2004, “Interest-Rate Smoothing: Monetary Policy Inertia or Unobserved Variables?Contributions to Macroeconomics, Vol. 4, No. 1, Article 3, (Berkeley Electronic Press).

    • Search Google Scholar
    • Export Citation
  • Giannoni, Marc, and Michael Woodford, 2003a, “Optimal Interest-Rate Rules: I. General Theory,” NBER Working Paper No. 9419, (Cambridge, Massachusetts: National Bureau of Economic Research).

    • Search Google Scholar
    • Export Citation
  • Giannoni, Marc, and Michael Woodford, 2003b, “Optimal Interest-Rate Rules: II. Applications,” NBER Working Paper No. 9420, (Cambridge, Massachusetts: National Bureau of Economic Research).

  • Greenspan, Alan, 2004, “Risk and Uncertainty in Monetary Policy,” American Economic Review, Vol. 94, No. 2, pp. 33–40.

  • Griliches, Zvi, 1967, “Distributed Lags: A Survey,” Econometrica, Vol. 35 (January), pp. 16–49.

  • Isard, Peter, Douglas Laxton, and Ann-Charlotte Eliasson, 1999, “Simple Monetary Policy Rules Under Model Uncertainty,” International Tax and Public Finance, Vol. 6 (November), pp. 537–77.

  • Jondeau, Eric, Herve Le Bihan, and Clementine Galles, 2004, “Assessing Generalized Method of Moments Estimates of the Federal Reserve Reaction Function,” Journal of Business and Economic Statistics, Vol. 22, No. 2, pp. 225–39.

  • Judd, John P., and Glenn D. Rudebusch, 1998, “Taylor’s Rule and the Fed: 1970–1997,” Economic Review, Federal Reserve Bank of San Francisco, No. 3, pp. 3–16.

  • King, Robert G., and Andre Kurmann, 2002, “Expectations and the Term Structure of Interest Rates: Evidence and Implications,” Economic Quarterly, Federal Reserve Bank of Richmond, Vol. 88, No. 4, pp. 49–95.

  • Klemperer, Paul, 2003, “Using and Abusing Economic Theory,” CEPR Discussion Paper No. 3813 (London: Centre for Economic Policy Research).

  • Kozicki, Sharon, 1999, “How Useful Are Taylor Rules for Monetary Policy?,” Economic Review, Federal Reserve Bank of Kansas City, Vol. 84, No. 2, pp. 5–33.

  • Lansing, Kevin, 2002, “Real-Time Estimation of Trend Output and Illusion of Interest Rate Smoothing,” Economic Review, Federal Reserve Bank of San Francisco, pp. 17–34.

  • McCallum, Bennett, 1999a, “Issues in the Design of Monetary Policy Rules,” in Handbook of Macroeconomics, Vol. 1C, ed. by John Taylor and Michael Woodford (Amsterdam: Elsevier Science, North-Holland).

  • McCallum, Bennett, 1999b, “Recent Developments in the Analysis of Monetary Policy Rules,” Federal Reserve Bank of Saint Louis Review, Vol. 81 (November/December), pp. 3–11.

  • Mehra, Yash P., 1993, “The Stability of M2 Demand Function: Evidence from an Error-Correction Model,” Journal of Money, Credit and Banking, Vol. 25 (August), pp. 455–60.

  • Mehra, Yash P., 1999, “A Forward-Looking Monetary Policy Reaction Function,” Economic Quarterly, Federal Reserve Bank of Richmond, Vol. 85, No. 2, pp. 33–53.

  • Minford, Patrick, Francesco Perugini, and Naveen Srinivasan, 2002, “Are Interest Rate Regressions Evidence for a Taylor Rule?,” Economics Letters, Vol. 76 (June), pp. 145–50.

  • Okun, Arthur M., 1962, “Potential GNP: Its Measurement and Significance,” in American Statistical Association 1962 Proceedings of the Business and Economic Statistics Section (Washington: American Statistical Association).

  • Onatski, Alexei, and James H. Stock, 2002, “Robust Monetary Policy Under Model Uncertainty in a Small Model of the U.S. Economy,” Macroeconomic Dynamics, Vol. 6, No. 1, pp. 85–110.

  • Orphanides, Athanasios, 2001, “Monetary Policy Rules Based on Real-Time Data,” American Economic Review, Vol. 91, No. 4, pp. 964–85.

  • Orphanides, Athanasios, 2003a, “The Quest For Prosperity Without Inflation,” Journal of Monetary Economics, Vol. 50, No. 3, pp. 633–63.

  • Orphanides, Athanasios, 2003b, “Historical Monetary Policy Analysis and the Taylor Rule,” Journal of Monetary Economics, Vol. 50, No. 5, pp. 983–1022.

  • Orphanides, Athanasios, and John C. Williams, 2003, “Robust Monetary Policy Rules with Unknown Natural Rates,” Finance and Economics Discussion Paper Series No. 2003–11 (Washington: Board of Governors of the Federal Reserve System).

  • Roberts, John M., 1998, “Inflation Expectations and the Transmission of Monetary Policy,” Finance and Economics Discussion Papers No. 1998–43 (Washington: Board of Governors of the Federal Reserve System).

  • Rogoff, Kenneth, 2003, “Globalization and Global Disinflation,” Economic Review, Federal Reserve Bank of Kansas City, Vol. 88, No. 4, pp. 45–79.

  • Romer, David, 2001, Advanced Macroeconomics (Boston: McGraw-Hill/Irwin, 2nd ed.).

  • Rudebusch, Glenn D., 2001, “Is the Fed Too Timid? Monetary Policy in an Uncertain World,” Review of Economics and Statistics, Vol. 83 (May), pp. 203–17.

  • Rudebusch, Glenn D., 2002, “Term Structure Evidence on Interest Rate Smoothing and Monetary Policy Inertia,” Journal of Monetary Economics, Vol. 49, No. 6, pp. 1161–87.

  • Rudebusch, Glenn D., and Lars E. O. Svensson, 1999, “Policy Rules for Inflation Targeting,” in Monetary Policy Rules, ed. by John B. Taylor (Chicago: University of Chicago Press).

  • Stock, James H., and Mark W. Watson, 2003, “Has the Business Cycle Changed? Evidence and Explanations,” in Monetary Policy and Uncertainty: Adapting to a Changing Economy (Kansas City, Missouri: Federal Reserve Bank of Kansas City).

  • Svensson, Lars E.O., 2003, “What is Wrong with Taylor Rules? Using Judgment in Monetary Policy through Targeting Rules,” Journal of Economic Literature, Vol. 41 (June), pp. 426–77.

  • Svensson, Lars E.O., and Michael Woodford, 2004, “Implementing Optimal Policy through Inflation-Forecast Targeting,” CEPR Discussion Paper No. 4229 (London: Centre for Economic Policy Research).

  • Swanson, Eric T., 2004, “Signal Extraction and Non-Certainty-Equivalence in Optimal Monetary Policy Rules,” Macroeconomic Dynamics, Vol. 8 (February), pp. 27–50.

  • Taylor, John B., 1993, “Discretion Versus Policy Rules in Practice,” Carnegie-Rochester Conference Series on Public Policy, Vol. 39 (December), pp. 195–214.

  • Taylor, John B., ed., 1999, Monetary Policy Rules (Chicago: University of Chicago Press).

  • Tchaidze, Robert, 2004, “The Greenbook and U.S. Monetary Policy,” IMF Working Paper 04/213 (Washington: International Monetary Fund).

  • Trehan, Bharat, and Tao Wu, 2004, “Time Varying Equilibrium Real Rates and Monetary Policy Analysis,” Federal Reserve Bank of San Francisco Working Paper No. 2004-10.

  • Walsh, Carl E., 2004, “Robustly Optimal Explicit Instrument Rules and Robust Control: An Equivalence Result,” Journal of Money, Credit and Banking, Vol. 36 (December), pp. 1105–13.

  • Woodford, Michael, 2001, “The Taylor Rule and Optimal Monetary Policy,” American Economic Review, Papers and Proceedings, Vol. 91 (May), pp. 232–37.

1

We are grateful to Nils Bjorksten, Oya Celasun, Robert Flood, Jim Morsink, Glenn Rudebusch, Silvia Sgherri, and the participants of various IMF seminars, as well as of the SMYE 2004, LAMES 2004, and LACEA 2004 conferences, for their comments and suggestions. We thank Philippe Karam and Gauti Eggertsson for technical discussions. We are especially indebted to Keith Küster, Lucio Sarno, and Niamh Sheridan for extensive comments. All remaining errors are our own.

2

A search in the EconLit database for the keyword “monetary policy rules” over 2000–03 returns 361 published articles, an average of roughly 90 a year.

3

The Economist uses Taylor rule prescriptions when describing the stance of monetary policy in the United States, United Kingdom, and euro area. Monetary Trends, published by the St. Louis Federal Reserve Bank, regularly reports on Taylor rule components.

4

Taylor (1993) identified potential output empirically with a linear trend, while other papers use quadratic or Hodrick-Prescott trends, or more sophisticated techniques.

5

For estimations of monetary policy rules with asset prices and exchange rates in industrial countries, see Chadha, Sarno, and Valente (2004).

6

The central bank expectations considered are either formed within a model, as in Clarida, Galí, and Gertler (2000), or are the central bank’s actual real-time estimates, as in Orphanides (2001). Mehra (1999) estimates the short-term interest rate as a function of the inflation expectations contained in bond rates.

7

Lag-based rules are not necessarily backward-looking, since lags serve as indicators of future values (see Tchaidze, 2004).

8

Reasons mentioned in the literature include model uncertainty, fear of disrupting capital markets, loss of credibility from a sudden large policy reversal, the need to build consensus for a policy change, and the central bank’s exploitation of the dependence of demand on expected future interest rates by signaling its intentions to the general public.

9

Rules with the unemployment gap appear more attractive because the natural rate of unemployment seemed easier to measure. During the mid-1990s, it was commonly believed that the NAIRU was a flat 6 percent. The productivity growth of the late 1990s and the arrival of the so-called New Economy began, only with a substantial delay, to challenge this belief (see Ball and Tchaidze, 2002).

10

One could derive versions of the Taylor rule as the solution to an optimization problem in which policymakers minimize a loss function expressed as a weighted average of inflation and output gap variances (see, for example, Woodford, 2001).
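
A stylized sketch of such a derivation, in generic notation rather than that of the paper: policymakers minimize a quadratic loss in inflation and output gap variability,

L = E\left[(\pi_t - \pi^*)^2 + \lambda y_t^2\right],

and, under standard model assumptions, the optimal policy can be implemented by a rule of the Taylor form

i_t = r^* + \pi^* + \alpha(\pi_t - \pi^*) + \gamma y_t,

where the response coefficients \alpha and \gamma depend on the relative weight \lambda and on the model’s structural parameters.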

11

In terms of stabilizing inflation around an inflation target without causing unnecessary output gap variability.

12

The two questions are commonly mixed up, though they are somewhat independent of each other. Properly formulated, the second question would read as follows: given the way the monetary authorities operate, what is the resulting response of the interest rate to movements in inflation and the output gap?

13

Svensson (2003, p. 429): “Monetary policy by the world’s more advanced central banks these days is at least as optimizing and forward-looking as the behavior of the most rational private agents. I find it strange that a large part of the literature on monetary policy still prefers to represent central bank behavior with the help of mechanical instrument rules.”

14

As the definition of the rule (see equation (1) on page 5) shows, one may view the Taylor rule as a more sophisticated version of an equilibrium relationship among the three variables (also known as a Fisher equation, i = r + π).
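
For reference, the original Taylor (1993) specification (which may differ in detail from equation (1) in the text) is

i_t = \pi_t + 0.5\, y_t + 0.5\, (\pi_t - 2) + 2,

where \pi_t is inflation over the previous four quarters and y_t is the percent deviation of real GDP from trend; setting both response coefficients to zero collapses the rule to the Fisher-type relationship i = r + \pi with an equilibrium real rate of 2 percent.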

15

Minford, Perugini and Srinivasan (2002) and Auray and Feve (2003 and 2004) demonstrate that money supply rules may be observationally equivalent to Taylor rules.

16

King and Kurmann (2002) analyze the term structure of U.S. interest rates, and Baba, Hendry, and Starr (1992) analyze U.S. money demand. Both papers find that U.S. interest rates are stationary in first differences and therefore nonstationary in levels, that is, I(1) series. However, Mehra (1993) finds in money demand studies that U.S. interest rates are stationary series, and Clarida, Galí, and Gertler (2000, p. 154) note that they treat interest rates as stationary series, “an assumption that we view as reasonable for the postwar U.S., even though the null of a unit root in either variables is often hard to reject.”
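
As a minimal illustration of the kind of unit-root check behind this disagreement, the sketch below runs augmented Dickey-Fuller tests on a simulated series; the data, sample length, and lag selection are assumptions made for the example, not those used in the cited papers.

```python
# Augmented Dickey-Fuller (ADF) tests on a simulated "interest rate" series.
# Illustrative only: the series below is a generated random walk, not U.S. data.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T = 200
rate = 4.0 + np.cumsum(rng.normal(scale=0.25, size=T))  # I(1) by construction

for label, series in [("levels", rate), ("first differences", np.diff(rate))]:
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    print(f"ADF on {label}: statistic = {stat:.2f}, p-value = {pvalue:.3f}")

# Because the series is a random walk by construction, the test should fail to
# reject a unit root in levels but reject it in first differences, which is the
# pattern behind an I(1) classification of interest rates.
```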

17

Two-step, iterative, and continuously updating.

18

In fact, one may wonder whether the Fed had a constant inflation target during Volcker’s chairmanship (see Tchaidze, 2004).

19

“It seems to me that a reaction function in which the real funds rate changes by roughly equal amounts in response to deviations of inflation from a target of 2 percent and to deviations of actual from potential output describes tolerably well what this Committee has done since 1986.” (Federal Reserve Board, 1995, pp. 43–44)

20

Contrary to other researchers, who claim that U.S. monetary policy in the 1970s was “bad” and led to high inflation, Orphanides (2003a) shows that policy was “good” but based on “bad,” misleading data.

21

Such a phenomenon has been documented in the literature before. See Griliches (1967) and Blinder (1986).

22

With a forward-looking rule, it becomes rather difficult to disentangle different effects and to understand the misestimation problems. Therefore, we build up from the simplest rule.

23

These are the most commonly reported statistics in this field.

24

We use a two-stage least squares procedure with GMM standard errors (Bartlett kernel, fixed bandwidth, and no prewhitening). We chose this technique because it ensures fast convergence. However, we found that the conclusions are the same when using alternative estimators, such as heteroskedasticity- and autocorrelation-consistent (HAC) GMM (quadratic kernel, variable Newey-West bandwidth, and no prewhitening). The standard deviations of the estimated parameters across the 500 simulated scenarios double but remain very small.
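
A minimal sketch of such an estimator in plain NumPy, not the authors’ code: two-stage least squares with Newey-West (Bartlett kernel, fixed bandwidth) standard errors for a rule in which the interest rate responds to contemporaneous inflation and the output gap, instrumented with one lag of the regressors. The simulated data, bandwidth, and instrument set are illustrative assumptions.

```python
# Two-stage least squares with Bartlett-kernel (Newey-West) standard errors,
# a plain-NumPy sketch of the type of estimator described in this footnote.
# The simulated data and the instrument set are illustrative, not the paper's.
import numpy as np

def tsls_newey_west(y, X, Z, bandwidth=4):
    """2SLS of y on X using instruments Z, with Bartlett-kernel HAC covariance."""
    n = len(y)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    Xhat = Pz @ X                            # first-stage fitted regressors
    XtX = Xhat.T @ Xhat
    beta = np.linalg.solve(XtX, Xhat.T @ y)  # second-stage coefficients
    u = y - X @ beta                         # residuals use the original X
    g = Xhat * u[:, None]                    # moment contributions
    S = g.T @ g / n
    for lag in range(1, bandwidth + 1):      # Bartlett weights, fixed bandwidth
        w = 1.0 - lag / (bandwidth + 1.0)
        Gamma = g[lag:].T @ g[:-lag] / n
        S += w * (Gamma + Gamma.T)
    cov = np.linalg.solve(XtX, np.linalg.solve(XtX, n * S).T)
    return beta, np.sqrt(np.diag(cov))

# Illustrative data: the policy rate responds to inflation and the output gap.
rng = np.random.default_rng(1)
T = 500
pi = 2.0 + np.cumsum(rng.normal(scale=0.1, size=T))      # persistent inflation
gap = np.zeros(T)
for t in range(1, T):                                     # AR(1) output gap
    gap[t] = 0.8 * gap[t - 1] + rng.normal(scale=1.0)
i = 1.0 + 1.5 * pi + 0.5 * gap + rng.normal(scale=0.3, size=T)

X = np.column_stack([np.ones(T - 1), pi[1:], gap[1:]])
Z = np.column_stack([np.ones(T - 1), pi[:-1], gap[:-1]])  # lagged instruments
beta, se = tsls_newey_west(i[1:], X, Z)
print("coefficients:", beta.round(2), "standard errors:", se.round(3))
```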

25

Four lags of the instruments were used, as is common in the literature (Rudebusch, 2002; and Clarida, Galí, and Gertler, 2000) and as indicated by the overidentifying restrictions test.

26

The residual term does not disappear since its subcomponents emerge at different stages.

27

Lansing (2002) also suggests that real-time measurement errors could explain the illusion of the presence of the lagged federal funds rate in estimated policy rules. Trehan and Wu (2004) find that estimated policy rules will tend to exaggerate the degree of interest rate smoothing if the monetary authorities target a time-varying equilibrium real interest rate.

28

The debate on the presence of a lagged interest rate in estimated policy rules is not yet settled. Rudebusch (2002) argues in favor of a rule with a serially correlated policy shock, as opposed to a specification with partial interest rate adjustment. English, Nelson, and Sack (2003), using a different nested specification, report evidence of both types of behavior. Gerlach-Kristen (2004, p. 1) also finds that “both seem to matter, but that policy inertia appears to be less important than suggested by the existing literature.”
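
To make the two specifications concrete, in generic notation rather than that of the cited papers, write the Taylor-type target as i_t^* = r^* + \pi^* + \alpha(\pi_t - \pi^*) + \gamma y_t. Partial interest rate adjustment is

i_t = \rho\, i_{t-1} + (1 - \rho)\, i_t^* + \varepsilon_t,

while a serially correlated policy shock is

i_t = i_t^* + u_t, \qquad u_t = \rho\, u_{t-1} + \varepsilon_t.

Both produce persistent deviations of the observed rate from the target, which is why nested specifications or term structure evidence are used to tell them apart.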