The Use and Abuse of Taylor Rules

Author(s):
Robert Tchaidze, and Alina Carare
Published Date:
July 2005

I. Introduction

In recent years, evaluating simple policy rules has become one of the most common exercises in the economics literature, especially since the publication of John Taylor’s paper in 1993.2 Taylor demonstrated that a simple reaction function, later known as the Taylor rule, with a policy instrument (a short-term interest rate) responding to movements in fundamental variables (inflation and the output gap), closely follows the observed path of the U.S. federal funds rate in the late 1980s and early 1990s.

This literature on simple policy rules, which is especially vast for the United States, has yet to settle on empirical benchmark rules that could be used for policy recommendations. As of now, different authors have found different monetary policy rules that all fit the data well but correspond to different interpretations of the same underlying monetary policy decision-making process, as illustrated in Table 1 and detailed in the Appendix.

Table 1. Short List of Proposed Rules for the U.S. Data

Paper                                   Rule
1. Taylor (1993)                        i_t = 1.00 + 1.50 π_t + 0.50 y_t
2. Clarida, Gali, and Gertler (2000)    i_t = 0.79 i_{t−1} + 0.21 (r* − 4.12 + 2.15 E_t π_{t+1} + 0.93 E_t y_{t+1})
3. Orphanides (2001)                    i_t = 0.66 i_{t−1} + 0.34 (1.80 + 1.64 E_t π_{t+4} + 0.97 E_t y_{t+4})
4. Ball and Tchaidze (2002)             i_t = 1.47 + 1.54 π_t − 1.67 (u_t − u_t*)
5. Orphanides and Williams (2003)       i_t = 0.72 i_{t−1} + 0.28 (r* + 1.26 π_t − 1.83 (u_t − u*) − 2.39 (u_t − u_{t−1}))

These rules describe the same country (the United States) and the same period (the late 1980s and 1990s), and each is supported by a good data fit, figures, and quotes from policymakers’ speeches that explain the U.S. monetary policy setting as these rules suggest. Nevertheless, the rules differ markedly from one another: in functional form (use of output or unemployment gaps, inclusion of a lagged interest rate and/or unemployment growth), in the timing of the fundamentals (contemporaneous or expected), and in the values of the response coefficients.

We believe this happens because current methods do not allow researchers to distinguish properly between these proposed rules and, hence, may lead them to wrong conclusions about the nature of monetary policy.

The contribution of this paper to the literature is twofold. First, we survey the literature, focusing not only on how these rules are evaluated and used for policy recommendations but, more importantly, on how they are abused. Second, using Monte Carlo simulations, we demonstrate how estimating rules on the same data can produce policy rules as different as those in Table 1.

By “abuses of policy rules” we mean giving policy advice or making projections based on benchmark rules that are either selected for the wrong reasons or incorrectly estimated. We provide a list of theoretical and empirical problems documented in the literature, emphasizing the practical difficulties one would experience in using Taylor rules. In this regard, our paper could be thought of as a counterpart to Svensson’s 2003 paper pointedly titled “What Is Wrong with Taylor Rules?”

To demonstrate how estimating the same data produces different policy rules, we simulate inflation, output gap, and short-term interest rate series using a simple macroeconomic model in which short-term interest rates depend only on lagged values of the output gap and inflation. The data obtained are then used to estimate alternative policy rules commonly found in the literature. We do find a forward-looking rule, i.e., one driven by expectations of the fundamentals rather than by their lags, as well as a rule with interest rate smoothing, i.e., one that includes the lagged interest rate among the regressors. We also find rules that respond to other variables, such as output gap growth or the inflation differential.

Evaluating monetary policy reaction functions is very much like searching for a black cat in a dark room without knowing for sure whether the cat is even there. In our Monte Carlo simulation exercise we look for a black cat in a brightly lit room, knowing that it is there, and yet we cannot distinguish it from a dog or an elephant. Our findings are consistent with Orphanides (2001), who demonstrates that changing the horizon of the fundamentals in the rule may change the estimated coefficients, and with Rudebusch (2002), who argues that the presence of a serially correlated policy shock in the policy rule is observationally equivalent to a rule with no policy shock but with interest rate smoothing.

Our results have significant implications, as estimated rules often form a basis for policy advice. A policymaker incorrectly advised to smooth an interest rate path may end up accommodating an inflationary shock instead of fighting it. Private agents who incorrectly believe a certain type of Taylor rule to be the guide for monetary policy will misjudge future interest rates and make poor investment decisions.

Overall, rather than criticizing the simple policy rules literature, we aim to point out a range of issues that should be taken into account if Taylor rules are to be used as a basis for policy advice. As Svensson (2003, p. 429) notes, “no central bank has so far made a commitment to a simple instrument rule like the Taylor rule or variants thereof. Neither has any central bank announced a particular instrument rule as a guideline.” Nevertheless, a vast industry of academics, journalists, advisors, consultants, and even practitioners of an academic bent argues about central bank behavior using Taylor rules as a basis.3 We suggest that many sophisticated rules found in the literature may be statistical misrepresentations of policy rules that are in reality less sophisticated but more appealing in practical terms.

The paper is organized as follows. Section II surveys the existing literature, describing the Taylor rule, its modifications, and its uses as well as its abuses. An eager reader, however, could jump straight to Section III, which describes the empirical exercise: the simulation procedure and the results. Section IV concludes.

II. Taylor Rules: A Literature Survey

A. The Taylor Rule and Its Modifications

The best-known simple instrument rule is the Taylor rule, in which the instrument, the nominal short-term interest rate, responds only to inflation and the output gap. Taylor (1993) suggested this rule as an explanation of the monetary policy setting during the early years of Alan Greenspan’s chairmanship of the Board of Governors of the U.S. Federal Reserve System, hereafter “the Fed” (1987–92). Because the rule described a complicated process in very simple terms and fitted the data very well, it quickly became very popular. We start by describing the original Taylor rule (1993) and then present the modifications it has since undergone.

The Taylor rule (1993) is defined as

i_t = r* + π_t + C_π (π_t − π*) + C_y y_t,

where i_t is the short-term nominal interest rate in period t; r* is the real interest rate target; π_t − π* is the “inflation gap,” the difference between actual inflation π_t and the inflation target π*; y_t = log Y_t − log Y_t* is the output gap, where Y_t is real GDP and Y_t* is potential output;4 and the coefficients C_π and C_y are positive. In the original Taylor (1993) formulation, C_π and C_y were both 0.5, and the inflation and real interest rate targets were 2 percent each; collecting constants, the rule becomes i_t = C + (1 + C_π) π_t + C_y y_t, where the constant C = r* − C_π π* was equal to 1.
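As an inline illustration, the rule above can be written as a one-line function. The parameter defaults are Taylor’s 1993 values from the text; the function name is ours:

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                c_pi=0.5, c_y=0.5):
    """Nominal rate implied by the Taylor (1993) rule.

    i_t = r* + pi_t + c_pi * (pi_t - pi*) + c_y * y_t
    All arguments are in percent; defaults are Taylor's 1993 values,
    so the reduced form is i_t = 1 + 1.5 * pi_t + 0.5 * y_t.
    """
    return r_star + inflation + c_pi * (inflation - pi_star) + c_y * output_gap

# With inflation on target and a closed gap, the rule returns the
# neutral rate r* + pi* = 4 percent.
neutral = taylor_rate(2.0, 0.0)
```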

The original Taylor rule has undergone various modifications as researchers have tried to make it more realistic or appropriate. In this paper we limit our presentation to modifications suited for closed economies and to rules not based on asset prices,5 since these are the ones most commonly used. The discussion and conclusions of the paper, however, are expected to apply to all types of simple monetary policy rules. We document six such modifications, along with the basic theoretical rationale for each.

One modification to the original rule has been to incorporate forward-looking behavior in order to counteract the seeming shortsightedness of policymakers, making the short-term interest rate a function of central bank expectations of output gap and inflation rather than their contemporaneous values.6

An alternative modification has been to introduce lags of inflation and output gap. It has been pointed out in the literature that because it is not possible to know the actual output gap and inflation at the time of setting the interest rate, using lags would make the timing more realistic (McCallum, 1999a).7

Interest rate-smoothing behavior (including a lagged short-term interest rate among the fundamentals) is the single most popular modification of the Taylor rule. Clarida, Gali, and Gertler (1999) note that although the necessity of including an interest rate-smoothing term has not yet been proven theoretically, it seems rather intuitive for several reasons.8

As simple as it is, the Taylor rule cannot possibly take into account all the factors affecting the economy. Policymakers are known to react not only to movements in the output gap and inflation but also to exchange rate movements, stock market developments, political events, and so on. One way to capture this is to introduce a new variable, a so-called policy shock, reflecting the judgmental element of the policymaking process.

Some authors, such as Taylor (1999) and Orphanides and Williams (2003), use the unemployment gap instead of the output gap to improve the fit to the data. This modification reflects Okun’s law (1962), which links the output gap and the unemployment gap. This type of rule tends to perform quite well in terms of stabilizing economic fluctuations, at least when the natural rates of interest and unemployment are accurately measured.9

Finally, it has been suggested to use growth rates of unemployment or of the output gap to account for measurement errors in the real-time estimates of the natural rate of unemployment and/or potential output (McCallum, 1999a; Orphanides and Williams, 2003).

B. Uses and Abuses

Taylor rules have been widely used in theoretical and empirical papers, with the latter examining the rules both from descriptive and prescriptive points of view.

The focus of research in theoretical papers has been on whether simple rules solve the time inconsistency bias (McCallum, 1999a); whether they are optimal (McCallum, 1999a; Svensson, 2003; Woodford, 2001; etc.);10 and on how they perform in different macroeconomic models (Taylor, 1999; Isard, Laxton and Eliasson, 1999).11

As for the empirical papers, those with a descriptive point of view include analyses of various specifications and estimations of the Taylor rule (Clarida, Gali, and Gertler, 1998; Kozicki, 1999; Judd and Rudebusch, 1998; etc.). These studies examine particular historical episodes and address two questions: to what extent are simple instrument rules good empirical descriptions of central bank behavior, and what is the average response of the policy instrument to movements in various fundamentals?12

Empirical papers with a prescriptive point of view suggest what the interest rate should be (McCallum, 1999a and 1999b; Bryant, Hooper, and Mann, 1993; Taylor, 1999), or how it should be set. Commonly, these suggestions are based on rules that are either the outcome of theoretical papers or the result of estimating “good/successful” periods of monetary policy. The potential abuses in prescriptive papers are mainly related to the choice of the benchmark rules, whether based on theory or empirical evidence. In the two following subsections, we document and provide a brief description of the problems that might arise when choosing such rules.

C. Theoretical Choice of a Benchmark Rule

Policy advice based on rules from theoretical models comes from rules simulated or derived in a model or class of models considered representative of the economy. There are potential problems with this approach as documented in the literature and surveyed below.

  • Svensson (2003) and Woodford (2001) warn that commitment to simple rules may not always be optimal, as a simple policy rule may be a solution too simple for a task as complex as that of a central bank.13

  • Simple policy rules may not be robust across different models. Due to uncertainty about the true model of the economy and/or potential output levels, the most recent theoretical efforts have concentrated on suggesting a set of robust simple rules that could be used as a basis for policy advice, as in Giannoni and Woodford (2003a and 2003b), Svensson and Woodford (2004), Walsh (2004), etc. Isard, Laxton and Eliasson (1999) show that several classes of Taylor type rules perform very poorly in moderately nonlinear models.

  • Several recent papers show that, when the central bank follows Taylor type rules in sticky price models of the type that fit the U.S. data well, the price level may not be determined, and there could be several paths for the instrument and multiple equilibria, all coming from the same model with the same rule (Benhabib, Schmitt-Grohe, and Uribe, 2001; Carlstrom and Fuerst, 2001; etc.).

  • How policymakers should respond to the presence of measurement errors is a question with no firm answer yet. While some researchers advocate a more cautious approach, with smaller response coefficients (Orphanides, 2001), others advocate a more aggressive approach (i.e., with larger coefficients) to policymaking (Onatski and Stock, 2002). Finally, some studies have argued in favor of “certainty equivalence,” which implies no changes in policymakers’ behavior and response coefficients (Swanson, 2004).

  • Most theoretical papers talk about inflation in rather generic terms. Thus, when it comes to policy prescriptions, it is not clear which particular measure should be used: the Consumer Price Index (CPI), core CPI, CPI less food and energy, the GDP deflator, etc. Even after a particular index is chosen, there are more choices to make: annual or quarterly? If annual, is it the average of quarterly numbers or the growth rate over four quarters? Is the growth in the CPI calculated as a log difference or a ratio? Even though the differences between these various calculations may be minimal when inflation is low and stable, one should be aware of these caveats. Similar issues arise when measuring the output gap.
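These measurement choices are easy to make concrete. The sketch below, using a hypothetical quarterly price index, shows how the four-quarter growth rate, the average of annualized quarterly rates, and log-difference versus ratio growth all yield slightly different inflation numbers:

```python
import math

# Hypothetical quarterly price index levels (one value per quarter).
cpi = [100.0, 101.0, 102.2, 103.1, 104.3]

# Quarter-over-quarter growth, as a ratio and as a log difference (percent).
ratio_qoq = [100 * (cpi[t] / cpi[t - 1] - 1) for t in range(1, len(cpi))]
log_qoq = [100 * math.log(cpi[t] / cpi[t - 1]) for t in range(1, len(cpi))]

# Annual inflation as the growth rate over four quarters...
four_quarter = 100 * (cpi[4] / cpi[0] - 1)
# ...versus the average of annualized quarterly growth rates.
avg_quarterly = sum(4 * g for g in ratio_qoq) / len(ratio_qoq)
```

The log-difference rates are always slightly below the ratio-based rates when inflation is positive, and the two “annual” measures do not coincide, illustrating why the exact definition matters.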

  • Any formula-based recommendation is bound to ignore the judgmental element, which reflects policymakers’ account of other developments not reflected in the output gap or inflation behavior.

D. Empirical Choice of a Benchmark Rule

Policy advice based on rules from empirical papers usually comes from estimating a period that is considered “good” or “successful” in combating inflation, promoting output growth, or both. Like the theoretical approach, this empirical approach brings with it several problems.

  • Rogoff (2003) notes that it is not clear how much credit policymakers deserve for the exceptionally good performance of many economies in the last 15 years or so. He notes that the achievement of price stability globally may be due not only to good policymaking but also to a favorable macroeconomic environment. The main cause he identifies is globalization, which, through increased competition, has put downward pressure on prices.

  • Stock and Watson (2003) also argue that improvements in the conduct of monetary policy after 1979 are only partially responsible for reducing the variance of output during business cycle fluctuations. This could have been caused by “improved ability of individuals and firms to smooth shocks because of innovation and deregulation in financial markets” (Stock and Watson, 2003, p. 46). They also note that during this period, macroeconomic shocks were “unusually quiescent” (p. 46).

  • Even if one empirically finds a Taylor rule, this does not imply that it is the basis of monetary policy decision making. The empirical relationship found may be a reflection of something else: a long-term relationship among the nominal interest rate, inflation, and the output gap,14 or a completely different kind of monetary policy.15

  • Also, as Svensson and Woodford (2004, p. 24) note, “Any policy rule implies a ‘reaction function,’ that specifies the central bank’s instrument as a function of predetermined endogenous or exogenous variables observable to the central bank at the time that it sets the instrument.” They warn that this “implied reaction function” should not, in general, be confused with the policy rule itself (see also footnote 12).

  • When making policy prescriptions, can one really impose the implied response coefficients and targets of one economic or policy regime on another without accounting for changes in the structure of the economy? Greenspan in particular has warned about this abuse on several occasions, including in his January 2004 speech to the American Economic Association (AEA) meetings: “Such rules suffer from much of the same fixed-coefficient difficulties we have with our large-scale models.”

  • Even though there may be no changes in the economy, there may be changes in the attitude of policymakers. Such changes could be reflected in a shifting of targets for real interest rate or inflation (which, in terms of Taylor rules, translate into a different constant), or there may be changes in the weights that policymakers assign to inflation variance and output gap variance (which, in terms of Taylor rules, translates into different inflation and output gap response coefficients).

  • Coefficients might not be estimated with a very high degree of precision, and standard errors could be quite large. Once the size of the confidence intervals is taken into account, the policy recommendations on how the instrument should be set could become blurred.

  • While coefficients may be estimated for very particular measures of inflation and/or the output gap (for example, CPI less food and energy, and Hodrick-Prescott (HP) detrended log output), only the coefficient values get “remembered.” When policy recommendations are made, these coefficients may be coupled with different measures (for example, the GDP deflator and linearly detrended log output, which generally yields larger output gap values than HP detrending), without taking into account that the coefficients would have been different had these alternative measures been used in the estimation.
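The two detrending choices mentioned above can be contrasted directly. The sketch below implements linear detrending and a minimal Hodrick-Prescott filter from first principles; the series is made up, and lambda = 1600 is the conventional value for quarterly data:

```python
import numpy as np

def linear_detrend(y):
    """Gap as deviations from a fitted linear time trend."""
    t = np.arange(len(y))
    return y - np.polyval(np.polyfit(t, y, 1), t)

def hp_detrend(y, lam=1600.0):
    """Gap as deviations from a Hodrick-Prescott trend.

    Minimizes sum((y - tau)^2) + lam * sum((second difference of tau)^2);
    the first-order condition gives (I + lam * K'K) tau = y.
    """
    n = len(y)
    K = np.zeros((n - 2, n))        # second-difference operator
    for j in range(n - 2):
        K[j, j], K[j, j + 1], K[j, j + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * K.T @ K, y)
    return y - trend

# Made-up log-output series whose trend growth changes midway:
# the two methods then disagree about the size of the "gap."
y = np.concatenate([0.005 * np.arange(20), 0.1 + 0.02 * np.arange(30)])
gap_linear = linear_detrend(y)
gap_hp = hp_detrend(y)
```

Both gap series average to zero by construction, yet they differ period by period, so a coefficient estimated against one measure need not carry over to the other.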

  • As mentioned above, any formula-based recommendation is bound to ignore the judgmental element, which is an important factor behind policy decisions.

  • Finally, can we actually estimate the rules properly? As the next section argues, the answer is “not really,” at least not with the methods commonly used today.

E. Estimating Taylor Rules

The rules are usually estimated using either ordinary least squares (OLS) if they are backward looking (see, for example, Orphanides, 2001), or instrumental variables and the generalized method of moments (GMM) if they are forward looking (see, for example, Clarida, Gali, and Gertler, 2000). It is not obvious that the following econometric problems are always addressed properly, or even taken into consideration:

  • The most obvious econometric question is how to deal with the high serial correlation of the variables. The common recipe is to use Newey-West standard errors and serial-correlation-robust estimators to account for heteroskedasticity and autocorrelation, and instrumental variables to account for forward-looking rules. It is worth noting, however, that while papers estimating Taylor rules commonly treat interest rates as stationary series, most term structure and money demand papers treat interest rates of various maturities as I(1) series,16 which would call for different econometric techniques.

  • The estimates are not very robust to differences in assumptions or estimation techniques. Jondeau, Le Bihan, and Galles (2004) show that, over the baseline period 1979–2000, alternative estimates of the Fed’s reaction function using several GMM estimators17 and a maximum likelihood estimator yield substantially different parameter estimates. Estimation results may also not be robust with respect to sample periods, to different sets of instrumental variables, or to the order of lags (when lags of variables are used as instruments).

  • In addition, estimation of Taylor rules very often requires inputs from separate estimation exercises, such as an evaluation of the output gap or the NAIRU. These procedures are subject to the same kinds of problems, and, hence, the uncertainty around the coefficients compounds.

  • As in other empirical papers, making policy recommendations based on rules estimated from a short sample is not advised. This caveat applies especially to countries that have short periods of stable data.

  • The alternative use of long samples often ignores the possibility of changes in the parameters of the rule—response coefficients or real interest rate or inflation targets. For example, one should make a distinction between the monetary regime of the Fed during Paul Volcker’s chairmanship and that during Greenspan’s chairmanship. While, in both periods, the Fed was committed to price stability, it is doubtful that inflation targets were the same.18 A former Fed Governor, Janet Yellen (Federal Reserve Board, 1995, pp. 43–4), confirms this implicitly when she says that the Taylor rule seems to be a good description of the Fed’s behavior since 1986, but not of its behavior from 1979 when Volcker was appointed chairman, to 1986.19

  • A rather important but still commonly overlooked caveat has been given by Orphanides (2001, p. 964). He finds that real-time policy recommendations differ considerably from those obtained with ex post revised data, and that estimated policy reaction functions based on such data provide misleading descriptions of historical policy and obscure the behavior suggested by information available to the Fed in real time.20

  • The illusory effects of a stronger or weaker response to movements in certain fundamentals, arising from misspecification of their horizon, are documented by Orphanides (2001). He shows that a policy reaction function that is forward looking but includes forecasts of less than four quarters ahead yields higher estimates for the lagged federal funds rate and the output gap, but a lower estimate for inflation, than the specification with forecasts four quarters ahead.

  • Another illusory effect, caused by monetary policy inertia, is documented by Rudebusch (2002). He argues that a policy rule with interest rate smoothing is difficult to distinguish from a rule with serially correlated policy shocks.21 While in the former, persistent deviations from the output gap and inflation responses occur because policymakers are deliberately slow to react, in the latter these deviations reflect policymakers’ responses to other persistent influences. Rudebusch proposes distinguishing between the two by analyzing the interest rate term structure.

III. Simulations

The empirical part of our paper illustrates, via Monte Carlo simulations, how illusions involving certain types of Taylor rules may arise. We show how easy it is to confuse a particular policy setting with one that is theoretically very different. Our simulations demonstrate a substantial degree of statistical illusion, yielding an impression of a monetary policy more sophisticated than it actually is.

A. The Model

The model used in our simulations is described in Rudebusch and Svensson (1999). It consists of two equations – a Phillips curve, where quarterly inflation is determined by its four lags and an output gap lag; and an IS curve, where a quarterly output gap is determined by its own two lags and an annual real interest rate. Here we use the Rudebusch (2001) version of the model:

where π denotes the inflation rate, y denotes the output gap (the deviation of output from potential), and i denotes the short-term nominal interest rate. All quarterly variables are annualized, a bar denotes an annual variable (the average of quarterly data), and σε = 1.007 and ση = 0.822.

This model is completely backward looking, implicitly assuming adaptive expectations, and has become something of a standard tool in policy analysis (see Romer, 2001). Its use in the literature, and in this paper, is motivated by the following reasons, as stated in Rudebusch (2001, p. 205): “[…] although [the model’s] simple structure facilitates the production of benchmark results, this model also appears to roughly capture the views about the dynamics of the [U.S.] economy held by some monetary policymakers.” Also, “the model can be interpreted as a restricted VAR, and appears to be stable over various sub-samples” (p. 205). Last but not least, the model is a standard used in the literature to analyze the performance of Taylor rules because, as Taylor (1999) notes, although it has nonrational expectations, it is empirically more accurate. The literature suggests that alternative forward-looking frameworks do not fit the observed data well unless some agents are to some degree backward looking (e.g., Ball, 2000; and Roberts, 1998).

The model implies that a policymaker can affect inflation only after two periods: monetary policy affects the output gap with a one-period lag, and the output gap in turn affects inflation with a one-period lag. The lagged inflation coefficients in the Phillips curve are restricted so that their sum equals 1; the results, however, are very similar even without imposing this restriction. Finally, note that the model implies a steady state where y* = 0 and i* = π*.

To close the model, we assume that the policymaker sets the quarterly interest rate according to a Taylor rule as follows:

i_t = 1.5 π̄_{t−1} + 0.5 y_{t−1}.

The rule is very similar to that proposed in Taylor (1993). The only difference is that the interest rate responds to the quarterly lags of the fundamentals rather than to their contemporaneous values, since the lags reflect the latest information a policymaker can actually observe (McCallum, 1999a). The rule is also consistent with the zero steady state (y* = i* = π* = 0) implied by the model.

This rule can be characterized as “naïve to the fourth degree.” First, the monetary policy setting is explained by only two fundamentals, the output gap and inflation. Second, the setting is backward looking, with the short-term interest rate reacting only to the latest observed values of the fundamentals. Third, the rule assumes that the policymaker ignores possible data measurement errors. And finally, the rule is completely mechanical, with no judgmental element. Nevertheless, we use this type of rule because it makes the results very tractable. The rule also makes sense intuitively.22
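A minimal simulation of this setup can be sketched as follows. The Phillips-curve and IS-curve coefficients are our assumption, taken from the commonly cited Rudebusch and Svensson (1999) estimates (the paper’s exact Rudebusch (2001) values may differ slightly); the shock standard deviations are those given above, and the rule applies Taylor’s 1.5 and 0.5 responses to lagged annual inflation and the lagged gap:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed coefficients (commonly cited Rudebusch-Svensson estimates).
A_PI = [0.70, -0.10, 0.28, 0.12]   # inflation lags in the Phillips curve
A_Y = 0.14                         # output-gap lag in the Phillips curve
B_Y = [1.16, -0.25]                # output-gap lags in the IS curve
B_R = -0.10                        # annual real interest rate in the IS curve
C_PI, C_Y = 1.5, 0.5               # Taylor (1993) responses, zero targets
SIG_E, SIG_H = 1.007, 0.822        # shock standard deviations from the text

T = 1000
pi = np.zeros(T)
y = np.zeros(T)
i = np.zeros(T)

# First four periods start at the zero steady state, then iterate.
for t in range(4, T):
    pi[t] = (sum(a * pi[t - 1 - j] for j, a in enumerate(A_PI))
             + A_Y * y[t - 1] + SIG_E * rng.standard_normal())
    i_bar = i[t - 4:t].mean()        # annual (four-quarter average) rate
    pi_bar = pi[t - 4:t].mean()      # annual inflation, lagged one quarter
    y[t] = (B_Y[0] * y[t - 1] + B_Y[1] * y[t - 2]
            + B_R * (i_bar - pi_bar) + SIG_H * rng.standard_normal())
    # Backward-looking Taylor rule on the lagged fundamentals.
    i[t] = C_PI * pi_bar + C_Y * y[t - 1]
```

Because the rule satisfies the Taylor principle (an inflation response above one), the simulated system is stable despite the unit-sum restriction on the inflation lags.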

To extend our analysis, we also conduct separate simulations with a Taylor rule augmented by a serially correlated policy shock:

i_t = 1.5 π̄_{t−1} + 0.5 y_{t−1} + z_t,   where z_t = ρ z_{t−1} + u_t.

Serially correlated policy shocks may arise for various reasons. They could reflect serially correlated disturbances beyond those captured by movements of lagged inflation and/or the output gap, such as credit crunches, oil and commodity price shocks, exchange rate movements, and stock market developments.

These shocks could also represent a measurement error, the difference between what the policymakers thought about the state of the economy in real time and what researchers know ex post. Such errors will be serially correlated if, for example, errors arise because of changes in productivity trends. In the early 1970s, productivity growth slowed down − a development that was not immediately understood and which led to differences between the actual growth of potential output and what was perceived to be such. A similar development occurred in the mid–1990s, when productivity growth increased, leading potential output to grow faster than expected. Both these events helped to cause the emergence of what ex post could look like a serially correlated policy shock (see also footnote 9).
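Such a shock is straightforward to generate. The sketch below uses an AR(1) process; rho and the innovation volatility sigma_u are illustrative assumptions, not the paper’s values:

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) policy shock z_t = rho * z_{t-1} + u_t (illustrative parameters).
rho, sigma_u, T = 0.8, 0.5, 1000
z = np.zeros(T)
for t in range(1, T):
    z[t] = rho * z[t - 1] + sigma_u * rng.standard_normal()

# The shock is highly persistent: its lag-1 autocorrelation is close to
# rho, which is precisely what makes it hard to tell apart from smoothing.
```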

B. Procedure

We simulate 1,000 periods and use the last 970 for estimation. Simulations are run 500 times, and each scenario assumes that, for the first four periods, the economy is in the zero steady state (y* = i* = π* = 0).

We use the data obtained to see whether our estimations identify the policy rule as it is, or whether they document an illusory presence of more sophisticated versions of the Taylor rule: rules that are forecast based, have embedded interest rate smoothing, or respond to the growth rates of the fundamentals.

For each of the 500 simulations, we estimate the different rules and then report the averages of the estimated coefficients, standard errors, adjusted R-squared, Durbin-Watson (DW) statistics, and sums of squared residuals (SSR) across the 500 scenarios.23

Rules are estimated using GMM robust to serial correlation,24 with the usual suspects (lagged values of interest rates, inflation, and the output gap) as instruments,25 as in Clarida, Gali, and Gertler (2000). We follow standard practice and estimate the interest rate rule as a single equation.

As mentioned above, we simulate two different cases: one where interest rates are set in the absence of any policy shocks, and one where interest rates are affected by a serially correlated policy shock. In the second case, we estimate the rules first excluding and then including an autoregressive component.

In each of the cases, we first estimate the true rule, i.e., a rule based on lagged values of inflation and the output gap. Then, we estimate rules with a misspecified horizon, where the interest rate depends on expected inflation and the output gap. Next, we include the lagged interest rate among the regressors, and, finally, we add variables such as output gap growth and the inflation differential.

C. Results. No Policy Shock Scenario

When we evaluate rules with correctly specified timing of the fundamentals, we correctly identify the coefficients of the rule, assigning zero values to every irrelevant variable included, such as the lagged interest rate, the inflation differential, and output gap growth. We obtain an adjusted R-squared of 1.00, an SSR of 0.0, and a DW statistic of 2.00.
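The exact recovery in the no-shock, correct-timing case can be sketched with a stripped-down regression (the fundamentals here are arbitrary stand-ins, not the simulated model’s series):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in fundamentals: the point is only that a rule with correctly
# specified (lagged) timing and no policy shock is recovered exactly.
T = 500
pi_lag = rng.standard_normal(T)    # lagged inflation, as seen by the rule
y_lag = rng.standard_normal(T)     # lagged output gap
i = 1.5 * pi_lag + 0.5 * y_lag     # "true" rule, no shock

X = np.column_stack([pi_lag, y_lag])
coef, res, rank, sv = np.linalg.lstsq(X, i, rcond=None)
# coef recovers (1.5, 0.5) up to numerical precision; the SSR is zero.
```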

However, when rules with incorrectly specified timing of the fundamentals are evaluated, we obtain incorrect but plausible-looking estimates. Table 2 shows the results of estimating a rule with a simple functional form when the true rule is based on lags.

Table 2. Simple Functional Form (True Rule Based on Lagged Fundamentals)

HRZN k   INFL           GAP            SSR       Adj R2   DW
-1       1.50 (0.00)    0.50 (0.00)    0.0       1.00     2.00
0        1.50 (0.01)    0.41 (0.01)    411.45    0.98     1.20
2        1.50 (0.02)    0.18 (0.04)    1757.68   0.93     0.37
4        1.49 (0.04)    -0.10 (0.07)   3012.62   0.87     0.23

Note: Standard errors are in parentheses.

The results in Table 2 should be read as follows. Each row corresponds to the horizon k of the fundamentals. The true rule has horizon −1, since the rule used in the simulations sets the interest rate as a function of lagged inflation and output. When the horizon is 4, instead of estimating a rule based on lagged output and inflation, we estimate a rule based on the fundamentals expected four quarters ahead.

The general form of the estimated rule is as follows, where k is the horizon indicated in the table:

i_t = C_π E_t π̄_{t+k} + C_y E_t y_{t+k} + ζ_t.

For each horizon, the table reports the average coefficients with their average standard errors in parentheses, followed by the main statistics for the specification.

All the rules produce a very high adjusted R-squared, while the DW statistic deteriorates sharply as the horizon of the variables increases. Notice that the rule reported in the last row of Table 2 describes a forward-looking (one-year-ahead) policy, in which the policymaker eyes inflation, ignores movements in the expected output gap (the output gap coefficient is statistically insignificant), and takes other events into account (collected in a judgmental policy shock ζ). This rule, therefore, represents a sort of inflation-targeting rule:

i_t ≈ 1.5 E_t π̄_{t+4} + ζ_t.

We use this example to illustrate how this “reduced-form” effect works. Note that forecast inflation and the forecast output gap can be represented as sums of the respective leads and error terms. These leads can, in turn, be approximated to differing degrees by the lags – inflation better, the output gap worse. Substituting these into the rule allows an almost precise recovery of the true rule:
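Given the coefficients of the true rule used in the simulations, the recovered specification should be close to (a reconstruction):

```latex
i_t = 1.00 + 1.50 \, \pi_{t-1} + 0.50 \, y_{t-1} + \eta_t
```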

where the last term represents the sum of the “original” policy error ζ, the forecast errors (i.e., the differences between the leads and the expectations), and the residuals from approximating the leads of inflation and the output gap by their lags.26

Next, using the same data we generated earlier, we estimate a very popular version of the rule, in which the policymaker smooths the path of the interest rate: he or she has in mind the interest rate target prescribed by the original Taylor rule but avoids big jumps in the value of the instrument. The estimated rule therefore has the following form:
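With $\rho$ the smoothing parameter, this partial-adjustment form can be written as follows (our notation, consistent with the columns of Table 3, which report $\rho$ under FFR(-1) and the long-run responses under INFL and GAP):

```latex
i_t = \rho \, i_{t-1} + (1-\rho)\left(c + \alpha \, E_t \pi_{t+k} + \beta \, E_t y_{t+k}\right) + \zeta_t
```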

Again, when the timing of the fundamentals is specified correctly, the rule is identified precisely, as it should be. However, once the forecasts are used, an illusion of significant smoothing appears. Once more, the value of the inflation coefficient looks very reasonable, while the output gap coefficient declines, albeit staying positive. Thus, a not very careful researcher might claim the following policy setting, as in the last row of Table 3:
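Reading the coefficients off the last row of Table 3, such a claim would amount to (a reconstruction):

```latex
i_t = 0.53 \, i_{t-1} + 0.47\left(c + 1.64 \, E_t \pi_{t+4} + 0.34 \, E_t y_{t+4}\right) + \zeta_t
```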

Table 3. Smoothing Functional Form (True Rule Based on Lagged Fundamentals)

HRZN k    FFR(-1)        INFL           GAP             SSR        Adj R2    DW
  -1      0.00 (0.00)    1.50 (0.00)    0.50 (0.00)       0.00     1.00      2.00
   0      0.40 (0.01)    1.52 (0.01)    0.53 (0.01)     154.75     0.99      2.35
   2      0.59 (0.02)    1.61 (0.03)    0.58 (0.06)     462.04     0.98      0.93
   4      0.53 (0.04)    1.64 (0.05)    0.34 (0.12)    1017.71     0.96      0.48

Standard errors in parentheses.

Such a setting fits well with our understanding of monetary policy: it is forward looking, sufficiently active in responding to inflation and the output gap, moves the instrument cautiously, and contains a judgmental element. A good fit would allow the production of some nice charts and some reasonable historical evidence.

As with the “inflation-targeting” rule (8), we illustrate the mechanics of the illusion by replacing the forecasts with lag-based approximations of the leads of inflation and the output gap, after which we again almost arrive at the true specification:

In the literature, the coefficient on the lagged interest rate is usually estimated to be large, about 0.7–0.8 (see Rudebusch, 2002), and quite a few papers have tried to explain this overcautiousness on the part of central bankers. Our results suggest that such caution could be just a statistical illusion, caused by misspecification of the rule, in particular by incorrect specification of the timing of the fundamentals.
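The mechanics can be illustrated with a minimal simulation sketch. This is not the authors' model: the AR(1) processes for the fundamentals, the persistence of 0.9, and the shock scales are illustrative assumptions. The true rule uses lagged fundamentals and an i.i.d. policy shock, yet regressing the interest rate on its own lag and the contemporaneous fundamentals produces a sizable "smoothing" coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
phi = 0.9  # assumed persistence of the fundamentals

# AR(1) fundamentals (inflation and output gap)
pi = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    pi[t] = phi * pi[t - 1] + rng.normal(scale=0.5)
    y[t] = phi * y[t - 1] + rng.normal(scale=0.5)

# True lag-based Taylor rule with an i.i.d. policy shock zeta
zeta = rng.normal(scale=0.25, size=T)
i = np.zeros(T)
i[1:] = 1.0 + 1.5 * pi[:-1] + 0.5 * y[:-1] + zeta[1:]

# Misspecified "smoothing" regression: i_t on a constant, i_{t-1}, pi_t, y_t
X = np.column_stack([np.ones(T - 2), i[1:-1], pi[2:], y[2:]])
b = np.linalg.lstsq(X, i[2:], rcond=None)[0]
print(b)  # b[1], the coefficient on i_{t-1}, comes out spuriously positive
```

Because the lagged interest rate embeds the once-lagged fundamentals, it helps predict the lagged fundamentals that actually drive the rate, and least squares rewards it with a positive coefficient even though the true rule contains no smoothing.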

In addition to the rules in Tables 2 and 3, we could suggest others that fit our simulated data just as well. Similar to Table 1, we compile a list of rules that describe the same data but have a very different interpretation of the monetary policy conduct. The first three rules below come from line 1 of Table 2 and the last lines of Table 2 and Table 3. The rest of the rules come from additional estimations.

The three new rules also produce a good fit and have inflation coefficients close to the true value (within the 1.47–1.57 range), but they assume that policymakers closely follow not only the levels, but also the growth, of the fundamentals (expected or contemporaneous).

Most of these rules produce very low DW statistics. This signals positive serial correlation in the error terms, and a thorough econometric analysis would therefore likely reject most of them. Nevertheless, it seems that many researchers in the field tend to report results consistent with their a priori beliefs rather than treating such results with due skepticism.

D. Results: Serially Correlated Policy Shock Scenario

What we documented in the previous subsection is a reduced-form effect similar to the one Orphanides (2001) discusses. The true rule has lagged inflation and the lagged output gap in it. The illusion arises because these lagged fundamentals, together with the lagged interest rate, can closely approximate the expected fundamentals.

In this subsection, we describe a different – “high-persistence” – mechanism that leads to similar illusory effects. We use data obtained by simulating the model with a Taylor rule that includes an autoregressive policy shock (see page 14).
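Schematically, with θ the autoregressive parameter (our notation), the policy shock follows the standard AR(1) form:

```latex
\zeta_t = \theta \, \zeta_{t-1} + u_t, \qquad u_t \ \text{i.i.d.}
```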

First, we estimate the rules without including an autoregressive (AR) component and present the results in Table 4. They are very similar to those reported in Table 2. Misspecifying the horizon of the fundamentals still yields reasonable results, both in terms of fit and coefficients, though not in terms of DW statistics.

Table 4. Simple Functional Form, Omitted AR1 Component (True Rule Based on Lagged Fundamentals and Serially Correlated Policy Shock)

HRZN k    INFL           GAP             SSR        Adj R2    DW
  -1      1.47 (0.02)    0.44 (0.02)     778.97     0.97      0.18
   0      1.47 (0.02)    0.33 (0.03)    1171.72     0.96      0.51
   2      1.45 (0.03)    0.05 (0.04)    2351.80     0.91      0.32
   4      1.41 (0.04)   -0.32 (0.07)    3395.79     0.87      0.25

Standard errors in parentheses.

The results in Table 5, however, demonstrate that a researcher who does not realize that the policymaker is responding to serially correlated disturbances, and who estimates the rule based only on lagged inflation, the lagged output gap, and the lagged interest rate, will find apparent evidence of interest rate smoothing. The illusion of smoothing arises already at the k = -1 horizon, driven by the serially correlated policy shock, as indicated by the low DW statistics.
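The high-persistence mechanism can be sketched in the same illustrative setup as before (assumed AR(1) fundamentals and parameter values, not the authors' model): even with the correct k = -1 timing, an AR(1) policy shock makes the lagged interest rate enter the regression spuriously.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi, theta = 2000, 0.9, 0.9  # assumed persistence of fundamentals and of the shock

pi = np.zeros(T)
y = np.zeros(T)
zeta = np.zeros(T)
for t in range(1, T):
    pi[t] = phi * pi[t - 1] + rng.normal(scale=0.5)
    y[t] = phi * y[t - 1] + rng.normal(scale=0.5)
    zeta[t] = theta * zeta[t - 1] + rng.normal(scale=0.25)  # serially correlated policy shock

# True lag-based Taylor rule
i = np.zeros(T)
i[1:] = 1.0 + 1.5 * pi[:-1] + 0.5 * y[:-1] + zeta[1:]

# Correct timing (k = -1) but omitted AR(1) component, with a lagged interest rate added
X = np.column_stack([np.ones(T - 2), i[1:-1], pi[1:-1], y[1:-1]])
rho = np.linalg.lstsq(X, i[2:], rcond=None)[0][1]
print(rho)  # apparent "smoothing", driven purely by the persistent shock
```

Here the lagged interest rate is informative only because it contains the lagged shock, which predicts the current shock; the estimated coefficient is nonetheless comfortably positive.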

Table 5. Smoothing Functional Form, Omitted AR1 Component (True Rule Based on Lagged Fundamentals and Serially Correlated Policy Shock)

HRZN k    FFR(-1)        INFL           GAP            SSR       Adj R2    DW
  -1      0.52 (0.03)    1.50 (0.02)    0.60 (0.03)    357.95    0.99      0.18
   0      0.60 (0.02)    1.52 (0.02)    0.61 (0.03)    357.32    0.99      1.54
   2      0.69 (0.02)    1.61 (0.04)    0.66 (0.08)    539.25    0.98      1.17
   4      0.67 (0.06)    1.67 (0.06)    0.48 (0.16)    847.05    0.97      0.80

Standard errors in parentheses.

Once the horizon of the fundamentals is misspecified, the coefficient on the lagged interest rate becomes even larger than in both the lagged specification of the rule and the results reported in Table 3; this happens because the reduced-form and high-persistence effects are now both at work.

The last specification is worth flagging as it is very similar to that reported in the literature, with a large coefficient on the lagged interest rate (as mentioned above, the suggested range is 0.7–0.8; see Rudebusch, 2002), coefficients on the inflation and the output gap similar to those originally suggested by Taylor, and a very good fit:
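Reading the coefficients off the last row of Table 5, such a specification would be approximately (a reconstruction):

```latex
i_t = 0.67 \, i_{t-1} + 0.33\left(c + 1.67 \, E_t \pi_{t+4} + 0.48 \, E_t y_{t+4}\right) + \zeta_t
```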

Our simulation exercise suggests that such results may be illusory, explained by the misspecification of the horizon of the fundamentals included in the rule and by the omission of the autoregressive policy shock, which could reflect a serially correlated measurement error.27

Inclusion of the AR(1) component allows a precise identification of the rule, as long as the horizon is specified correctly (see Table 6). Misspecification of the horizon significantly reduces coefficients, though adjusted R-squared and DW statistics remain high.

Table 6. Simple Functional Form, Included AR1 Component (True Rule Based on Lagged Fundamentals and Serially Correlated Policy Shock)

HRZN k    INFL           GAP             AR(1)           SSR        Adj R2    DW
  -1      1.50 (0.02)    0.50 (0.01)    0.92 (0.01)     125.87     1.00      2.00
   0      1.62 (0.08)    0.98 (0.12)    0.86 (0.03)    1079.57     0.96      2.20
   2      1.17 (0.05)   -0.45 (0.10)    0.83 (0.02)     823.19     0.97      1.29
   4      1.06 (0.00)   -1.14 (0.18)    0.89 (0.00)    1517.58     0.94      1.52

Standard errors in parentheses.

Inclusion of both the AR(1) component and the lagged interest rate produces results without any clear patterns and with rather large dispersion across the simulated scenarios. The likely explanation is a correlation between the implied interest rate differential and the AR(1) component. The only exception is, again, the case with a correctly specified horizon, which allows exact identification of the rule.

To illustrate complications arising when one tries to distinguish between serial correlation of the policy error and interest rate smoothing, we conduct a test similar to that described in Rudebusch (2002). We estimate a nested equation that allows for testing two hypotheses – one a hypothesis of a serially correlated (SC) policy shock and the other of interest rate smoothing (SM):
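One specification consistent with the description below, with $\bar{\imath}_t$ denoting the Taylor-rule prescription, is (a reconstruction; the original display may differ):

```latex
i_t = \bar{\imath}_t + \rho_1 \, i_{t-1} - \rho_2 \, \bar{\imath}_{t-1} + \varepsilon_t,
\qquad
\bar{\imath}_t = c + \alpha \, E_t \pi_{t+k} + \beta \, E_t y_{t+k}
```

Under pure smoothing, ρ2 = 0; under a pure AR(1) policy shock with parameter θ, ρ1 = ρ2 = θ.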

If the rule includes a smoothing component but not a serially correlated policy shock, then the estimation should yield a zero coefficient ρ2 on the “nest” variable. If the rule includes a serially correlated policy shock but not a smoothing component, then the nest variable equals the lagged interest rate net of the lagged policy shock. Hence, ρ1 and ρ2 should be equal (and equal to the autoregressive parameter of the policy shock) in order to cancel the effect of the lagged interest rate.

The idea is to run this test at different horizons and see whether misspecification of the horizon leads to different conclusions. The results, presented in Table 7 along with P-values for the two hypotheses, are similar to those discussed above. When the horizon is specified correctly (k = -1), the hypothesis H1SM is rejected, but the hypothesis H1SC is not, as should be the case. At the zero horizon, both hypotheses are rejected, while at higher horizons, neither is.

Table 7. Testing Smoothing Versus Serially Correlated Policy Shock (True Rule Based on Lagged Fundamentals and Serially Correlated Policy Shock)

HRZN k    INFL            GAP             FFR(-1)         NEST            SSR       Adj R2    DW      SM      SC
  -1      1.50 (0.04)     0.50 (0.01)     0.92 (0.02)     0.92 (0.02)    125.93    1.00      2.00    0.00    0.50
   0      1.10 (0.12)     1.25 (0.15)     0.91 (0.02)     0.81 (0.03)    1,378     0.95      2.11    0.00    0.00
   2      0.34 (0.14)     0.36 (0.16)     0.71 (0.03)    -0.62 (4.45)    1,162     0.96      1.25    0.35    0.14
   4     -0.10 (0.40)    -0.54 (0.00)     0.64 (0.00)     1.27 (7.00)    1,548     0.94      1.22    0.12    0.14

Standard errors in parentheses; the SM and SC columns report P-values.

This, again, shows that, with a misspecified horizon, the two patterns cannot be statistically distinguished. In particular, for forward-looking specifications, neither of the hypotheses – serial correlation or smoothing behavior – is rejected.28

IV. Conclusion

The vast literature on simple monetary policy rules has yet to settle on empirical benchmark rules that can be used for policy recommendations, and on simple theoretical benchmark rules robust to model and output gap misspecifications. It is therefore important to take stock by surveying how Taylor rules are estimated and used for policy recommendations in the current literature (in closed economies, with a focus on empirical U.S. models), and by showing the problems involved in estimating these rules.

We start by documenting potential abuses of Taylor rules. This happens when these rules (and, in general, simple policy rules) are used as a basis for projections, or as a guide for policymakers, when they are misspecified, incorrectly estimated, or not optimal. We provide a list of factors found in the literature that could contribute to such mistakes, ranging from straightforward empirical difficulties, such as short samples or serial correlation of the variables, to problems of a theoretical nature, such as the emergence of multiple equilibria under Taylor rules or their nonoptimality.

Even more important, we document a particular estimation problem – a misleading impression of certain types of behavior – by simulating a simple model paired with a lag-based Taylor rule and using the resulting data to estimate the monetary policy rule. The estimation results point to several other types of rules suggested in the literature, including forward-looking monetary policy and/or a rule with interest rate smoothing. These results demonstrate that there may be a high degree of statistical illusion, creating an impression of a monetary policy more sophisticated than it actually is.

Our results are consistent with Orphanides (2001), who demonstrates that changing the horizon of the fundamentals in the rule changes the estimated coefficients; they are also consistent with Rudebusch (2002), who argues that a serially correlated policy shock may generate an illusion of interest rate smoothing.

The results of this paper and the evidence from the literature suggest that conclusions that are “too big” may be drawn from evidence that is “too little.” That could have negative consequences.

The most straightforward example is a situation where a policymaker is incorrectly advised to smooth the nominal interest rate path, and therefore not to react to new developments too strongly. Faced with an inflationary shock, such a policymaker will end up accommodating inflation, thereby unnecessarily destabilizing the economy.

Another example is a possible miscommunication between a central bank and the private sector. If the latter builds its expectations on a particular version of a Taylor rule that is erroneously believed to be the central bank’s policy, it may misjudge the future path of interest rates and make wrong investment decisions.

The least obvious example is a misallocation of central bank resources in developing economies. Implementation of an expectation-based Taylor rule with interest rate smoothing and no judgment implies a rather different strategy for a central bank striving to build its capacity than implementation of a lag-based rule with no smoothing and a substantial amount of judgment.

Although we limit the survey and the model simulations to the United States, the conclusions of this paper apply widely, since these types of misestimation are not unique to U.S. data but are merely best documented for them. In other countries, especially in emerging market and developing countries, it is even harder to specify simple policy rules correctly, owing to major structural breaks, the implementation of “stop-and-go” policies, and the lack of consistent data. Moreover, since in these countries the exchange rate channel is one of the strongest mechanisms of monetary policy transmission, the rule becomes more complicated and, hence, the potential for misestimation is even higher.

The goal of this paper is to criticize not Taylor rules, but their careless use. Taylor rules are very useful for summarizing monetary policy in rather simple terms. This simplicity, however, limits the scope of the policy advice one can offer based on these rules. In particular, Taylor rules are unlikely to be very useful in dramatic circumstances, when, for example, asset bubbles burst, exchange rate volatility rises, or capital flows reverse.

As Svensson (2003) points out, there is a substantial gap between the research on policy rules and the actual policymaking process. Policymakers (as well as many researchers) believe that simple rules should not be followed mechanically (Taylor, 1993 and 1999), but rather used as guidelines. Fed Chairman Alan Greenspan emphasized this point in several of his speeches, in particular, at the American Economic Association meetings in January 2004, noting that “prescriptions of formal rules can, in fact, serve as helpful adjuncts to policy, as many of the proponents of these rules have suggested. But at crucial points, like those in our recent policy history … simple rules will be inadequate as either descriptions or prescriptions for policy.” Taylor (1999, p. 14) himself states that “when I proposed a specific simple policy rule in 1992, I suggested that the rule be used as a guideline along with several other policy rules.”

Guidelines would imply nontrivial deviations of actual policy from Taylor rule prescriptions in unusual circumstances (for example, stock market crashes and political developments). Confusingly enough, many academic papers provide seemingly convincing evidence of policymakers closely and successfully following sophisticated policy rules instead of using them as mere guidelines. In this paper, we argue that such examples may be merely a statistical illusion, supporting the case for Taylor rules being more of a guideline than an explicit framework. In particular, we claim that the sophisticated versions of the Taylor rule documented in the literature could be a statistical misrepresentation driven by incorrect specification of the horizon of the fundamentals and by the omission of serially correlated policy shocks or measurement errors.

This type of problem is not unique to monetary economics. Writing on microeconomics and industrial organization, Klemperer (2003) notes in the abstract of his paper “Using and Abusing Economic Theory” that “economic theory is often abused in practical policy-making. There is frequently excessive focus on sophisticated theory at the expense of elementary theory; too much economic knowledge can sometimes be a dangerous thing.”

As Blinder (1997, p. 17) notes, “central banking in practice is as much art as science.” He points out that science helps in practicing this art. In a field as essential as monetary economics, it is also important to ensure that science does not drift too far away from the art.

APPENDIX I

Description of the Taylor Rules Specified in Table 1

1. Taylor (1993)

The 1987:Q1–1992:Q3 sample is covered. Output gap is constructed as deviations of log output from its linear trend. To estimate the trend, the 1984:Q1–1993:Q2 sample is used. Inflation is measured as the annual growth rate of the GDP deflator. This rule is hypothetical, not estimated.

2. Clarida, Gali and Gertler (2000). Page 157, Table II

The 1979:Q3–1996:Q4 sample is covered. Output gap is based on the estimates of the Congressional Budget Office (CBO). The paper also used estimates constructed as deviations of log output from its quadratic trend. Inflation is measured using the annualized rate of change of the GDP deflator between two subsequent quarters. The rule is estimated using GMM.

3. Orphanides (2001). Page 980, Table 6

The 1987:Q1–1993:Q4 sample is covered. Real-time data are used − actual Fed forecasts for inflation (GDP deflator) and output gap calculated at the moment of decision taking (from the so-called Greenbook). The rule is estimated with OLS.

4. Ball and Tchaidze (2002). Page 111, Table II

The 1987:Q4–1995:Q4 sample is covered. Inflation is measured using annual growth of the GDP deflator. Time series for NAIRU are calculated based on real time estimations available in the economic literature. The rule is estimated using OLS.

5. Orphanides and Williams (2003). Page 46, Table 5 (Generalized rule optimized for s=0)

This rule is optimized in a forward-looking model estimated for the U.S. economy, for the 1969–2002 period. The model assumes that policymakers have perfect information about the natural rate of interest and unemployment.

This rule differs from the others in that the first four in fact attempt to describe actual policy. However, we decided to include it because of its interesting functional form. Moreover, since many observers believe (or used to believe) that actual Fed behavior was optimal, these rules could be somewhat comparable.

References

Auray, Stephane, and Patrick Feve, 2003, “Money Growth and Interest Rate Rules: Is There an Observational Equivalence?” IDEI Working Paper (Toulouse, France: Institut d’Economie Industrielle).

Auray, Stephane, and Patrick Feve, 2004, “Do Models with Exogenous Money Supply Produce a Taylor Rule Like Behavior?” IDEI Working Paper (Toulouse, France: Institut d’Economie Industrielle).

Baba, Yoshihisa, David F. Hendry, and Ross M. Starr, 1992, “The Demand for M1 in the USA, 1960–1988,” Review of Economic Studies, Vol. 59 (January), pp. 25–61.

Ball, Laurence, 2000, “Near-Rationality and Inflation in Two Monetary Regimes,” NBER Working Paper No. 7988 (Cambridge, Massachusetts: National Bureau of Economic Research).

Ball, Laurence, and Robert Tchaidze, 2002, “The Fed and the New Economy,” American Economic Review, Papers and Proceedings, Vol. 92 (May), pp. 108–14.

Benhabib, Jess, Stephanie Schmitt-Grohe, and Martin Uribe, 2001, “Monetary Policy and Multiple Equilibria,” American Economic Review, Vol. 91 (March), pp. 167–86.

Blinder, Alan S., 1986, “More on the Speed of Adjustment in Inventory Models,” Journal of Money, Credit and Banking, Vol. 18 (August), pp. 355–65.

Blinder, Alan S., 1997, “What Central Bankers Could Learn from Academics—and Vice Versa,” Journal of Economic Perspectives, Vol. 11, No. 2, pp. 3–19.

Bryant, Ralph C., Peter Hooper, and Catherine L. Mann, eds., 1993, Evaluating Policy Regimes: New Research in Empirical Macroeconomics (Washington: Brookings Institution).

Carlstrom, Charles T., and Timothy S. Fuerst, 2001, “Timing and Real Indeterminacy in Monetary Models,” Journal of Monetary Economics, Vol. 47, No. 2, pp. 285–98.

Chadha, Jagjit S., Lucio Sarno, and Giorgio Valente, 2004, “Monetary Policy Rules, Asset Prices and Exchange Rates,” Staff Papers, International Monetary Fund, Vol. 51, No. 3, pp. 529–52.

Clarida, Richard, Jordi Galí, and Mark Gertler, 1998, “Monetary Policy Rules in Practice: Some International Evidence,” European Economic Review, Vol. 42 (June), pp. 1033–67.

Clarida, Richard, Jordi Galí, and Mark Gertler, 1999, “The Science of Monetary Policy: A New Keynesian Perspective,” Journal of Economic Literature, Vol. 37, No. 4, pp. 1661–707.

Clarida, Richard, Jordi Galí, and Mark Gertler, 2000, “Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory,” Quarterly Journal of Economics, Vol. 115 (February), pp. 147–80.

English, William B., William R. Nelson, and Brian P. Sack, 2003, “Interpreting the Significance of the Lagged Interest Rate in Estimated Monetary Policy Rules,” Contributions to Macroeconomics, Vol. 3, No. 1, Article 5 (Berkeley Electronic Press).

Federal Reserve Board, 1995, Federal Open Market Committee Transcripts, FOMC Meeting, Jan. 31–Feb. 1. Available via Internet: http://www.federalreserve.gov/fomc/transcripts

Gerlach-Kristen, Petra, 2004, “Interest-Rate Smoothing: Monetary Policy Inertia or Unobserved Variables?” Contributions to Macroeconomics, Vol. 4, No. 1, Article 3 (Berkeley Electronic Press).

Giannoni, Marc, and Michael Woodford, 2003a, “Optimal Interest-Rate Rules: I. General Theory,” NBER Working Paper No. 9419 (Cambridge, Massachusetts: National Bureau of Economic Research).

Giannoni, Marc, and Michael Woodford, 2003b, “Optimal Interest-Rate Rules: II. Applications,” NBER Working Paper No. 9420 (Cambridge, Massachusetts: National Bureau of Economic Research).

Greenspan, Alan, 2004, “Risk and Uncertainty in Monetary Policy,” American Economic Review, Vol. 94, No. 2, pp. 33–40.

Griliches, Zvi, 1967, “Distributed Lags: A Survey,” Econometrica, Vol. 35 (January), pp. 16–49.

Isard, Peter, Douglas Laxton, and Ann-Charlotte Eliasson, 1999, “Simple Monetary Policy Rules Under Model Uncertainty,” International Tax and Public Finance, Vol. 6 (November), pp. 537–77.

Jondeau, Eric, Herve Le Bihan, and Clementine Galles, 2004, “Assessing Generalized Method of Moments Estimates of the Federal Reserve Reaction Function,” Journal of Business and Economic Statistics, Vol. 22, No. 2, pp. 225–39.

Judd, John P., and Glenn D. Rudebusch, 1998, “Taylor’s Rule and the Fed: 1970–1997,” Economic Review, Federal Reserve Bank of San Francisco, No. 3, pp. 3–16.

King, Robert G., and Andre Kurmann, 2002, “Expectations and the Term Structure of Interest Rates: Evidence and Implications,” Economic Quarterly, Federal Reserve Bank of Richmond, Vol. 88, No. 4, pp. 49–95.

Klemperer, Paul, 2003, “Using and Abusing Economic Theory,” CEPR Discussion Paper No. 3813 (London: Center for Economic Policy Research).

Kozicki, Sharon, 1999, “How Useful Are Taylor Rules for Monetary Policy?” Economic Review, Federal Reserve Bank of Kansas City, Vol. 84, No. 2, pp. 5–33.

Lansing, Kevin, 2002, “Real-Time Estimation of Trend Output and the Illusion of Interest Rate Smoothing,” Economic Review, Federal Reserve Bank of San Francisco, pp. 17–34.

McCallum, Bennett, 1999a, “Issues in the Design of Monetary Policy Rules,” in Handbook of Macroeconomics, Vol. 1C, ed. by John Taylor and Michael Woodford (Amsterdam: Elsevier Science, North-Holland).

McCallum, Bennett, 1999b, “Recent Developments in the Analysis of Monetary Policy Rules,” Federal Reserve Bank of Saint Louis Review, Vol. 81 (November/December), pp. 3–11.

Mehra, Yash P., 1993, “The Stability of the M2 Demand Function: Evidence from an Error-Correction Model,” Journal of Money, Credit and Banking, Vol. 25 (August), pp. 455–60.

Mehra, Yash P., 1999, “A Forward-Looking Monetary Policy Reaction Function,” Economic Quarterly, Federal Reserve Bank of Richmond, Vol. 85, No. 2, pp. 33–53.

Minford, Patrick, Francesco Perugini, and Naveen Srinivasan, 2002, “Are Interest Rate Regressions Evidence for a Taylor Rule?” Economics Letters, Vol. 76 (June), pp. 145–50.

Okun, A. M., 1962, “Potential GNP: Its Measurement and Significance,” in American Statistical Association 1962 Proceedings of the Business and Economic Section (Washington: American Statistical Association).

Onatski, Alexei, and James H. Stock, 2002, “Robust Monetary Policy Under Model Uncertainty in a Small Model of the U.S. Economy,” Macroeconomic Dynamics, Vol. 6, No. 1, pp. 85–110.

Orphanides, Athanasios, 2001, “Monetary Policy Rules Based on Real-Time Data,” American Economic Review, Vol. 91, No. 4, pp. 964–85.

Orphanides, Athanasios, 2003a, “The Quest for Prosperity Without Inflation,” Journal of Monetary Economics, Vol. 50, No. 3, pp. 633–63.

Orphanides, Athanasios, 2003b, “Historical Monetary Policy Analysis and the Taylor Rule,” Journal of Monetary Economics, Vol. 50, No. 5, pp. 983–1022.

Orphanides, Athanasios, and John C. Williams, 2003, “Robust Monetary Policy Rules with Unknown Natural Rates,” Finance and Economics Discussion Paper Series No. 2003–11 (Washington: Board of Governors of the Federal Reserve System).

Roberts, John M., 1998, “Inflation Expectations and the Transmission of Monetary Policy,” Finance and Economics Discussion Papers No. 1998–43 (Washington: Board of Governors of the Federal Reserve System).

Rogoff, Kenneth, 2003, “Globalization and Global Disinflation,” Economic Review, Federal Reserve Bank of Kansas City, Vol. 88, No. 4, pp. 45–79.

Romer, David, 2001, Advanced Macroeconomics, 2nd ed. (Boston: McGraw-Hill/Irwin).

Rudebusch, Glenn D., 2001, “Is the Fed Too Timid? Monetary Policy in an Uncertain World,” Review of Economics and Statistics, Vol. 83 (May), pp. 203–17.

Rudebusch, Glenn D., 2002, “Term Structure Evidence on Interest Rate Smoothing and Monetary Policy Inertia,” Journal of Monetary Economics, Vol. 49, No. 6, pp. 1161–87.

Rudebusch, Glenn D., and Lars E. O. Svensson, 1999, “Policy Rules for Inflation Targeting,” in Monetary Policy Rules, ed. by John B. Taylor (Chicago: University of Chicago Press).

Stock, James H., and Mark W. Watson, 2003, “Has the Business Cycle Changed? Evidence and Explanations,” in Monetary Policy and Uncertainty: Adapting to a Changing Economy (Kansas City, Missouri: Federal Reserve Bank of Kansas City).

Svensson, Lars E. O., 2003, “What Is Wrong with Taylor Rules? Using Judgment in Monetary Policy through Targeting Rules,” Journal of Economic Literature, Vol. 41 (June), pp. 426–77.

Svensson, Lars E. O., and Michael Woodford, 2004, “Implementing Optimal Policy through Inflation-Forecast Targeting,” CEPR Discussion Paper No. 4229 (London: Center for Economic Policy Research).

Swanson, Eric T., 2004, “Signal Extraction and Non-Certainty-Equivalence in Optimal Monetary Policy Rules,” Macroeconomic Dynamics, Vol. 8 (February), pp. 27–50.

Taylor, John B., 1993, “Discretion Versus Policy Rules in Practice,” Carnegie-Rochester Conference Series on Public Policy, Vol. 39 (December), pp. 195–214.

Taylor, John B., ed., 1999, Monetary Policy Rules (Chicago: University of Chicago Press).

Tchaidze, Robert, 2004, “The Greenbook and U.S. Monetary Policy,” IMF Working Paper 04/213 (Washington: International Monetary Fund).

Trehan, Bharat, and Tao Wu, 2004, “Time Varying Equilibrium Real Rates and Monetary Policy Analysis,” Federal Reserve Bank of San Francisco Working Paper No. 2004-10.

Walsh, Carl E., 2004, “Robustly Optimal Explicit Instrument Rules and Robust Control: An Equivalence Result,” Journal of Money, Credit and Banking, Vol. 36 (December), pp. 1105–13.

Woodford, Michael, 2001, “The Taylor Rule and Optimal Monetary Policy,” American Economic Review, Papers and Proceedings, Vol. 91 (May), pp. 232–37.

We are grateful to Nils Bjorksten, Oya Celasun, Robert Flood, Jim Morsink, Glenn Rudebusch, Silvia Sgherri, and the participants of the various IMF seminar series as well as SMYE 2004, LAMES 2004 and LACEA 2004 conferences for their comments and suggestions. We thank Philippe Karam and Gauti Eggertsson for technical discussions. We are especially indebted to Keith Küster, Lucio Sarno, and Niamh Sheridan for extensive comments. All remaining errors are our own.

A search in the EconLit database for the keyword “monetary policy rules” for 2000–03 returns 361 published articles, or an average of 90 a year.

The Economist uses Taylor rule prescriptions when describing the stance of monetary policy in the United States, United Kingdom, and euro area. Monetary Trends, published by the St. Louis Federal Reserve Bank, regularly reports on Taylor rule components.

Taylor (1993) identified potential output empirically with a linear trend, while other papers use quadratic, Hodrick-Prescott trends, or other more sophisticated techniques.

For estimations of monetary policy rules with asset prices and exchange rates in industrial countries, see Chadha, Sarno, and Valente (2004).

The central bank expectations considered are either formed within a model, as in Clarida, Gali, and Gertler (2000), or actual estimates of the central bank in real time, as done by Orphanides (2001). Mehra (1999) has estimated short-term interest rate as a function of inflation expectations contained in bond rates.

Lag-based rules are not necessarily backward-looking, since lags serve as indicators of future values (see Tchaidze, 2004).

Reasons mentioned in the literature include model uncertainty, fear of disrupting capital markets, loss of credibility from a sudden large policy reversal, the need for consensus building for a policy change, and the exploitation by the central bank of the dependency of demand on expected future interest rates, signaling the central bank’s intentions toward the general public.

The rules with the unemployment gap appear more attractive as the natural rate of unemployment seemed easier to measure. During the mid–1990s, it was a common belief that NAIRU was 6 percent flat. The productivity growth of the late 1990s and arrival of the so-called New Economy have begun, only with a substantial delay, to challenge this belief (see Ball and Tchaidze, 2002).

One could derive versions of the Taylor rule as a solution to an optimization problem, where policymakers are minimizing a loss function expressed in terms of the weighted average of inflation and output gap variances (see for example, Woodford, 2001).

In terms of stabilizing inflation around an inflation target without causing unnecessary output gap variability.

The two questions commonly get mixed, though they are somewhat independent from each other. Properly formulated, the second question would sound as follows: given the way the monetary authorities are operating, what is the consequential response of the interest rate to movements in inflation and the output gap?

Svensson (2003, p. 429): “Monetary policy by the world’s more advanced central banks these days is at least as optimizing and forward-looking as the behavior of the most rational private agents. I find it strange that a large part of the literature on monetary policy still prefers to represent central bank behavior with the help of mechanical instrument rules.”

As the definition of the rule (see equation (1) on page 5) shows, one may view the Taylor rule as a more sophisticated version of an equilibrium relationship among the three variables (also known as a Fisher equation, i = r + π).

Minford, Perugini and Srinivasan (2002) and Auray and Feve (2003 and 2004) demonstrate that money supply rules may be observationally equivalent to Taylor rules.

King and Kurmann (2002) analyze the term structure of U.S. interest rates, and Baba, Hendry, and Starr (1992) analyze U.S. money demand. Both papers find that U.S. interest rates are stationary in first differences, and therefore nonstationary in levels, i.e., I(1) series. However, Mehra (1993) finds in money demand studies that U.S. interest rates are stationary, and Clarida, Gali, and Gertler (2000, p. 154) note that they treat interest rates as stationary series, “an assumption that we view as reasonable for the postwar U.S., even though the null of a unit root in either variables is often hard to reject.”

Two-step, iterative, and continuously updating.

In fact, one may wonder whether the Fed had a constant inflation target during Volcker’s chairmanship (see Tchaidze, 2004).

“It seems to me that a reaction function in which the real funds rate changes by roughly equal amounts in response to deviations of inflation from a target of 2 percent and to deviations of actual from potential output describes tolerably well what this Committee has done since 1986.” (Federal Reserve Board, 1995, pp. 43–44)

Orphanides (2003a) shows that, contrary to other researchers who claim that U.S. monetary policy in the 1970s was “bad,” leading to high inflation, policy was in fact “good” but based on “bad,” misleading data.

Such a phenomenon has been documented in the literature before. See Griliches (1967) and Blinder (1986).

With a forward-looking rule, it becomes rather difficult to disentangle different effects and to understand the misestimation problems. Therefore, we build up from the simplest rule.

These are the most commonly reported statistics in this field.

We use a two-stage least squares procedure with GMM standard errors (Bartlett kernel, fixed bandwidth, and no prewhitening). We have chosen this technique as it ensures fast convergence. However, we found that the conclusions are the same when using alternative estimators, such as heteroskedasticity- and autocorrelation-consistent GMM (quadratic kernel, variable Newey-West bandwidth, and no prewhitening). The standard deviations of the estimated parameters across 500 simulated scenarios double, but still remain very small.
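The estimator described here can be illustrated with a self-contained numerical sketch, assuming simulated data throughout (the variable names, coefficient values, and bandwidth of 4 are illustrative, not the paper's actual dataset): two-stage least squares point estimates combined with Bartlett-kernel, fixed-bandwidth HAC standard errors built from the instrument moment conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated setting loosely mimicking a policy rule: z are instruments,
# x_endog is an endogenous regressor (its error v also enters the y equation).
z = rng.normal(size=(n, 3))
v = rng.normal(scale=0.5, size=n)
x_endog = z @ np.array([0.6, 0.3, 0.1]) + v
X = np.column_stack([np.ones(n), x_endog])        # constant + regressor
Z = np.column_stack([np.ones(n), z])              # constant + instruments
beta_true = np.array([1.0, 1.5])
y = X @ beta_true + 0.5 * v + rng.normal(scale=0.3, size=n)

# Stage 1: project the regressors on the instruments.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
# Stage 2: regress y on the fitted values -> 2SLS point estimates.
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]

# HAC long-run variance of the moments Z'u with a Bartlett kernel,
# fixed bandwidth L_bw (illustrative choice).
u = y - X @ beta_2sls
g = Z * u[:, None]                                # per-observation moments
L_bw = 4
S = g.T @ g / n
for l in range(1, L_bw + 1):
    w = 1.0 - l / (L_bw + 1.0)                    # Bartlett weights
    G = g[l:].T @ g[:-l] / n
    S += w * (G + G.T)

# Sandwich variance of the 2SLS (GMM with weight (Z'Z/n)^{-1}) estimator.
W = np.linalg.inv(Z.T @ Z / n)
G_m = Z.T @ X / n
bread = np.linalg.inv(G_m.T @ W @ G_m)
V = bread @ (G_m.T @ W @ S @ W @ G_m) @ bread / n
se = np.sqrt(np.diag(V))
print(beta_2sls, se)
```

The same moment matrix S is what a quadratic-kernel, variable-bandwidth variant would replace, leaving the point estimates unchanged.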

Four lags of the instruments were used, as is common in the literature (Rudebusch, 2002; and Clarida, Gali, and Gertler, 2000) and as indicated by the test of overidentifying restrictions.

The residual term does not disappear since its subcomponents emerge at different stages.

Lansing (2002) also suggests that real-time measurement errors could create the illusion of a lagged federal funds rate term in estimated policy rules. Trehan and Wu (2004) find that estimated policy rules will tend to exaggerate the degree of interest rate smoothing if the monetary authorities target a time-varying equilibrium real interest rate.

The debate on the presence of a lagged interest rate in estimated policy rules is not yet settled. Rudebusch (2002) argues in favor of a rule with a serially correlated policy shock, as opposed to a specification with partial interest rate adjustment. English, Nelson, and Sack (2003), using a different nested specification, report evidence of both types of behavior. Gerlach-Kristen (2004, p. 1) also finds that “both seem to matter, but that policy inertia appears to be less important than suggested by the existing literature.”
