Designing a Simple Loss Function for Central Banks
Does a Dual Mandate Make Sense?

Yes, it makes a lot of sense. This paper studies how to design simple loss functions for central banks, as parsimonious approximations to social welfare. We show, both analytically and quantitatively, that simple loss functions should feature a high weight on measures of economic activity, sometimes even larger than the weight on inflation. Two main factors drive our result. First, stabilizing economic activity also stabilizes other welfare relevant variables. Second, the estimated model features mitigated inflation distortions due to a low elasticity of substitution between monopolistic goods and a low interest rate sensitivity of demand. The result holds up in the presence of measurement errors, with large shocks that generate a trade-off between stabilizing inflation and resource utilization, and also when ensuring a low probability of hitting the zero lower bound on interest rates.



1 Introduction

The formulation of objectives for central banks is a classical issue in economics. The natural premise when society assigns an objective to its central bank is that it should maximize welfare for its citizens. The recent global financial crisis and sovereign debt crisis in Europe, which caused central banks to slash interest rates and unleash large purchases of government bonds to prevent deflationary spirals and another great depression, have stimulated a vibrant discussion about the role of central banks in the economy and what their objectives should be. The purpose of this paper is to contribute to this important discussion.

In the 1970s and 1980s, variable and high rates of price inflation led many countries to delegate the conduct of monetary policy to “instrument-independent” central banks. Drawing on the lessons learned, many societies gave their central banks a clear mandate to pursue price stability and instrument independence to achieve it.1 Advances in academic research, notably the seminal work of Rogoff (1985) and Walsh (1995), supported a strong focus on price stability as a means to enhance the independence and credibility of monetary policymakers. As discussed in further detail in Svensson (2010), an overwhelming majority of these central banks also adopted an explicit inflation target to further strengthen credibility and facilitate accountability. For example, the mandate of the European Central Bank is to maintain price stability, without any explicit responsibility for economic activity.2

One exception to the reigning central banking practice is the U.S. Federal Reserve, which since 1977 has been assigned the so-called “dual mandate”, requiring it to “promote maximum employment in a context of price stability”. Only as recently as January 2012 did the Fed announce an explicit long-run inflation target, while also making clear its intention to keep a balanced approach to mitigating deviations of both inflation and employment from target.

Our reading of the academic literature to date, perhaps most importantly the seminal work by Woodford (2003), is that a welfare-maximizing central bank should assign only a small weight to measures of economic activity in its objective.3 In the same model framework, Blanchard and Galí (2007) established that stabilizing inflation allows the central bank to simultaneously stabilize welfare-relevant measures of economic activity—a property known as the “divine coincidence”. Taken together, these findings suggest that the strong focus on inflation stabilization by prominent central banks like the ECB is sufficient for macroeconomic stabilization, and that the focus on resource utilization in the Fed’s mandate is redundant or even harmful.

But is a dual mandate really harmful, or could it in fact benefit societies? In this paper we revisit this issue. We argue that a high weight on standard measures of economic activity—e.g. the output gap—is beneficial for society as it serves as an overall proxy for other welfare relevant variables. To make this point, we start with an analysis of standard flexible inflation targeting mandates in the canonical New Keynesian sticky price and wage model of Erceg, Henderson and Levin (2000), EHL henceforth. This model is convenient for our purposes, because it allows us to derive some general analytical results for the output gap weight in a standard inflation-output gap loss function in a framework with both wage and price stickiness.

However, the analysis with the canonical model demonstrates that the merits of a dual mandate depend on the structure of the economy (distortions in goods and labor markets) as well as on the shocks creating a trade-off between inflation and output gap stabilization. We therefore complement this analysis with numerical simulations in an estimated medium-scale model. Specifically, we use the workhorse Smets and Wouters (2007) empirical model—SW henceforth—of the U.S. economy to examine how a simple objective for the central bank should be designed in order to approximate the welfare of households in the model economy as closely as possible. The SW model is very useful for our purposes, as it represents a prominent example of how the U.S. economy can be described by a system of dynamic equations consistent with optimizing behavior. As such, it should be less prone to the Lucas (1976) critique than other prominent studies on optimal monetary policy that are based on backward-looking models (see e.g. Rudebusch and Svensson, 1999, and Svensson, 1997).4 Moreover, many of the existing papers that use models based on optimizing behavior rely on simple calibrated models without capital formation.5 Even though their policy recommendations are model consistent, their relevance may be questioned given the simplicity of these models and the fact that they have not been estimated.

By complementing our analytical results in the canonical EHL model with a normative analysis in an empirically realistic model, this paper provides both theoretically coherent and empirically relevant policy recommendations. We can, for instance, examine if the Federal Reserve’s strong focus on resource utilization improves households’ welfare relative to a simple mandate that focuses more heavily on inflation. Although the exact details of our results are based on U.S. data, we believe they are rather general and relevant for many other economies. Both VAR evidence (see e.g. Angeloni et al., 2003) and estimated New Keynesian models (see e.g. Christiano, Motto and Rostagno, 2010) suggest that the transmission of monetary policy, the structure of the economy, and shocks are very similar in the European economies.

The benchmark to which we compare the simple mandates is Ramsey policy. Even though it would be optimal to implement the Ramsey policy directly, the overviews of central banking mandates by Reis (2013) and Svensson (2010) show that no advanced country has asked its central bank to implement such a policy for society. Instead, many central banks are mandated to pursue a simple objective that involves only a small number of economic variables.6 We believe there are several important reasons for society to assign the central bank a simple mandate. First, it would be for all practical purposes infeasible to describe the utility-based welfare criterion for an empirically plausible model, as it would include too many targets in terms of variances and covariances of different variables.7 Instead, a simple objective facilitates communication of policy actions with the public and makes the conduct of monetary policy more transparent. Second, a simple mandate also enhances accountability of the central bank, which is of key importance. Third and finally, prominent scholars like Svensson (2010) argue that a simple mandate is more robust to model and parameter uncertainty than a complicated state-contingent Ramsey policy.8

Our main findings are as follows. First, analytical results within the simple canonical sticky price-wage model of EHL show that it is often optimal to place a high weight on resource utilization (the output gap) in a simple mandate which omits nominal wage inflation and includes only price inflation and the output gap. In particular, this is the case if the sensitivities of the price and wage Phillips curves to the output gap are of similar magnitude, an assumption which seems to hold empirically (see e.g. Christiano et al., 2010, and Smets and Wouters, 2007). This result is at odds with the conventional wisdom and is essentially driven by the fact that the output gap summarizes the welfare relevant frictions in the goods and labour markets.

Second, in line with the analytical results in the stylized model, our numerical analysis with the SW model shows that a positive weight on any of the typical resource utilization variables like the output gap, the level of output (as deviation from a linear trend), and the growth rate of output improves welfare significantly. And the results hold up when imposing realistic limitations on the extent to which monetary policy makers change policy interest rates. Moreover, among these measures, a suitably chosen weight on the model-consistent output gap delivers the lowest welfare loss. Specifically, we find that in a simple loss function—with the weight on annualized inflation normalized to unity—the optimized weight on the output gap is about 1. This value is considerably higher than the reference value of 0.048 derived in Woodford (2003) and the value of 0.25 assumed by Yellen (2012).9 The high weight on the output gap stems from several empirically relevant characteristics in the estimated model that reduce the importance of inflation relative to the output gap. These include a low elasticity of substitution between monopolistic goods, price indexation to lagged inflation by non-optimizing firms, and a low interest rate sensitivity of demand. In addition, the model features real rigidities which enable it to account jointly for the macroeconomic evidence of a low sensitivity of prices (wages) to marginal costs (labor wedge) and the microevidence of frequent price (wage) re-optimization (3-4 quarters). The moderate degree of price stickiness, in turn, makes inflation fluctuations less costly relative to output fluctuations. Our basic finding that the central bank should respond vigorously to resource utilization is consistent with the arguments in Reifschneider, Wascher and Wilcox (2013) and English, López-Salido and Tetlow (2013).

In the SW model, the chosen weight for the output gap has important implications for inflation volatility, as the model features a prominent inflation-output gap trade-off along the efficient frontier as defined in the seminal work of Taylor (1979) and Clarida, Galí and Gertler (1999). At first glance, this inflation-output gap trade-off may appear to contradict Justiniano, Primiceri and Tambalotti (2013), who argue that there is no important trade-off between stabilizing inflation and the output gap. However, the different findings can be reconciled by recognizing that the key drivers behind the trade-off in the SW model—the price- and wage-markup shocks—are absent in the baseline model analyzed by Justiniano et al. (2013).10 While our reading of the literature is that considerable uncertainty remains about the role of these inefficient shocks as drivers of business cycle fluctuations, we want to stress that our results hold regardless. In particular, if inefficient shocks are irrelevant for business cycle fluctuations, then stabilizing inflation or output is approximately equivalent and attaching a high weight to output is still optimal. And as long as inefficient shocks do play some role—as in SW—then the high weight on output stabilization becomes imperative for the simple mandate to mimic Ramsey policy.

Our third important finding is that a large weight on the output gap remains optimal even when we assume that the gap is measured with significant errors in real time. When we calibrate the size and persistence of the measurement errors for the output gap following the work by Orphanides and Williams (2002), the optimal weight on the output gap drops only modestly from 1.04 to 0.96. In addition, even when the output gap is measured with significant errors in real time, it remains a better target variable than output as deviation from a linear trend or the GDP growth rate. So the common “measurement error” counterargument against targeting the output gap is without merit according to our analysis.
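The measurement-error experiment can be made concrete with a small sketch: the real-time gap equals the true gap plus a persistent AR(1) error. The function name and coefficient values below are our illustrative assumptions, not the Orphanides and Williams (2002) calibration used in the paper.

```python
import random

def realtime_gap(true_gap, rho_m=0.8, sigma_v=0.5, seed=1):
    """Observed gap = true gap + m_t, with m_t = rho_m * m_{t-1} + v_t."""
    rng = random.Random(seed)
    m = 0.0
    observed = []
    for g in true_gap:
        m = rho_m * m + rng.gauss(0.0, sigma_v)  # persistent measurement error
        observed.append(g + m)
    return observed

# With a zero true gap, the observed series consists purely of measurement
# error, so a central bank targeting the real-time gap reacts partly to noise.
noisy = realtime_gap([0.0] * 8)
```

The persistence parameter rho_m matters as much as the noise volatility: a highly persistent error shifts the perceived gap for many quarters in a row, which is why real-time mismeasurement is the standard counterargument against output-gap targeting.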

Fourth and finally, in line with Levin et al. (2005), we find that a loss function with nominal wage inflation and the hours gap provides as good an approximation to the households’ true welfare function as a simple standard inflation-output gap objective, or even a better one.11 As is the case with the inflation-output gap based simple objective, the hours gap—defined as the difference between actual and potential hours worked per capita—should be assigned a large weight in such a loss function. The reason why targeting labor market variables provides a better approximation of the Ramsey policy is that the labor market in the SW model features large nominal wage frictions and markup shocks, and it is equally or potentially even more important to correct these frictions in factor markets than to correct the distortions in the product markets (sticky prices and price markup shocks).

This paper proceeds as follows. We start in Section 2 by describing how to compute the Ramsey policy and to evaluate the alternative monetary policies. In Section 3, we present the analytical results with the EHL model. In Section 4, we turn to the numerical analysis with the SW model. The robustness of the results in the SW model along some key dimensions is subsequently discussed in Section 5. Finally, Section 6 provides some concluding remarks and suggestions for further research.

2 The Utility-Based Welfare Criterion

We start by deriving and discussing our utility-based welfare criterion. Rotemberg and Woodford (1998) showed that—under the assumption that the steady state satisfies certain efficiency conditions—the objective function of households can be transformed into a (purely) quadratic function using the first-order properties of the constraints. With this quadratic objective function, optimization subject to linearized constraints is sufficient to obtain accurate results from a normative perspective. Some of the required efficiency assumptions are unpalatable, however, as exemplified by the positive subsidies needed to make the steady state of the market equilibrium equivalent to that of the social planner.12 Therefore, many researchers—including Benigno and Woodford (2012)—extended the LQ transformation to a general setting without such subsidies. Benigno and Woodford (2012) demonstrated that the objective function of the households can be approximated by a (purely) quadratic form:

    E_0 Σ_{t=0}^∞ β^t U(X_t) ≈ E_0 Σ_{t=0}^∞ β^t X_t' W^H X_t + constant,    (1)

where X_t is an N × 1 vector with the model variables measured as their deviation from the steady state; accordingly, X_t' W^H X_t is referred to as the quadratic approximation of the household utility function U(X_t). We discuss the second-order approximation of the utility function and the resource constraints around the distorted steady state in more detail in Appendix A.
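To fix ideas about where the purely quadratic form comes from, it may help to see the mechanics in a one-variable textbook sketch (this is only an illustration of the expansion step, not the paper's full derivation, which also approximates the constraints to second order). For a CRRA period utility and log deviations \hat c_t = \log(C_t/\bar C),

```latex
\frac{C_t^{1-\sigma}}{1-\sigma}
  = \frac{\bar{C}^{1-\sigma}}{1-\sigma}\, e^{(1-\sigma)\hat{c}_t}
  \approx \frac{\bar{C}^{1-\sigma}}{1-\sigma}
    \left[ 1 + (1-\sigma)\hat{c}_t + \tfrac{1}{2}(1-\sigma)^{2}\hat{c}_t^{2} \right].
```

The linear term in \hat c_t is the one the LQ transformation eliminates using second-order approximations of the constraints; the surviving quadratic terms, collected across all model variables, form the matrix W^H.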

Throughout the analysis, we assume that the central bank operates under commitment. We believe this is a good starting point for two reasons. First, the evidence in Debortoli, Maih and Nunes (2014) and Debortoli and Lakdawala (2016) suggests that the Federal Reserve operates with a high degree of commitment, at least before the zero lower bound became binding.13 Second, this assumption makes our analysis more comparable with the literature on simple interest rate rules, which also imply some form of central bank commitment.

We define the Ramsey policy as the policy that maximizes (1) subject to the N − 1 constraints of the economy. While N is the number of variables, there are only N − 1 constraints provided by the SW model because the monetary policy rule is omitted. Unlike in the efficient steady-state case of Rotemberg and Woodford (1998), second-order terms of the constraints do influence the construction of the W^H matrix in (1), and as detailed in Appendix C, we make assumptions on the functional forms for the various adjustment functions (for example, the capital utilization rate, the investment adjustment cost function, and the Kimball aggregators) that are consistent with the linearized behavioral equations in SW.

Since the constant term in (1) depends only on the deterministic steady state of the model, which is invariant across the different policies considered in this paper, the optimal policy implemented by a Ramsey planner can be solved as

    X̃_t^*(W^H) = arg min E_0 Σ_{t=0}^∞ β^t X_t' W^H X_t,    (2)

where the minimization is subject to the N − 1 constraints in the economy, which are omitted for brevity. Following Marcet and Marimon (2012), the Lagrange multipliers associated with the constraints become state variables. Accordingly, the vector X̃_t ≡ [X_t, ϖ_t] now includes the Lagrange multipliers ϖ_t as well. For expositional ease, we denote these laws of motion more compactly as X̃_t^*(W^H).

Using (1) to evaluate welfare would require taking a stance on the initial conditions. Doing so is particularly challenging when Lagrange multipliers are part of the vector of state variables because these are not readily interpretable. We therefore adopt the unconditional expectations operator as a basis for welfare evaluation.14 The loss under Ramsey optimal policy is then defined by

    Loss^R = E[ (X̃_t^*(W^H))' W^H (X̃_t^*(W^H)) ].    (3)
Our choice of an unconditional expectation as the welfare measure is standard in the literature (see for instance Woodford, 2003). Furthermore, when the discount factor is close to unity—as is the case in our calibration—unconditional and conditional welfare are also quite similar.15

The Ramsey policy is a useful benchmark. Obviously, in theory a society could design a mandate equal to the Ramsey objective (1). But in practice societies do not; instead, central banks are typically subject to a mandate involving only a few variables. To capture this observation, we assume that society provides the central bank with a loss function of the form

    L_t^CB = X_t' W^CB X_t,    (4)
where W^CB is a sparse matrix with only a few non-zero entries. The matrix W^CB summarizes the simple mandates and will be specified in detail in our analysis. Given a simple mandate, the optimal behavior of the central bank is

    X̃_t^*(W^CB) = arg min E_0 Σ_{t=0}^∞ β^t X_t' W^CB X_t.    (5)
When the simple mandate does not coincide with the Ramsey policy, we have that W^CB ≠ W^H and therefore that X̃_t^*(W^CB) ≠ X̃_t^*(W^H).16 To compute the extent to which the simple mandate of the central bank approximates optimal policy, one can calculate its associated loss according to the formula:

    Loss^CB = E[ (X̃_t^*(W^CB))' W^H (X̃_t^*(W^CB)) ].    (6)
The welfare performance of the simple mandate is then found by taking the difference between Loss^CB in eq. (6) and Loss^R in eq. (3). In our presentation of the results, we express this welfare difference in consumption equivalent variation (CEV) units as follows:

    CEV = (Loss^CB − Loss^R) / (C̄ U_C|_{s.s.}),    (7)

where C̄ U_C|_{s.s.} can be interpreted as how much welfare increases when consumption in the steady state is increased by one percent. That is, CEV represents the percentage point increase in households’ consumption, in every period and state of the world, that makes them in expectation equally well-off under the simple mandate as they would be under Ramsey policy.17 Moreover, (7) makes it clear that our choice to neglect the policy-invariant constant in (1) when deriving the Ramsey policy in (2) is immaterial for the results in our paper, since all alternative policies are evaluated as differences from the loss under Ramsey.
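As a purely numerical illustration of the CEV calculation in eq. (7), the conversion amounts to a single division; the loss numbers and the value of C̄ U_C below are hypothetical placeholders, not model output.

```python
# Hypothetical inputs (illustrative only, not the paper's estimates):
loss_cb = 0.0125   # unconditional loss under the simple mandate, eq. (6)
loss_r = 0.0100    # unconditional loss under the Ramsey policy, eq. (3)
cbar_uc = 1.0      # welfare gain of raising steady-state consumption by 1%

# eq. (7): extra consumption (in percent, every period and state) needed to
# make households as well off under the simple mandate as under Ramsey.
cev = (loss_cb - loss_r) / cbar_uc
```

Because only the loss difference enters the numerator, any constant common to both policies drops out, which is why the policy-invariant term in (1) can be ignored.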

So far we have proceeded under the assumption that the law governing the behavior of the central bank specifies both the variables and the weights in the quadratic objective, i.e. W^CB in (4). But in practice, the mandates of central banks are only indicative and not entirely specific about the weights that should be attached to each of the target variables. A straightforward way to model this is to assume that society designs a law Ω that constrains the weights on some variables to be equal to zero, without imposing any restriction on the exact weight to be assigned to the remaining variables. When determining the simple mandate consistent with the law Ω, we assume the central bank is benevolent and selects a weighting matrix, W^CB*, which minimizes the expected loss of the society. Formally,

    W^CB* = arg min_{W ∈ Ω} E[ (X_t^*(W))' W^H (X_t^*(W)) ],    (8)

where the weighting matrix W^H is defined by (1).

To sum up, our methodology can examine the performance of the simple mandates that central banks are typically assigned. This statement is true whether the simple mandate specifies both the target variables and the exact weights, or whether the target variables are specified but the weights are loosely defined. In this latter case, our exercise can inform central banks of the optimal weights, and ultimately society about whether bounds on certain weights should be relaxed or not. Finally, it should be noted that our approach described above is similar in spirit to the extensive literature that has studied how simple interest rate rules should be designed to mimic optimal policy as closely as possible; see for example Juillard et al. (2006), Levin et al. (2005), Kim and Henderson (2005), and Schmitt-Grohé and Uribe (2007). The key difference from this literature is that we focus on simple mandates, and the variables that should be included in the simple mandate are not necessarily those that make simple rules mimic the Ramsey policy.

3 Analytical Results in a Canonical New Keynesian Model

This section considers the canonical sticky-price and sticky-wage model with fixed capital by Erceg, Henderson and Levin (2000) to build intuition for the analysis with the workhorse SW model with endogenous capital. The key insight is that stabilizing the output gap also helps stabilize additional welfare relevant variables. For this reason, it is desirable to attach a significant weight to the output gap in simple mandates that do not include all the welfare relevant targets. We show analytically that under certain conditions the weight on the output gap should be infinite. More generally, we derive an approximate expression for the weight on the output gap, which can be easily calculated using a few simple statistics.

The EHL model is characterized by the equations

    π_t^p = β E_t π_{t+1}^p + κ_p y_t^gap + ϑ_p ω_t^gap,    (9)
    π_t^w = β E_t π_{t+1}^w + κ_w y_t^gap − ϑ_w ω_t^gap,    (10)
    ω_t^gap = ω_{t−1}^gap + π_t^w − π_t^p − (ω_t^n − ω_{t−1}^n).    (11)

Eqs. (9) and (10) are the New-Keynesian Phillips curves describing the evolution of price inflation (π_t^p) and wage inflation (π_t^w) as functions of the output gap (y_t^gap) and the real wage gap (ω_t^gap). The latter variable, defined in eq. (11), measures the deviation of the actual real wage ω_t from its frictionless counterpart (ω_t^n). β denotes the discount factor, and the composite parameters κ_p and ϑ_p (κ_w and ϑ_w) are both inversely related to the probability of the firm (household) not being able to re-optimize its price (nominal wage), implying that their values fall when the degree of price (wage) stickiness increases.18

The quadratic approximation to the household utility around a non-distorted steady state is given by

    L_t^R = (π_t^p)^2 + λ_w^opt (π_t^w)^2 + λ_y^opt (y_t^gap)^2,    (12)

where λ_w^opt ≡ (ε_w(1−α)/ε_p)(ϑ_p/ϑ_w) and λ_y^opt ≡ (σ_c + (σ_l+α)/(1−α))(ϑ_p/ε_p) denote the weights on wage inflation and the output gap relative to price inflation. The parameters ε_p and ε_w are the elasticities of substitution between goods and labour varieties, respectively, σ_c denotes the intertemporal elasticity of substitution of consumption, σ_l the inverse of the Frisch elasticity of labour supply, and α is the (fixed) capital income share.

For our purposes, we consider that the central bank is assigned the following simple mandate,

    L_t^CB = (π_t^p)^2 + λ_y (y_t^gap)^2,    (13)

which does not include one of the target variables in the social loss function L_t^R, namely wage inflation.19 Next, we study how to select the appropriate weight λ_y so that the actual central bank policy under this simple but suboptimal mandate is as close as possible to the optimal policy (i.e. minimizes L_t^CB − L_t^R).

A critical feature of this economy is that it is not possible to simultaneously stabilize the output gap and the two inflation rates. For example, in response to changes in the natural real wage—e.g. due to changes in productivity—perfectly stabilizing the output gap requires a change in the real wage, and thus a change in either prices or nominal wages (or both). As a result, as can be seen from eqs. (9)-(11), it is not feasible to simultaneously achieve y_t^gap = 0, π_t^p = 0, and π_t^w = 0.

Nevertheless, combining eqs. (9) and (10) shows that the composite inflation index ϑ_w π_t^p + ϑ_p π_t^w evolves according to

    ϑ_w π_t^p + ϑ_p π_t^w = β E_t [ϑ_w π_{t+1}^p + ϑ_p π_{t+1}^w] + (ϑ_w κ_p + ϑ_p κ_w) y_t^gap.    (14)

This equation implies that perfectly stabilizing the output gap leads to perfect stabilization of the composite inflation index ϑ_w π_t^p + ϑ_p π_t^w, in which a higher weight is attached to the inflation rate of the sector of the economy where nominal rigidities are more severe. Thus, stabilizing the output gap also mitigates the costs of nominal rigidities both in the goods and in the labor markets.
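The cancellation behind the composite index can be verified mechanically. The snippet below checks, for random parameter draws, that weighting the (reconstructed) non-expectational parts of the two Phillips curves by ϑ_w and ϑ_p removes the real-wage-gap term, leaving only the output-gap term, under the sign convention that the wage gap raises price inflation and lowers wage inflation.

```python
import random

rng = random.Random(42)
for _ in range(100):
    # Random positive Phillips-curve coefficients (illustrative draws)
    kp, kw, vp, vw = (rng.uniform(0.01, 0.2) for _ in range(4))
    y, wgap = rng.uniform(-1, 1), rng.uniform(-1, 1)

    pi_p = kp * y + vp * wgap   # price PC drivers, eq. (9) minus expectations
    pi_w = kw * y - vw * wgap   # wage PC drivers, eq. (10) minus expectations

    # vartheta_w * pi_p + vartheta_p * pi_w: the wage-gap terms cancel
    composite = vw * pi_p + vp * pi_w
    assert abs(composite - (vw * kp + vp * kw) * y) < 1e-12
```

The check confirms why eq. (14) involves only the output gap: the ϑ-weighted combination of the two curves is orthogonal to the real wage gap by construction.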

In what follows, we study under which circumstances such a policy is actually desirable. Because a complete analytical solution for the optimal simple mandate is infeasible, we present our results in two exercises. First, we solve the dynamic model in a case with equal slope of the price and wage Phillips curves. Second, we solve a static version of the model, with arbitrary slopes of the two Phillips curves.

3.1 A Dynamic Model with Equal Slope of Price and Wage Phillips Curves

Let us first consider the benchmark case of an equal slope of the price and wage Phillips curves, that is κ_p = κ_w ≡ κ. According to the findings in Smets and Wouters (2007) for the U.S. and Christiano, Motto and Rostagno (2010) for the euro area (and the U.S.), this case is arguably empirically relevant, and it has the virtue that the model admits an analytical solution with λ_w^opt = ϑ_p/ϑ_w. As shown in Appendix B, the optimal Ramsey policy can in this case be described by the targeting rule

    ϑ_w π_t^p + ϑ_p π_t^w = 0,    (15)

which combined with eq. (14) implies that in equilibrium y_t^gap = 0 and ϑ_w π_t^p + ϑ_p π_t^w = 0 in all periods t ≥ 0.20

The intuition for this result is as follows. In principle, tolerating some output gap may require smaller adjustments of prices and wages, and thus reduce the costs associated with nominal rigidities. However, as can be seen from eqs. (9) and (10), when the output gap is fully stabilized, price and wage inflation move in opposite directions, and the ratio of their movements equals −ϑ_p/ϑ_w. If this ratio coincides with the weight on the variance of nominal wage inflation in the loss function, λ_w^opt, there is no incentive to change the relative volatility of the two inflation rates. In addition, since κ_p = κ_w = κ, a unitary change in the output gap changes the two inflation rates by the same amount κ. Even though the volatility of one of the inflation rates may decrease, the welfare costs of nominal rigidities—(π_t^p)^2 + λ_w^opt (π_t^w)^2—would necessarily increase. As a result, the central bank does not have any incentive to allow for fluctuations in the output gap, and strict output gap targeting is optimal.

This reasoning and the conditions in eqs. (14) and (15) allow us to derive analytically the value of λ_y that maximizes households’ welfare in the simple mandate given by eq. (13). When doing so, we find that it is optimal to assign an infinite weight to output gap stabilization, i.e. λ_y = ∞. Moreover, it turns out that the simple mandate in this case also replicates the optimal policy, so L_t^R = L_t^CB in equations (12) and (13).21 Any other weight λ_y in the simple mandate implies a welfare loss for households. For instance, there is a welfare loss if the central bank focuses exclusively on price stability or assigns a low/negligible weight to the output gap.

3.2 A Static Model with Arbitrary Slopes of Price and Wage Phillips Curves

When the sensitivity of price and wage inflation to the output gap differs, full stabilization of the output gap is generally not optimal. An analytical expression for the optimal weight λy is not available in such a general case. However, it is still possible to get some insights about the factors affecting the magnitude of λy within a static version of the model.

In particular, suppose there is only one period (t = 0) and there is no uncertainty. Also, assume the initial conditions are ω_{−1} = ω_{−1}^n = 0, and the terminal conditions are π_1^p = π_1^w = 0. Since the economy is back in steady state in period 1, there is no scope for managing inflation expectations, and hence there is no distinction between commitment and discretionary policies.

Under these assumptions, eq. (11) can be used to substitute for ω_t, so that eqs. (9) and (10) simplify to

    π_0^p = κ̃_p y_0^gap − ϑ̃_p ω_0^n,    (16)
    π_0^w = κ̃_w y_0^gap + ϑ̃_w ω_0^n,    (17)

where κ̃_p ≡ (ϑ_p κ_w + κ_p(1+ϑ_w))/(1+ϑ_p+ϑ_w), ϑ̃_p ≡ ϑ_p/(1+ϑ_p+ϑ_w), κ̃_w ≡ (ϑ_w κ_p + κ_w(1+ϑ_p))/(1+ϑ_p+ϑ_w), and ϑ̃_w ≡ ϑ_w/(1+ϑ_p+ϑ_w). Minimizing (12) subject to the latter two equations implies that under the optimal Ramsey policy

    y_0^gap = ψ^opt ω_0^n,    (18)

where

    ψ^opt ≡ (κ̃_p ϑ̃_p − λ_w^opt κ̃_w ϑ̃_w) / ((κ̃_p)^2 + λ_w^opt (κ̃_w)^2 + λ_y^opt).    (19)
In this case, it is easy to show that a central bank which follows the simple mandate (13) implements the optimal equilibrium if (and only if) the weight on the output gap satisfies

    λ_y = (κ̃_p ϑ̃_p)/ψ^opt − (κ̃_p)^2.    (20)
The last equation indicates that an approximate measure for λ_y can be inferred from simple statistics. In particular, λ_y is inversely related to the parameter ψ^opt in eq. (19), which determines the volatility of the output gap under the optimal policy according to eq. (18). ψ^opt, in turn, crucially depends on the difference between the parameters of the price and wage inflation Phillips curves, i.e. κ̃_p ϑ̃_p − κ̃_w ϑ̃_w λ_w^opt, as can be seen from eq. (19).22 Intuitively, in economies where wage and price inflation have similar impacts on real activity, stabilizing the output gap helps achieve the optimal balance between the volatility of the two inflation rates. By contrast, in economies where prices are much more rigid than wages (or where the price elasticity of output demand is higher than the wage elasticity of labour demand)—so that ϑ̃_p κ̃_p is low—the optimal weight on the output gap should be low.

For instance, under the baseline calibration in Galí (2008), where wages are more rigid than prices, the parameters of the Phillips curves are κ̃_p = 0.02 and ϑ̃_p = 0.04, while κ̃_w = 0.03 and ϑ̃_w = 0.01. These values imply that ψ^opt = 0.0021, and that the output gap should receive a weight that is about 6.5 times the weight on (annualized) inflation—arguably a much larger weight than under the conventional wisdom. For the full dynamic model we find that the optimal weight under commitment is even higher (38.5). Hence, the simplifying assumptions we made in order to obtain an analytical solution are not the driver of the high weight on the output gap.
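The logic of this section can also be reproduced numerically. The sketch below sets up a stylized static economy in the spirit of eqs. (16)-(17), with made-up coefficients rather than the Galí calibration above, and checks that a grid search over the mandate weight recovers the weight implied by matching the central bank's first-order condition to the Ramsey one; both first-order conditions are re-derived inside the code, so the example stands on its own.

```python
import numpy as np

# Illustrative static coefficients (assumptions, not the paper's calibration)
kp, tp = 0.02, 0.04           # price PC: pi_p = kp*y - tp*w_n
kw, tw = 0.03, 0.01           # wage PC:  pi_w = kw*y + tw*w_n
lam_w, lam_y_opt = 0.5, 0.01  # society's weights on wage inflation and the gap

def society_loss(y, w_n):
    """Society's loss, eq. (12)-style, evaluated at the static solution."""
    pi_p = kp * y - tp * w_n
    pi_w = kw * y + tw * w_n
    return pi_p**2 + lam_w * pi_w**2 + lam_y_opt * y**2

def cb_gap(lam, w_n):
    """FOC of a bank minimizing pi_p^2 + lam*y^2 subject to the price PC."""
    return kp * tp * w_n / (kp**2 + lam)

w_n = 1.0  # a unit natural-wage shock
grid = np.linspace(1e-4, 0.1, 200001)
lam_star = grid[np.argmin([society_loss(cb_gap(l, w_n), w_n) for l in grid])]

# Ramsey FOC gives y = psi * w_n; the mandate weight replicating it follows
# by equating cb_gap(lam) to psi and solving for lam.
psi = (kp * tp - lam_w * kw * tw) / (kp**2 + lam_w * kw**2 + lam_y_opt)
lam_closed = kp * tp / psi - kp**2
assert abs(lam_star - lam_closed) < 1e-3
```

The grid search mirrors the paper's general procedure in eq. (8): pick the weight in the restricted mandate that minimizes society's true loss at the resulting equilibrium.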

3.3 Additional Considerations

In more complex models, with several welfare-relevant targets, it is often not possible to replicate the optimal policy by following a simple mandate. Nevertheless, the basic result—that targeting the output gap helps to stabilize additional welfare-relevant variables and is therefore welfare enhancing—remains valid. Consider for instance the standard NK model with sticky prices and partial indexation (ι_p) to past inflation for the non-optimizing firms. In this case it is well known that the true welfare loss function is given by L^R = (π_t − ι_p π_{t−1})^2 + λ_y^opt (y_t^gap)^2. However, suppose now that, following common practice, the central bank does not target the quasi-difference of inflation but simply inflation π_t itself. In this case, it can easily be shown that the Ramsey policy is replicated only when λ_y = ∞ in the simple mandate (13). The intuition for this result is that even though the central bank is not targeting the welfare-relevant quasi-difference of inflation (π_t − ι_p π_{t−1}), it effectively stabilizes the correct inflation variable by stabilizing the output gap.

Similar findings arise in models in which production sectors are heterogeneous in the degree of price stickiness or in the elasticities of substitution across various goods. For instance, in the model of Aoki (2001), if the central bank targets headline rather than core inflation, then stabilizing the output gap in the simple mandate is optimal.23 Moreover, Bodenstein, Erceg and Guerrieri (2008) and Natal (2012) argue that energy price fluctuations are yet another reason why a large weight on the output gap approximates optimal policy well.

Notably, Woodford (2003) acknowledges that output gap stabilization can deliver results very close to welfare optimal policies and has the advantage of producing very robust results under different calibrations. Erceg, Henderson and Levin (2000) also advocated the robustness and efficiency of output gap stabilization in the context of simple rules. However, even if our and their analyses have shown that there are several convincing theoretical arguments why the output gap deserves a large weight in simple mandates, there are also some key arguments not considered thus far that may limit the desirability of stabilizing measures of economic activity. One of them is the presence of inefficient price- and wage-markup shocks. As is shown in Appendix B, the introduction of shocks that create a substantial trade-off between stabilizing inflation and the output gap makes it non-optimal to fully stabilize output gap fluctuations, because doing so would create unwarranted excessive movements in price and wage inflation. An additional important consideration is the presence of measurement errors, which may also considerably limit the benefits of targeting the output gap. These and other issues are explicitly analyzed next in the context of the estimated workhorse SW model.

4 Quantitative Analysis

We now turn to the quantitative analysis within the workhorse model of Smets and Wouters (2007), which is outlined in greater detail in Appendix C. The model includes monopolistic competition in the goods and labor market and nominal frictions in the form of sticky price and wage settings, while allowing for dynamic inflation indexation. It also features several real rigidities: habit formation in consumption, investment adjustment costs, variable capital utilization, and fixed costs in production. The model dynamics are driven by six structural shocks: the two inefficient shocks—a price-markup shock and a wage-markup shock—follow an ARMA(1,1) process, while the remaining four shocks (total factor productivity, risk premium, investment-specific technology, and government spending shocks) follow an AR(1) process. All the shocks are assumed to be uncorrelated, with the exception of a positive correlation between government spending and productivity shocks, i.e. Corr(εtg, εta) = ρag > 0.

Our main departure from the SW model concerns the behavior of the central bank. Rather than considering a Taylor-type interest rate rule and the associated monetary policy shock, we assume that the central bank sets its policies optimally, to best achieve its mandate—i.e. it minimizes its loss function subject to the structural constraints. The model parameters are fixed at the posterior mode of the SW original estimates.24 An alternative approach would be to allow for both parameter and model uncertainty (see e.g. Walsh, 2005). However, we believe it is instructive to start out by performing our exercise in a specific model, under specific parameter values. Throughout the analysis, we discuss the sensitivity of our results to alternative parameterizations.

4.1 Benchmark Results

Table 1 reports our benchmark results. The benchmark simple mandate we consider reflects the standard practice of monetary policy, and is what Svensson (2010) refers to as “flexible inflation targeting.” Specifically, we use the framework in Woodford (2003) and assume that the simple mandate can be captured by the following period loss function

Table 1:

Benchmark Results for “Flexible Inflation Targeting” Mandate in eq. (21).

Note: CEV denotes the consumption equivalent variation (in percentage points) needed to make households indifferent between the Ramsey policy and the simple mandate under consideration according to eq. (7). The “Dual Mandate” refers to a weight of unity for the unemployment gap in the loss function (21), which translates into λa = 0.25 when applying a variant of Okun’s law. Finally, “Optimized Weight” refers to minimization of eq. (6) w.r.t. λa in eq. (21).

where πta denotes the annualized rate of quarterly inflation and xt is a measure of economic activity with λa denoting its corresponding weight.
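In this notation, and with the weight on inflation normalized to unity, the loss function in eq. (21) presumably takes the standard quadratic form (a reconstruction from the surrounding definitions):

```latex
% Presumed form of eq. (21): period loss under "flexible inflation targeting"
L_t = \left(\pi_t^{a}\right)^{2} + \lambda_a\, x_t^{2}
```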

We consider three different measures of economic activity. Our first measure is the model-consistent output gap, ytgap = yt − ytpot, i.e. the difference between actual and potential output, where the latter is defined as the level of output that would prevail if prices and wages were fully flexible and inefficient markup shocks were excluded.25 The second measure we consider is simply the level of output, as deviation from the deterministic labor-augmented trend, i.e. yt − y¯t. Finally, we also consider annualized output growth in the spirit of the work on “speed-limit” policies by Walsh (2003).
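Summarizing, the three candidate activity measures can be written compactly as follows (the factor 4 annualizing quarterly output growth is our assumption):

```latex
% Three candidate measures of economic activity x_t
x_t \in \left\{\; \underbrace{y_t - y_t^{pot}}_{\text{output gap}},\;\;
                 \underbrace{y_t - \bar{y}_t}_{\text{detrended output}},\;\;
                 \underbrace{4\,\Delta y_t}_{\text{annualized output growth}} \;\right\}
```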

The first two rows of Table 1 contain a comparison between two benchmark values of λa. In the first row of Table 1 we set λa = 0.048, corresponding to the welfare-maximizing weight on the output gap in Woodford (2003).26 The second row of Table 1 examines instead the dual mandate. In a recent speech, Yellen (2012) describes the dual mandate through a simple loss function that assigns equal weights to annualized inflation and the unemployment gap (i.e. actual unemployment minus the NAIRU).27 In addition, Yellen stipulates that the Federal Reserve converts the unemployment gap into an output gap using an Okun coefficient of roughly 0.5—a value based on the widely used empirical specification of Okun’s law ut − utpot = −(yt − ytpot)/2. Accordingly, the unit weight on the unemployment gap converts into a weight of λa = 0.25 on the output gap.28
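The conversion from the unemployment-gap weight to λa is simple arithmetic; a minimal sketch (variable names are ours):

```python
# Okun's law in gap form: the unemployment gap moves about half as much
# as the output gap (in absolute value), u_gap = -0.5 * y_gap.
okun_slope = 0.5

# A unit weight on the squared unemployment gap then implies
#   1 * u_gap**2 = (okun_slope * y_gap)**2 = okun_slope**2 * y_gap**2,
# i.e. an output-gap weight of okun_slope squared.
lambda_a = 1.0 * okun_slope**2
print(lambda_a)  # 0.25
```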

As we can see from the second row in Table 1, increasing the weight on real activity from the value of Woodford (2003) to the value consistent with the dual mandate significantly reduces welfare losses, namely by a factor of three for our benchmark measure of economic activity (the output gap), and by about a factor of two for alternative measures (output level and output growth). In all cases, the welfare gains are large compared to similar studies in the monetary policy literature—e.g. larger than the threshold value of 0.05% used by Schmitt-Grohe and Uribe (2007).

The last row in Table 1 displays the results when the weight λa is optimized. The key finding is that the optimal value of λa is much higher than the values considered so far, for all measures of economic activity. For example, the optimized coefficient for the output gap is 1.042. Coincidentally, this is very similar to the unit weight on the unemployment gap as used in Yellen (2012). For the level of output (as deviation from trend), the optimized coefficient is lower (0.5) but still twice as high as implied by the dual mandate. In the case of output growth, the optimized coefficient is even higher (around 2.9), which essentially amounts to a so-called speed-limit regime (see Walsh, 2003). Notably, our analysis shows that adopting a simple mandate with a high weight on any of the resource utilization measures improves welfare relative to targeting the model-based output gap with a low weight—e.g. as in Woodford (2003).29 This is because assigning a high weight to detrended output or output growth in the loss function helps reduce considerably the volatility of the output gap, albeit not to the same extent as when targeting it directly.

To gauge the sensitivity of the CEV with respect to the weight assigned to resource utilization, Figure 1 plots the CEV as a function of λa for the three resource measures. Consistent with the results in Table 1, we see considerable curvature of the CEV function for small values of λa for all three measures. Moreover, for the output gap we see that values of λa between 0.5 and 1.5 perform about equally well, whereas the mandate with detrended output has a higher curvature near the optimum. For output growth, the figure shows that any value above unity yields virtually the same CEV.

Figure 1:

Consumption Equivalent Variation (percentage points) as Function of the Weight (λa) on Economic Activity.

Citation: IMF Working Papers 2017, 164; 10.5089/9781484309278.001.A001

Note: The figure plots the CEV (in %) for the simple mandate with inflation and: output gap (left panel), output level (middle panel), output growth (right panel). The coordinate with an ‘x’ mark shows the CEV for λa = 0.01; the ‘o’ mark shows the CEV for the optimized weight.

To clarify the mechanism behind our results, we follow Taylor (1979), Erceg, Henderson and Levin (1998), and Clarida et al. (1999) and study the main trade-offs involved in stabilizing measures of inflation vs. measures of economic activity through variance frontiers. Figure 2 plots the variance of price or wage inflation (horizontal axis) against measures of economic activity (vertical axis), while letting the weight λa vary from a small (0.01) to a large value (5.00). The slope of the resulting curve is referred to as the trade-off between the two variances. The upper panels refer to the benchmark loss function with price inflation and the output gap. Panel A shows that there is a clear trade-off between stabilizing price inflation and the output gap: a lower volatility of the output gap is always associated with an increase in the volatility of price inflation. By contrast, Panel B shows that there is not necessarily a trade-off between stabilizing the output gap and wage inflation. For example, as long as λa < 0.1, reducing the volatility of the output gap also reduces the volatility of wage inflation. In other words, and consistent with our theoretical results of Section 3, Figure 2 shows that increasing the weight λa up to a value of 0.1 helps stabilize not only the output gap, but also wage inflation—i.e. a welfare-relevant variable not explicitly targeted by the central bank in its loss function. In fact, the volatility of nominal wage inflation remains lower relative to a benchmark strict inflation targeting loss function (λa = 0.01) for values of λa up to 0.4 (not shown in Panel B). This explains why, in this economy, measures of economic activity should receive a relatively high weight in a central bank’s simple mandate that does not include all the welfare-relevant targets.

Figure 2:

Variance Frontier for Alternative Resource Utilization Measures.


Note: The figure plots the variance frontier for the simple mandate with inflation and: output gap (Panel A), output level (Panel C), output growth (Panel D). Panel B shows the variance combination of the output gap and the annualized nominal wage inflation when varying λa for the price inflation-output gap loss function (i.e. same loss function as in Panel A). The coordinate with an ‘×’ mark shows the volatility for λa = 0.01, the ‘o’ mark shows the volatility for the optimized weight, and the ‘+’ mark shows the volatility for λa = 5.

The lower panels of Figure 2 plot variance frontiers when the measure of economic activity is given by the output level and output growth, both in the loss function and in the frontier itself. Panel D shows that the trade-off between stabilizing inflation and economic activity is most favorable when the resource utilization measure is output growth; the variance of annualized output growth can be reduced to nearly 1 percent without Var(πta) increasing by much. Moreover, the flatness of the CEV in the right panel of Figure 1 for values of λa above the optimum can be readily explained: Panel D of Figure 2 shows that such values induce only small changes in the volatilities of inflation and output growth. For detrended output, shown in Panel C, the trade-off is the most pronounced. Accordingly, values of λa above the optimum translate into a higher curvature of the CEV function in Figure 1.

As noted in Section 2, a strength of the methodology used in this paper is that it can handle a non-efficient steady state. The results in Table 1 and Figure 1, however, are robust to allowing for subsidies to undo the steady-state distortions stemming from the presence of external habits, as well as firms’ and households’ monopoly power in price and wage setting. For detrended output and the output gap, the optimized weights are even larger when considering the efficient steady state; for example, λa equals 2.34 with an associated CEV of 0.0119 for the output gap when the steady state is efficient. For output growth, however, the optimized λa is notably lower (0.43). But given the flatness of the CEV function in Figure 1, it is not surprising that the exact weight for output growth can be somewhat sensitive to the specific assumptions. Even so, the optimized weight remains relatively large, reflecting the larger curvature for smaller values of λa. In principle, moving from a distorted to an efficient steady state could make a big difference when we consider a model with relatively large distortions in both goods and labor markets. However, in our model, the surge in steady state output when removing these distortions is to a large extent offset by removing external habit formation, so the efficient steady state level of output is only about 6 percent higher than our distorted steady state.30

4.2 The Importance of Real Activity

The key message from Table 1 is that the rationale for including some measure of real activity in the central bank’s objective is much stronger than previously recognized, either in policy circles or in previous influential academic work (e.g. Woodford (2003) and Walsh (2005)). By perturbing some of the parameters and shocks in the model, we seek to nail down why the model suggests that a high weight on real activity improves household welfare.

We begin the analysis by using the SW parameters in Table A.1 to recompute λa according to the analytic formula provided in Woodford (2003), which, expressed relative to a unit weight on annualized inflation, can be written as

λa = 16 κx (ϕp − 1)/ϕp,

where κx is the coefficient on the output gap in the linearized pricing schedule (i.e. in the New Keynesian Phillips curve), ϕp/(ϕp − 1) is the elasticity of demand for intermediate goods, and the factor 16 = 4^2 converts quarterly into annualized inflation. In the SW model, the NKPC is given by


However, because the SW model features endogenous capital and sticky wages, there is no simple mapping between the output gap and real marginal costs within the fully fledged model. But by dropping capital and the assumption of nominal wage stickiness, we can derive a value of κx = 0.143 in the simplified SW model.31 From the estimated average markup ϕp, we then compute λa = 0.87. This value is considerably higher than Woodford’s (2003) value of 0.048, mainly for four reasons. First, the estimated gross markup in SW (1.61) implies a substantially lower substitution elasticity (ϕp/(ϕp − 1) = 2.64) compared to Woodford’s value (7.88). If we replace Woodford’s value with the one estimated by SW, λa in eq. (23) rises to 0.30. Second, if we replace Woodford’s value of the intertemporal substitution elasticity (6.25) with the value estimated by SW (1.39), λa increases further to 0.59. Third, if we relax the assumption of firm-specific labor (the Yeoman-farmer model of Rotemberg and Woodford, 1997), λa equals 0.80. The remaining small difference to the SW value (0.87) can largely be explained by the slightly higher degree of price stickiness in Woodford’s calibration.

Fourth and finally, real rigidities in the form of the Kimball aggregators for prices and wages play an important role, as they enable the SW model to fit the macroevidence of a low sensitivity of price (wage) inflation to marginal costs (labor wedge) while at the same time being consistent with the microevidence suggesting frequent price (and wage) re-optimization every 3-4 quarters (see e.g. Klenow and Malin, 2010, and Nakamura and Steinsson, 2013). Had the estimated Smets and Wouters (2007) model not included this feature, the price stickiness parameter would have been considerably higher (about 0.9 as in Smets and Wouters, 2003), and the optimal weight on the output gap considerably lower (about 0.05 according to eq. 23). But again, such a high degree of price stickiness is at odds with the microevidence, which is the very reason why the SW model features Kimball aggregators.
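As a quick numerical cross-check of these magnitudes, one can evaluate Woodford’s formula under our normalization (the annualization factor 16, and a κx for Woodford’s calibration back-solved from his reported weight, are our assumptions):

```python
def lambda_a(kappa_x, phi_p=None, theta=None):
    """Output-gap weight relative to a unit weight on annualized inflation.

    theta = phi_p / (phi_p - 1) is the elasticity of substitution;
    16 = 4**2 is the assumed quarterly-to-annualized conversion factor.
    """
    if theta is None:
        theta = phi_p / (phi_p - 1.0)
    return 16.0 * kappa_x / theta

# Simplified SW model: kappa_x = 0.143 with an estimated gross markup of 1.61
print(round(lambda_a(0.143, phi_p=1.61), 2))   # 0.87

# Woodford (2003): theta = 7.88; kappa_x of roughly 0.024 back-solved from 0.048
print(round(lambda_a(0.024, theta=7.88), 3))   # 0.049
```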

It is important to recognize that this analysis is only suggestive as it omits some of the key features—wage stickiness and endogenous capital—in the fully fledged model. The analysis in Section 3 demonstrated that the optimal λa depends on the relative degree of structural frictions in goods and labor markets. As a consequence, the obtained λa above will only partially reflect the true structure of the fully fledged SW model. Even so, the analysis suggests that a large part of the gap between Woodford’s (2003) value and our benchmark finding of λa = 1.042 in the output gap case stems from differences in household preferences (intertemporal substitution elasticity of consumption and Yeoman-Farmer assumption) and the estimated substitution elasticity between intermediate goods.

With these results in mind, we turn to exploring the mechanisms within the context of the fully fledged model. Our approach is to turn off or reduce some of the frictions and shocks featured in the model one at a time to isolate the drivers of the results. The findings are provided in Table 2. The first row restates the baseline results with the optimized weight. The second row presents the optimized weight on the real-activity term when dynamic indexation in price- and wage-setting is shut down, i.e. ιp and ιw are calibrated to zero. All the other parameters of the model are kept unchanged. As can be seen from the table, the calibration without indexation lowers the optimized weight for the output gap to roughly 0.3—about a third of the benchmark value. In the other columns, where real activity is captured by the level and the growth rate of detrended output, the optimized weights are also found to be about a third of the benchmark values.

Table 2:

Perturbations of the Benchmark Model.

Note: “No Indexation” refers to setting ιp = ιw = 0; “No ɛtp (ɛtw) Shocks” refers to setting the variance of the price markup shock (wage markup shock) to zero; “Small ɛtw and ɛtp Shocks” means that the std. of these shocks are set to 1/3 of their baseline values; and “No ɛtw and ɛtp Shocks” refers to setting the variance of both shocks to zero. “Large” means that the optimized value is equal to or greater than 5.

To understand why indexation makes the real-activity term much more important than in a model without indexation, it is instructive to consider the simple New Keynesian model with indexation and sticky prices only, briefly discussed in Section 3.3. As shown by Woodford (2003), this model generates the following approximated welfare-based loss function,

Lt = (πt − ιpπt−1)^2 + λx xt^2,

where ιp is the indexation parameter in the pricing equation. Suppose further, for simplicity, that inflation dynamics in equilibrium can be represented by an AR(1) process πt = ρπt−1 + εt. In this simple setup, the welfare metric can be expressed as

E[Lt] = (ρ − ιp)^2 Var(πt) + λx Var(xt) + t.i.p.,

where the error term εt is absorbed into the terms independent of policy (t.i.p.). In more empirically relevant models like SW, inflation persistence (ρ) is in large part explained by the indexation parameters (ιp and, in our sticky-wage framework, ιw matter as well). Therefore, these two parameter values tend to be similar and the coefficient on the inflation term is accordingly smaller. Hence, in a loss function like ours (eq. 21), where the inflation coefficient is normalized to unity, the coefficient on real activity tends to become relatively larger—as evidenced in Table 1. Intuitively, in economies where prices have a component indexed to their lags, the distortions arising from inflation are not as severe. Consequently, there is less need to stabilize inflation.
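The variance expression for the welfare metric follows from substituting the AR(1) law of motion into the quasi-difference of inflation; a sketch of the step, in the notation above:

```latex
\pi_t - \iota_p \pi_{t-1} = (\rho - \iota_p)\,\pi_{t-1} + \varepsilon_t
\;\;\Longrightarrow\;\;
\mathrm{E}\!\left[(\pi_t - \iota_p \pi_{t-1})^2\right]
  = (\rho - \iota_p)^2 \,\mathrm{Var}(\pi_t) + \sigma_{\varepsilon}^2,
```

with σε² independent of policy (and Var(πt−1) = Var(πt) under stationarity).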

Notably, even when we remove indexation to lagged inflation in price and wage settings, the optimal value of λa still suggests a very large role for targeting economic activity; in fact, the optimal value is still slightly higher than the value implied by the dual mandate.32 Moreover, one can observe from Figure 3 that dropping dynamic indexation is associated with a rather sharp deterioration in the CEV when λa is below 0.2. This finding suggests that a vigorous response to economic activity is indeed important even without indexation. This is a reassuring result. Even though SW showed that excluding indexation to lagged inflation in price and wage setting is associated with a deterioration in the empirical fit (i.e. reduction in marginal likelihood) of the model, there is only weak support for indexation in micro data.

Figure 3:

CEV (in percentage points) as Function of λa for Alternative Calibrations.


Note: The figure plots the CEV (in %) as a function of λa for three different calibrations. The solid line refers to the benchmark calibration. The dotted line refers to the calibration in which ιp = ιw = 0. The dashed line refers to the calibration in which var(ɛtw)=var(ɛtp)=0.

Rows 3–6 in Table 2 examine the role of the inefficient markup shocks in the model. By comparing the CEV results in the third and fourth rows, we see that the wage markup shock contributes the most to the welfare costs of the simple mandate. But the key point is that even when one of these shocks is taken out of the model, the central bank should still respond vigorously to economic activity in order to maximize household welfare. Only when the standard deviations of both shocks are reduced or taken out completely (rows 5 and 6) does λa fall for output and output growth. For the loss function with the model-consistent output gap, the weight λa is large when shocks are reduced (row 5), and is even larger when the standard deviations of both inefficient shocks are set to nil (row 6).

When both shocks are set to nil, any λa > 0.1 produces roughly the same CEV of about 0.016, although λa ≥ 5 generates the lowest welfare loss relative to Ramsey, as can be seen from Figure 3. This finding is supported by our analytical results in Section 3, which established that the weight on the output gap should be very high in a simple mandate like eq. (21) when the distortions in goods and labor markets are of similar magnitude. Even so, the flatness of the CEV as a function of λa in Figure 3 shows that there is only a weak trade-off between inflation and output gap stabilization in the absence of price- and wage-markup shocks. This suggests that the divine coincidence property holds approximately in this case, implying that the weight on the output gap is largely inconsequential. As an alternative to cutting the size of the markup shocks, we reduced the steady-state gross markups from 1.61 (ϕp) and 1.5 (ϕw), respectively, to 1.20, following the evidence in Christiano, Eichenbaum and Evans (2005). Also under this parametrization, we find that a large weight on economic activity is optimal. For instance, the optimized λa for the output gap equals 1.01.

In Figure 4, we depict variance frontiers when varying λa from 0.01 to 5 for alternative calibrations of the model. We also include the implied {Var (πta),Var (ytgap)} combinations under Ramsey policy and the estimated SW policy rule with all shocks (marked by black ‘x’ and ‘+’ marks, respectively) and without the inefficient shocks (the blue ‘x’ and ‘+’ marks). As expected, we find that both the estimated SW rule and the Ramsey policy are outside the variance frontier associated with the simple mandate (solid black line), but the locus of {Var (πta),Var (ytgap)} for the optimized λa is very close to the Ramsey policy. We interpret this finding as providing a strong indication that the simple mandate approximates the Ramsey policy well in terms of the actual paths of output gap and inflation, and not just in terms of CEV as seen from the results for the output gap in Table 1.33,34 The locus of the estimated SW rule is a bit further away from the optimized simple mandate with λa = 1.042, and is associated with a higher (lower) variance of the output gap (price inflation).

Figure 4:

Variance Frontiers for Alternative Calibrations.


Note: The figure plots the variance frontier for several calibrations: benchmark (solid line), var(ɛtp)=0 (dotted line), var(ɛtw)=0 (dashed-dotted line), and var(ɛtw)=var(ɛtp)=0 (dashed line). The ‘o’ mark shows the volatility for the optimized weight and benchmark calibration. The coordinates with an ‘x’ and the ‘+’ mark denote the Ramsey and SW policy rule, respectively. The box in the graph zooms in the case with var(ɛtw)=var(ɛtp)=0.

Turning to the role of the markup shocks, we see that the trade-off between inflation and output gap volatility is reduced notably but still remains sizeable when we set the standard deviation of the wage markup shock to nil (dash-dotted green line), following the baseline model of Justiniano et al. (2013). The reason that the central bank has to accept a higher degree of inflation volatility in order to reduce output gap volatility in this case is that the price markup shock is still active in the model. The price markups, however, are less important for the trade-off, as can be seen from the red-dotted line, which demonstrates a notable trade-off with only wage markup shocks in the model. Only when both the inefficient shocks are excluded is the trade-off relatively small (dashed blue line in Figure 4, shown in more detail in the small inset box).

Since substantial uncertainty remains about the importance of markup shocks over the business cycle, it is important to consider the likely case where at least a small proportion of the observed variation in inflation and wages is in fact driven by inefficient price- and wage-markup shocks. The fifth row in Table 2 reports results in which the standard deviations of both the inefficient shocks have been set to a third of their baseline values. For the wage-markup shock, this alternative calibration can be motivated by the empirical work by Galí, Smets and Wouters (2011), who can distinguish between labor supply and wage markup shocks by including the unemployment rate as an observable when estimating a model similar to the SW model. For the price markup shock, our choice is more arbitrary and follows Justiniano et al. (2013) by assuming that almost 90 percent of the markup shock variances are in fact variations in the inflation target.35

Even in this case, the table shows that the resulting λa is still high for the output gap. The reason is that if all shocks are efficient then a high λa is still optimal (recall Figure 3), and if some shocks are indeed inefficient then a high λa is required. Therefore, a high weight λa is a robust choice if there is uncertainty about the inefficiency of the shocks; a high weight is optimal but not economically important when all shocks are efficient, and optimal and economically meaningful as long as a small proportion of the shocks is indeed inefficient.

This analysis clarifies that our results and those of Justiniano et al. (2013) are not in contradiction. First, note that the analysis is not directly comparable because their no trade-off result refers to a shift from the estimated historical rule to Ramsey policy, while our focus is on different weights in the objective function. Second, and as just explained, as long as there is uncertainty about the inefficiency of the shocks and the presence of a trade-off, a high weight λa is a robust choice. In the next section, we show that this basic result holds up for some key modifications of the analysis.

5 Robustness Analysis

In this section, we explore the robustness of our results along some key dimensions. First, we examine to what extent adding labor market variables, such as hours worked and wage inflation, to the loss function improves welfare. Second, we consider the extent to which the implied interest rate movements for the simple mandates under consideration are reasonable, and whether our results hold up when augmenting the loss function with an interest rate term. Third and finally, we examine the robustness of the high output gap weight when assuming that the gap is measured with considerable errors in real time.36

5.1 Should Labor Market Variables Be Considered?

One of the reasons for the popularity of inflation targeting comes from the results in the New Keynesian literature—importantly Clarida et al. (1999) and Woodford (2003)—that inflation in the general price level is costly to the economy. The old Keynesian literature, however, emphasized the importance of wage inflation.37 Recent influential theoretical papers support that literature by suggesting that wage inflation be added as an additional target variable in the loss function, see e.g. Galí (2011) and our previous analysis of the EHL model in Section 3.1. In the SW model employed in our analysis, both nominal wages and prices are sticky. It is therefore conceivable that wage inflation may be equally or even more important to stabilize than price inflation. In addition to studying nominal wage inflation, it is of interest to examine to what extent other labor market variables like employment or hours worked can substitute for overall economic activity within the model. Hence, we propose to study the following augmented loss function:

Lt = (πta)^2 + λa xt^2 + λw (Δwta − Δwa)^2 + λe et^2,

where Δwta denotes annualized nominal wage inflation (and Δwa its steady-state rate of growth), and et is a measure of activity in the labor market.

In Table 3, we report results for this augmented loss function (27) when xt is given by the output gap and et by the hours worked per capita gap ltgap. The labor market gap, defined as ltgap = lt − ltpot, differs from the output gap because of the presence of capital in the production function. The first row re-states the benchmark results, i.e. with the optimized weight on ytgap in Table 1. The second row adds wage inflation to the loss function. Relative to the unit weight on inflation, the optimized objective function calls for a weight of roughly 3.2 on the output gap term, and a weight of about 1.5 on nominal wage inflation volatility, which is higher than the normalized weight on price inflation volatility. In line with Levin et al. (2005), the level of welfare when adding Δwta is substantially higher (by 32.8 percent, when measured by the decrease in loss) than under the benchmark case.38

Table 3:

Variations of the Loss Function: Gap Variables in (27).

Note: The table reports variations of the simple objective (27). ytgap is used as the measure of xt, and ltgap is used as the measure of et. The numbers in the “Gain” column are computed as 100 × (1 − CEV_alt/0.044), where CEV_alt is the CEV for the alternative loss function and 0.044 is the “Benchmark” objective CEV (row 1). A “*” after a coefficient implies that the value of this coefficient has been imposed.
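The “Gain” column is thus a simple percentage reduction in the CEV relative to the benchmark; a minimal sketch (the CEV values passed in below are illustrative, not taken from the table):

```python
def gain(cev_alt, cev_benchmark=0.044):
    """Percentage reduction in the consumption-equivalent variation (CEV)
    relative to the benchmark mandate, as in the 'Gain' column."""
    return 100.0 * (1.0 - cev_alt / cev_benchmark)

# An alternative mandate that halves the CEV yields a 50 percent gain;
# a CEV above the benchmark would produce a negative gain (a deterioration).
print(gain(0.022))   # 50.0
print(gain(0.044))   # 0.0
```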

In our framework with inefficient cost-push shocks and capital accumulation, the introduction of Δwta in the loss function does not make the presence of ytgap irrelevant, supporting the results we established in Section 3.1 with the EHL model. The third row makes this clear by showing that the welfare loss is very high for a mandate which includes both price and wage inflation but imposes a low weight on the output gap.39 Moreover, we learn from the fourth row in the table that, although Δwta receives a larger coefficient than πta, responding to price inflation is still welfare enhancing; when dropping πta the welfare gain is somewhat lower than in the trivariate loss function. Also, the optimal weight on economic activity remains high.

The fifth row of Table 3 adds the labor market gap as an additional target variable. Unlike wage inflation, the inclusion of the labor market gap by itself does not increase welfare much. Moreover, given that price inflation is the nominal anchor, replacing the output gap with the labor gap results in a welfare deterioration of about 14 percent relative to our benchmark specification, as can be seen from the sixth row. However, when price inflation is also replaced by wage inflation as a target variable, the labor gap performs much better and generates a substantial welfare gain of 63 percent relative to our benchmark specification.

In Figure 5, we plot the CEV as a function of λa for a simple mandate targeting price inflation and the output gap, as well as for a mandate targeting wage inflation and the labor market gap. Interestingly, the figure shows that λa has to exceed 2 for the wage-labor simple mandate to dominate. So although the wage-labor gap mandate eventually dominates the inflation-output gap mandate, a rather large λa is required for this to happen; strict nominal wage inflation targeting is thus very costly for society in terms of welfare. On the other hand, a beneficial aspect of the wage inflation-labor gap mandate is that once λa exceeds this threshold, the CEV stays essentially flat instead of increasing slightly, as is the case for the inflation-output gap mandate.

Figure 5:

CEV (in percentage points) as Function of λa for Alternative Simple Mandates.

Citation: IMF Working Papers 2017, 164; 10.5089/9781484309278.001.A001

Note: The figure plots the CEV (in %) for the simple mandate with price inflation and output gap (solid line) and wage inflation and labor gap (dashed line). The coordinate with an ‘o’ mark shows the CEV for the optimized weight.

We also examine the role of labor market variables when only observable variables are included; hence, we consider levels instead of gap variables. As shown in Table 4, the role played by nominal wage inflation is not as prominent when xt in (27) is represented by the level of output (as deviation from a linear trend) instead of the output gap. The welfare gain relative to the benchmark case is only 5.3 percent when wage inflation is included. Accordingly, welfare is reduced by one percent—the third row—when price inflation is omitted. On the other hand, adding hours worked per capita enhances the welfare of households by nearly 30 percent. Finally, we see from the last row that a mandate with only wage inflation and hours worked performs the best, reducing the welfare cost associated with the simple mandate by nearly 34 percent relative to the benchmark objective.

Table 4:

Variations of the Loss Function: Level Variables in (27).

article image
Note: The table reports variations of the simple objective (27). yt − ȳt is used as the measure for xt, and lt − l̄ is used as the measure of et. The numbers in the “Gain” column are computed as 100(1 − CEV_LFalt/0.2440), where CEV_LFalt is the CEV for the alternative loss function and 0.2440 is the “Benchmark” objective CEV (row 1).

Our conclusion is that, while a standard objective with price inflation and the output gap generates small welfare losses relative to the Ramsey policy (just above 0.04% of the steady-state consumption), it makes sense within the SW model—which features substantial frictions in the labor market—to target wage inflation and a labor market gap instead. Doing so would reduce the welfare costs of the simple mandate even further. Moreover, we have shown that this conclusion is robust even if one considers the level of output and hours worked instead of their deviations from potential.

5.2 Volatility of Interest Rates

In addition to inflation and some measure of resource utilization, simple objectives often include a term involving the volatility of interest rates; see e.g. Rudebusch and Svensson (1999). In practice, this term is often motivated by reference to “aversion to interest-rate variability” and financial stability concerns. From a theoretical perspective, Woodford (2003) derives an extended version of (21) augmented with an interest rate gap term λr(rt^a − r^a)² when allowing for monetary transactions frictions, where rt^a − r^a is the deviation of the annualized nominal policy rate rt^a from the steady-state annualized policy rate r^a.

As an alternative, some researchers (e.g. Rudebusch and Svensson, 1999) and policymakers (e.g. Yellen, 2012) instead consider augmenting the objective function with the variance of the change in the short-term interest rate, λr(Δrt^a)². By allowing for a lag of the interest rate in the loss function, this specification introduces interest rate smoothing, as the reduced-form solution will feature the lagged interest rate in the central bank’s reaction function. Both specifications, however, reduce the volatility of policy rates because the central bank will, ceteris paribus, tend to be less aggressive in the conduct of monetary policy when λr > 0.
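To make the distinction concrete, the following sketch (ours; variable names are assumptions) evaluates the period loss under the two specifications. The level specification penalizes deviations of the rate from its steady state, while the smoothing specification penalizes only changes in the rate:

```python
def loss_level(pi, x, r, pi_ss, r_ss, lam_a, lam_r):
    """Period loss with an interest-rate *level* term:
    (pi - pi_ss)^2 + lam_a * x^2 + lam_r * (r - r_ss)^2."""
    return (pi - pi_ss) ** 2 + lam_a * x ** 2 + lam_r * (r - r_ss) ** 2

def loss_change(pi, x, r, r_lag, pi_ss, lam_a, lam_r):
    """Period loss with an interest-rate *smoothing* term:
    (pi - pi_ss)^2 + lam_a * x^2 + lam_r * (r - r_lag)^2."""
    return (pi - pi_ss) ** 2 + lam_a * x ** 2 + lam_r * (r - r_lag) ** 2

# A constant rate held below its steady state is penalized by the
# level specification only: here the penalty is (4 - 6.25)^2
print(loss_level(2.0, 0.0, 4.0, 2.0, 6.25, 0.5, 1.0))
print(loss_change(2.0, 0.0, 4.0, 4.0, 2.0, 0.5, 1.0))  # -> 0.0 (rate unchanged)
```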

The first row in Table 5 considers the standard Woodford (2003) specification with only xt as an additional variable to inflation as in (21). The second row in the table includes the (rtara)2 term in the loss function and uses Woodford’s (2003) weights for economic activity and the interest rate (0.048 and 0.077, respectively). The third row reports results for Yellen’s (2012) specification of the loss function which includes the (Δrta)2 term in the loss function instead of (rtara)2 and uses the weights (0.25 and 1.00, respectively). Finally, the last two rows present results when the coefficient on xt and the interest rate gap—row 4—and the change in the interest rate gap—row 5—are optimized to maximize the welfare of the households.

Table 5:

Sensitivity Analysis: Minimization of (21) with an interest rate term.

article image
Note: The loss function with the level of the interest rate is specified as (πt^a − π^a)² + λa xt² + λr(rt^a − r^a)², while the loss function with the change in the interest rate is specified as (πt^a − π^a)² + λa xt² + λr(Δrt^a)². See the notes to Table 1 for further explanations.

Turning to the results, we see by comparing the first and second rows in the table that the CEV is not much affected by the introduction of the interest term for the output gap and output. Comparing the third row—the Yellen parameterization—with the Woodford specification in the second row, we see that while welfare improves considerably for all three different xt variables, it is only for output growth that this improvement stems from the interest rate term. For the output gap and output, the improvement is mostly due to the higher λa, which can be confirmed by comparing the dual mandate row in Table 1 with the third row in Table 5.

When allowing for optimal weights (the last two rows in Table 5), we find that the optimized weight on the interest rate term in both cases is driven towards zero for the output gap, implying that the welfare consequences are marginal. Only for output and output growth do we find modest welfare improvements from including either of the two interest rate terms (compared with our benchmark results in Table 1, where CEV equaled 0.244 and 0.302 for output and output growth, respectively). However, in all cases our key finding holds up—some measure of real activity should carry a large weight.

One concern for financial stability is that the nominal interest rate is conventionally the key instrument of monetary policy, so high volatility of interest rates could be problematic for financial markets if such policies were implemented. An additional concern is whether the probability distribution of nominal rates for the mandates under consideration covers the negative range in a nontrivial way. One of the advantages of specifying a simple mandate, rather than a simple interest rate rule, is that the central bank can choose to use a variety of instruments to implement the desired objective. Besides nominal interest rates, such instruments can include forward guidance, reserve requirements, asset purchases, money instruments, and other tools. So, even though the zero lower bound on nominal interest rates per se is less of a concern in our analysis, we still want to examine to what extent our results are robust to limiting the short-term variability of monetary policy. Although the inclusion of rt^a − r^a or Δrt^a does not improve welfare much, these terms offer a simple way to examine the extent to which they mitigate any excessive volatility.40

To do this exercise, we use a standard approach to limit the standard deviation of the nominal interest rate: Rotemberg and Woodford (1998) adopted the rule of thumb that the steady-state nominal rate minus two standard deviations (std) of the rate should be non-negative. Others, like Adolfson et al. (2011), adopted a three-std non-negativity constraint. Since our parameterization of the SW model implies an annualized nominal interest rate of 6.25 percent, the allowable std is 3.125 under Rotemberg and Woodford’s rule of thumb and slightly below 2.1 under the stricter three-std criterion adopted by Adolfson et al. (2011).
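The thresholds follow from simple arithmetic: the largest admissible std is the steady-state rate divided by the required number of standard deviations. A minimal sketch (ours):

```python
def allowable_std(r_ss: float, n_std: int) -> float:
    """Largest std of the nominal rate such that r_ss - n_std * std >= 0."""
    return r_ss / n_std

R_SS = 6.25  # annualized steady-state nominal rate in the SW parameterization

print(allowable_std(R_SS, 2))  # Rotemberg-Woodford two-std rule -> 3.125
print(allowable_std(R_SS, 3))  # Adolfson et al. three-std rule -> about 2.083
```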

Table 6 reports the result of our exercise. For brevity of exposition we focus on the output gap only, but the results are very similar for output level and output growth. As seen from the first three rows in the table, the objective functions in Table 1 that involve only inflation and the output gap are indeed associated with high interest rate volatility. The std’s are all around 9 percentage points—a few times bigger than our thresholds. Hence, these loss functions are contingent on unrealistically large movements in the short-term policy rate. Turning to the fourth and fifth rows, which report results for the Woodford and Yellen loss functions augmented with interest rate terms, we see that the std’s for the policy rate shrink by almost a factor of ten; these specifications are hence clearly consistent with reasonable movements in the stance of monetary policy.

Table 6:

Interest Rate Volatility for Output Gap in Loss Function.

article image
Note: std(rta) denotes the standard deviation for the annualized nominal interest rate. ytgap is used as the measure of xt in the loss function. The * in the last two rows denote that these values have been fixed, and are hence not optimized.

The last two rows in the table report results when we re-optimize the weight on the output gap (λa), given a weight of 0.077 for (rtara)2 (next-to-last row) and 1 for (Δrta)2 (last row) in the loss function. As seen from the last column, these policies generate considerably lower interest rate volatility relative to the optimized loss function which excludes any interest rate terms, and the obtained std’s are in line with even the three-std threshold applied by Adolfson et al. (2011). To compensate for the interest rate terms, the optimization generates a slightly higher λa compared with the simple loss function with the output gap only. Overall, the lower flexibility to adjust policy rates is associated with lower welfare; the CEV roughly doubles in both cases. But it is notable that the CEV does not increase to the same extent as std(rta) is reduced, reflecting that the central bank—which is assumed to operate under commitment—can still influence the long-term interest rate effectively by smaller but persistent movements of the short-term policy rate. Therefore, we can conclude that our benchmark result of a large weight on the real activity term holds for a plausible degree of interest rate volatility.

It should be noted that the favourable performance of the nominal wage growth-labor gap simple mandate in Table 3 is also contingent on relatively high interest rate volatility. However, when we augment the wage-labor loss function with an interest rate term, we find that the CEV is about half that of the inflation-output gap based objective imposing the same interest rate volatility. Thus, the labor based mandate still outperforms the inflation-output gap mandate, conditional on much less volatile policy rates.

5.3 Robustness to measurement errors

A common counterargument to assigning a prominent role to the output gap is that it is measured with considerable error in real time (see e.g. McCallum, 2001). Indeed, the output gap is given by the difference between the actual level of output and its potential counterpart, and both are measured with errors in real time. We therefore examine the robustness of our main findings to the presence of significant measurement errors.41

To that end, we consider a case where the central bank has available imperfect measures of output and potential output in real time, so that it observes
yt^gap,obs = yt|t − yt|t^pot,   (28)


where the notation t|t reflects the real time dimension in the measurement of actual and potential output. Following Orphanides and Williams (2002), we assume that the difference between the observed ytgap,obs and the true output gap ytgap (see eq. 22) evolves according to an AR(1) process
yt^gap,obs − yt^gap = ρ (yt−1^gap,obs − yt−1^gap) + εt,   (29)


where 0 < ρ < 1 and εt ~ N(0, σε) is an exogenous error term. We then calculate the optimal weight λa in the loss function (21), but now considering that the central bank responds to the observed output gap yt^gap,obs rather than to yt^gap.

We consider three alternative calibrations for the parameters ρ and σε. First, we set ρ = 0.95 and σε = 0.36, consistently with the estimates obtained by Orphanides and Williams (2002) for the period 1969Q1-2002Q2. Second, we consider the values obtained in Rudebusch (2001) using official real-time estimates of the output gap, namely ρ = 0.75 and σε = 0.84. Finally, we re-estimate eq. (29) for the Smets-Wouters sample period (1965-2004) using real-time data from the Philadelphia Fed to incorporate revisions in data vintages that may lead to an additional source of measurement errors. Specifically, we compute a series for yt^gap,obs using the last HP-filtered observation in each vintage of the GDP releases (for the first vintage covering period t, actual output in period t is our estimate of yt|t and the HP-trend value for this vintage in period t is the estimate of yt|t^pot), while yt^gap is simply measured as the HP-filtered GDP series available today. The resulting estimates are ρ = 0.92 and σε = 0.63.

Clearly, all these calibrations capture well the errors associated with measuring output in real time, but may underestimate the errors in calculating the potential level of output because true potential output may not be well approximated by a one-sided HP filter.42 Nevertheless, our crude approach of measuring yt^gap yields a higher unconditional variance of the measurement error, σε²/(1 − ρ²) = 2.58, compared with Rudebusch’s 1.61. As such, our values could be viewed as conservative estimates of the size of measurement errors.43
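The unconditional variance of the AR(1) measurement error in eq. (29) is σε²/(1 − ρ²); a quick check (ours) reproduces the comparison in the text:

```python
def ar1_unconditional_variance(rho: float, sigma_eps: float) -> float:
    """Unconditional variance of z_t = rho * z_{t-1} + eps_t, eps_t ~ N(0, sigma_eps)."""
    return sigma_eps ** 2 / (1.0 - rho ** 2)

# Our HP-filter based estimates vs. Rudebusch's (2001) real-time estimates
print(round(ar1_unconditional_variance(0.92, 0.63), 2))  # -> 2.58
print(round(ar1_unconditional_variance(0.75, 0.84), 2))  # -> 1.61
```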

Results are summarized in Table 7. For all the calibrations considered, the optimal weight λa is large, and always remains above 0.9. Interestingly, Table 7 also shows that the CEV is still lower when the gap is measured with errors than when either detrended output or output growth replaces the gap as a target variable in the objective. In a “worst case” scenario, CEV equals about 0.21 (Rudebusch’s estimates). For output as deviation from trend and output growth, Table 1 shows that CEV equals 0.24 and 0.30, respectively. Consequently, our results suggest that attaching a high weight to the observed output gap, even though it is measured with significant errors, enhances welfare, and could be a better alternative than targeting more directly observable measures of economic activity.

Table 7:

Results when the output gap is measured with errors.

article image
Note: The table reports optimized weights on the output gap in the loss function (21) under alternative assumptions about the influence of measurement errors. The first row assumes that the output gap is measured without errors, the second uses the Orphanides and Williams (2002) calibration with ρ = 0.95 and σε = 0.36 in eq. (29), the third uses Rudebusch (2001) estimates ρ = 0.75 and σε = 0.84, and the fourth uses our naive approach with HP-filtered data which gives ρ = 0.92 and σε = 0.63.

6 Conclusions

There appears to be broad consensus among academics that central banks should primarily focus on price stability and devote only modest effort to stabilizing measures of real economic activity. Many influential studies in the monetary policy literature show that such behavior would deliver the best possible policy from a social welfare perspective. Given this, it is not surprising that essentially all instrument-independent central banks have been asked to focus on price stability with little or no role for stabilizing some measure of resource utilization; the outlier is the U.S. Federal Reserve, which has a strong focus on economic activity through its dual mandate. The question is then: Is the Fed’s dual mandate redundant or even welfare deteriorating?

This paper examined this question within the context of an estimated medium-scale model of the US economy, and showed that the prevailing consensus may not be right. Looking at measures of economic activity seems to be more important than previously recognized in academia and in policy circles. And although our results are based on a model estimated for the U.S. economy, they are relevant to all economies affected by non-trivial real rigidities and inefficient shocks, and thus displaying a relevant trade-off between stabilizing inflation and economic activity. For instance, the similarities in parameter estimates of macromodels of the euro area (see e.g. Adolfson et al. 2005, and Smets and Wouters, 2003) and the U.S. suggest that our results should be relevant for Europe as well.

In practice, it is of course difficult to assess the importance of real rigidities and the role inefficient shocks may play in magnifying policy trade-offs. But that argument does not invalidate our main conclusion. A central bank that assigns a high weight to measures of economic activity would deliver good economic outcomes even in the absence of relevant policy trade-offs.44

Furthermore, while a standard objective with equal weights on price inflation and the output gap generates small welfare losses relative to the Ramsey policy, we have shown that it makes sense within the SW model—which features substantial frictions in the labor market—to target nominal wage inflation and a labor market gap instead. Still, because the SW model does not incorporate several realistic frictions in the labor market—such as imperfect risk sharing due to unemployment risk or search frictions—it would be interesting to extend the analysis to models that are more realistic along those dimensions, such as the models by Gali, Smets and Wouters (2011) and Ravenna and Walsh (2012a,b), among others. It is conceivable that the optimal weight on economic activity and labor variables would be even higher if we had considered additional frictions in labor markets. Even so, we acknowledge the political difficulties of targeting certain labor market variables (like the rate of increase in nominal wages), which in practice likely means that the most important aspect of these results is that we find a robust and important role for economic activity in the central bank’s objective (be it output or hours worked) even without additional frictions in labor markets.

During the recent financial crisis many central banks, including the Federal Reserve and the Bank of England, cut policy rates aggressively to prevent further declines in resource utilization although the fall in inflation and inflation expectations was modest. By traditional metrics, such as the Taylor (1993) rule, these aggressive and persistent cuts may be interpreted as a shift of focus from price stability to resource utilization by central banks during and in the aftermath of the recession. Our results make the case for a stronger response to measures of economic activity even during normal times. In our model, the policy trade-offs mainly arise from imperfections in goods and labor markets. Considering an economy where inefficiencies are primarily associated with frictions in financial markets would be an interesting extension to address some of the recent debates. Recent work by Laureys, Meeks, and Wanengkirtyo (2016) suggests that including financial variables in the central bank’s loss function improves welfare, but that the weight on financial variables is low and the weight on the output gap remains very high. This is supportive of the central tenet of our paper, but further work in this important area is needed before one can draw firmer conclusions.

Using a calibrated open-economy model, Benigno and Benigno (2008) studied how international monetary cooperative allocations could be implemented through inflation targeting aimed at minimizing a quadratic loss function consisting of only domestic variables such as GDP, inflation, and the output gap. It would thus be interesting to extend our investigation to an open economy framework with an estimated two-country model of, for example, the United States and the euro area. Another interesting extension would be to examine our results in models with additional labor market dynamics and frictions.

Finally, our analysis postulated that central banks operate in an almost ideal situation, with the exception of not being able to measure the output gap accurately in real time. In this respect our approach could be extended to study the design of simple policy objectives in even more realistic situations, in which the central bank faces uncertainty about the structure of the underlying economy or cannot implement their desired policies because of implementation lags or credibility problems.

Appendix A The Linear Quadratic Approximation

In this appendix, we provide the details of the linear quadratic approximation that was used in our paper. We show that our algorithm can handle the case of a distorted steady state and generates the correct linear approximation. In addition, we provide conditions under which our legitimate linear quadratic approximation approach and a simpler illegitimate approach provide the same results.

A.1 A General Non-Linear Problem

We first specify the general non-linear problem in order to establish the conditions under which the linear-quadratic approximation is an accurate approximation.

Consider the following optimization problem:
max_y U(y)   subject to   G(y) = 0,   (A.1)


where U is the non-linear objective function, G is the vector of m non-linear constraints, and y is the vector of variables where for convenience we consider controls and states jointly. A dynamic problem can be accommodated in this notation by appropriately defining U, G, and y.A.1

Taking first order conditions one obtains:
U_y(y) + γ′ G_y(y) = 0,   G(y) = 0,   (A.2)


where γ is a vector of Lagrange multipliers. After linearizing the first order conditions and the constraints, one obtains:
[Ū_yy + Σ_m γ̄_m Ḡ_yy^m] ŷ + Ḡ_y′ γ̂ = 0,   Ḡ_y ŷ = 0,   (A.3)-(A.4)

with ŷ ≡ y − ȳ and γ̂ ≡ γ − γ̄,


where variables and functions with bars are evaluated at the steady state. The system of equations determines the solution of the non-linear system where the laws of motion are approximated to first order. In a dynamic context, standard techniques can be used to compute the solution to this system of equations, for instance the method outlined by Anderson and Moore (1985).

A.2 Linear-Quadratic Approximation: A General Approach

A second-order approximation to utility yields:
U ≈ Ū + Ū_y ŷ + ½ ŷ′ Ū_yy ŷ.   (A.5)


A second-order approximation to a constraint m yields
0 = G^m(y) ≈ Ḡ_y^m ŷ + ½ ŷ′ Ḡ_yy^m ŷ.   (A.6)


One can sum equation (A.5) and equations (A.6) for each m with weights 1 and γ̄_m, respectively. This operation is valid since the constraints are equal to zero. In this case we obtain:
U ≈ Ū + (Ū_y + Σ_m γ̄_m Ḡ_y^m) ŷ + ½ ŷ′ (Ū_yy + Σ_m γ̄_m Ḡ_yy^m) ŷ.   (A.7)


Noting that Ū_y + Σ_m γ̄_m Ḡ_y^m = Ū_y + γ̄′ Ḡ_y = 0, where the last equality comes from using equation (A.2) at the steady state, one can simplify equation (A.7) further:
U ≈ Ū + ½ ŷ′ (Ū_yy + Σ_m γ̄_m Ḡ_yy^m) ŷ.   (A.8)


Now use the transformed objective function (A.8) in the maximization problem:
max_ŷ ½ ŷ′ (Ū_yy + Σ_m γ̄_m Ḡ_yy^m) ŷ   subject to   Ḡ_y ŷ = 0.   (A.9)


Taking first order conditions one obtains:
[Ū_yy + Σ_m γ̄_m Ḡ_yy^m] ŷ + Ḡ_y′ γ̂ = 0.   (A.10)


Since equation (A.10) is equal to equation (A.3), which is valid in any model, we can conclude that our approach is valid in general, even in models with a distorted steady state.A.2 This is the approach we employ in the paper since it also delivers correct results with a distorted steady state. The reader is referred to Benigno and Woodford (2012) for additional details.

A.3 Linear-Quadratic Approximation: A Simple Approach for a Non-Distorted Steady State

There is a simpler approach but it only delivers correct results if the steady state is non-distorted. The problem of maximizing the second-order approximation to utility in equation (A.5) subject to a first-order approximation to the constraints is
max_ŷ Ū_y ŷ + ½ ŷ′ Ū_yy ŷ   subject to   Ḡ_y ŷ = 0.   (A.11)


Taking first order conditions one obtains:
Ū_y′ + Ū_yy ŷ + Ḡ_y′ γ̂ = 0.   (A.12)


Equation (A.3) is directly comparable with equation (A.12). As is easily seen, this LQ approach does not usually give the correct solution.A.3 Benigno and Woodford (2012) referred to this alternative linear-quadratic approximation as a “naive” LQ approximation.
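The difference between the two approaches can be seen in a small numerical example of our own (not from the paper). Take U(y) = ln y1 − y2²/2 with the shock-dependent constraint G(y; s) = y1 − s·y2² = 0; the exact optimum is y1 = 2s, y2 = √2, so the true response to a unit shock in s is (2, 0). The correct LQ approximation, which keeps the γ̄-weighted second-order terms of the constraint, reproduces this response; the naive approximation does not:

```python
import numpy as np

# Example (ours): U(y) = ln(y1) - y2^2/2, constraint G(y; s) = y1 - s*y2^2 = 0.
# Exact optimum: y1 = 2s, y2 = sqrt(2), so dy/ds = (2, 0) at s = 1.

s0 = 1.0
y2 = np.sqrt(2.0); y1 = 2.0            # steady state at s0
gam = -1.0 / y1                         # steady-state Lagrange multiplier

U_y  = np.array([1.0 / y1, -y2])        # gradient of the objective
U_yy = np.diag([-1.0 / y1**2, -1.0])    # Hessian of the objective
G_y  = np.array([1.0, -2.0 * s0 * y2])  # dG/dy
G_s  = -y2**2                           # dG/ds
G_yy = np.diag([0.0, -2.0 * s0])        # d2G/dy2
G_ys = np.array([0.0, -2.0 * y2])       # d2G/dyds

ds = 1.0  # response per unit shock

# Correct LQ (Benigno-Woodford): weight U_yy + gam*G_yy, cross term gam*G_ys*ds
A = U_yy + gam * G_yy
kkt = np.block([[A, G_y[:, None]], [G_y[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([-gam * G_ys * ds, [-G_s * ds]])
y_correct = np.linalg.solve(kkt, rhs)[:2]

# Naive LQ: keeps the linear term U_y, drops gam*G_yy and gam*G_ys
kkt_n = np.block([[U_yy, G_y[:, None]], [G_y[None, :], np.zeros((1, 1))]])
rhs_n = np.concatenate([-U_y, [-G_s * ds]])
y_naive = np.linalg.solve(kkt_n, rhs_n)[:2]

print(y_correct)  # matches the exact response (2, 0)
print(y_naive)    # spurious response, roughly (0.67, -0.47)
```

The naive system errs precisely because the curvature term γ̄Ḡ_yy and the cross term with the shock are dropped while the linear term Ū_y is kept.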

In special circumstances, the direct approach leading to equation (A.12) yields the correct solution. This is the case when the economy at steady state is at the unconstrained optimum. One also needs to substitute out variables so that market clearing and feasibility conditions are not present in G(y). That means that:
Ū_y = 0,   (A.13)


and hence according to equation A.2:
γ̄ = 0.   (A.14)


In this case, equations (A.12) and (A.10) coincide, and are given by:
Ū_yy ŷ + Ḡ_y′ γ̂ = 0.   (A.15)


Note that it is not required that the economy is always at the unconstrained optimum. It suffices that this is the case at the steady state.A.4 This approach is used for instance in Levine, McAdam and Pearlman (2008).

In our case, the difference between output with the distorted and non-distorted steady state is 6%. Woodford (2003) shows that if distortions are small then the optimal response to economic shocks does not change. Still, we employ the approach that can handle the distorted steady state since this is the empirically more realistic benchmark.

Appendix B Additional Analytical Results

B.1 FOCs in Canonical Sticky Price and Wage Model

Minimizing the loss function (12), subject to (9)-(11) one obtains the first-order conditions


where ς1,t, ς2,t, ς3,t are Lagrange multipliers. In the particular case with κp = κω ≡ κ and λω^opt = ϑp/ϑω, combining eqs. (B.16)-(B.18) gives the targeting rule


which coincides with eq. (15) in the main text.

B.2 Inefficient Cost-Push Shocks

We discuss here the impact of cost-push shocks. In order to do this, we consider a simplified version of the model with perfect competition in the labor market. But the results we present below generalize to the case with sticky wages and exogenous wage markup shocks.

In the standard New Keynesian model with sticky prices only, the Ramsey policy is
min E0 Σ_{t=0}^∞ β^t [πt² + λ^opt (yt^gap)²]

s.t. πt = β Et πt+1 + κ yt^gap + ut,


where ut in the second equation represents a cost-push shock, i.e. inefficient exogenous variations in the markup of the firms in the monopolistic goods sector. The laws of motion of the economy are given by the NK Phillips curve and the optimality condition
πt = −(λ^opt/κ) (yt^gap − yt−1^gap)


for t ≥ 1, and π0 = −(λ^opt/κ) y0^gap for t = 0. With cost-push shocks, the solution of the Ramsey system is:
yt^gap = δ yt−1^gap − (κδ/λ^opt) Σ_{j=0}^∞ (βδ)^j Et ut+j,


where δ ≡ (1 − √(1 − 4βa^−2))/(2a^−1β) and a ≡ (λ^opt(1 + β) + κ²)/λ^opt. For a central bank with a mandate in which the weight on the output gap is λ, the laws of motion take the same functional form but with λ^opt substituted by λ. Hence, it is easy to see that if the central bank’s λ differs from λ^opt, the solution for the simple mandate does not mimic that under the optimal policy. An important implication is that complete stabilization of the output gap is generally non-optimal when cost-push shocks are present.
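As a sanity check on these definitions, δ is the stable root of the characteristic equation βδ² − aδ + 1 = 0 implied by the Ramsey difference equation; a short script (parameter values are ours, for illustration only) verifies this and that 0 < δ < 1:

```python
import math

def stable_root(lam_opt: float, kappa: float, beta: float) -> float:
    """Stable root delta = (1 - sqrt(1 - 4*beta*a**-2)) / (2*beta/a),
    with a = (lam_opt*(1 + beta) + kappa**2) / lam_opt."""
    a = (lam_opt * (1.0 + beta) + kappa ** 2) / lam_opt
    return (1.0 - math.sqrt(1.0 - 4.0 * beta / a ** 2)) / (2.0 * beta / a)

# Illustrative parameters (not the paper's estimates)
delta = stable_root(lam_opt=0.25, kappa=0.1, beta=0.99)
a = (0.25 * (1.0 + 0.99) + 0.1 ** 2) / 0.25

print(0.0 < delta < 1.0)                                 # -> True
print(abs(0.99 * delta ** 2 - a * delta + 1.0) < 1e-9)   # -> True
```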

In this model, Blanchard and Galí’s (2008) divine coincidence result only holds when cost-push shocks are not present, that is, when σ(ut) = 0. Without cost-push shocks, the solution is given by πt = yt^gap = 0 for all t and for any weight λ ≥ 0. Thus, the simple mandate mimics the optimal policy for any choice of λ. The same result holds in a model with sticky wages and flexible prices. This result clarifies why in Figure 3 welfare as a function of λ ≥ 0 is essentially flat without inefficient shocks (the blue line) and why there is curvature under the benchmark calibration.

Appendix C The Smets and Wouters (2007) Model

Below, we describe the firms’ and households’ problem in the model, and state the market clearing conditions.C.5

C.1 Firms and Price Setting

Final Goods Production The single final output good Yt is produced using a continuum of differentiated intermediate goods Yt(f). Following Kimball (1995), the technology for transforming these intermediate goods into the final output good is


Following Dotsey and King (2005) and Levin, López-Salido and Yun (2007) we assume that Gy (.) is given by a strictly concave and increasing function; its particular parameterization follows SW:


where ϕp ≥ 1 denotes the gross markup of the intermediate firms. The parameter ϵp governs the degree of curvature of the intermediate firm’s demand curve. When ϵp = 0, the demand curve exhibits constant elasticity as with the standard Dixit-Stiglitz aggregator. When ϵp is positive—as in SW—this introduces more strategic complementarity in price setting, which causes intermediate firms to adjust prices less in response to a given change in marginal cost.

Firms that produce the final output good Yt are perfectly competitive in both the product and factor markets, and take as given the price Pt (f) of each intermediate good Yt(f). They sell units of the final output good at a price Pt, and hence solve the following problem:


subject to the constraint (C.25).

Intermediate Goods Production A continuum of intermediate goods Yt(f) for f ∈ [0,1] is produced by monopolistically competitive firms, each of which utilizes capital services Kt(f) and a labor index Lt(f) (defined below) to produce its respective output good. The form of the production function is Cobb-Douglas:


where γ^t represents the labour-augmenting deterministic growth trend in the economy, Φ denotes the fixed cost (which is related to the gross markup ϕp so that profits are zero in the steady state), and εt^a is total factor productivity, which follows the process


Firms face perfectly competitive factor markets for renting capital at price RKt and hiring labor at a price given by the aggregate wage index Wt (defined below). As firms can costlessly adjust either factor of production, the standard static first-order conditions for cost minimization imply that all firms have identical marginal cost per unit of output.

The prices of the intermediate goods are determined by Calvo-Yun (1996) style staggered nominal contracts. The probability 1 − ξp that any firm f receives a signal to re-optimize its price Pt(f) is assumed to be independent of the time that it last reset its price. If a firm is not allowed to optimize its price, it adjusts its price by a weighted combination of the lagged and steady-state rate of inflation, i.e., Pt(f) = (1+πt−1)ιp (1+π)1−ιp Pt−1(f) where 0 ≤ ιp ≤ 1 and πt−1 denotes net inflation in period t − 1, and π the steady-state net inflation rate. A positive value of ιp introduces structural inertia into the inflation process. All told, this leads to the following optimization problem for the intermediate firms


where P˜t(f) is the newly set price. Notice that with our assumptions all firms that re-optimize their prices actually set the same price.
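The indexation rule for firms that do not re-optimize can be sketched as follows (an illustration of ours, with assumed variable names):

```python
def indexed_price(p_prev: float, pi_lag: float, pi_ss: float, iota_p: float) -> float:
    """Price of a non-reoptimizing firm:
    P_t = (1 + pi_{t-1})**iota_p * (1 + pi_ss)**(1 - iota_p) * P_{t-1}."""
    return (1.0 + pi_lag) ** iota_p * (1.0 + pi_ss) ** (1.0 - iota_p) * p_prev

# With iota_p = 0, prices are indexed to steady-state inflation only
print(indexed_price(100.0, 0.05, 0.02, 0.0))  # -> 102.0
# With iota_p = 1, they are fully indexed to lagged inflation
print(indexed_price(100.0, 0.05, 0.02, 1.0))  # -> 105.0
```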

It would be ideal if the markup in (C.26) could be made stochastic and the model still written in recursive form. However, such a representation is not available, and we instead directly introduce a shock εt^p in the first-order condition to the problem in (C.30). Following SW, we assume the shock is given by an exogenous ARMA(1,1) process:


When this shock is introduced in the non-linear model, we put a scaling factor on it so that it enters exactly the same way in a log-linearized representation of the model as the price markup shock does in the SW model.C.6

C.2 Households and Wage Setting

We assume a continuum of monopolistically competitive households (indexed on the unit interval), each of which supplies a differentiated labor service to the production sector; that is, goods-producing firms regard each household’s labor services Lt(h), h ∈ [0,1], as imperfect substitutes for the labor services of other households. It is convenient to assume that a representative labor aggregator combines households’ labor hours in the same proportions as firms would choose. Thus, the aggregator’s demand for each household’s labor is equal to the sum of firms’ demands. The aggregated labor index Lt has the Kimball (1995) form:


where the function GL (.) has the same functional form as (C.26), but is characterized by the corresponding parameters w (governing convexity of labor demand by the aggregator) and ϕw (gross wage markup). The aggregator minimizes the cost of producing a given amount of the aggregate labor index Lt, taking each household’s wage rate Wt (h) as given, and then sells units of the labor index to the intermediate goods sector at unit cost Wt, which can naturally be interpreted as the aggregate wage rate.

The utility function of a typical member of household h is


where the discount factor β satisfies 0 < β < 1. The period utility function depends on household h’s current consumption Ct(h), as well as on lagged aggregate per capita consumption, which allows for external habit persistence governed by a habit parameter between zero and one. The period utility function also depends inversely on hours worked Lt(h).

Household h’s budget constraint in period t states that its expenditure on goods and net purchases of financial assets must equal its disposable income:


Thus, the household purchases part of the final output good (at a price of Pt), which it chooses either to consume Ct (h) or invest It (h) in physical capital. Following Christiano, Eichenbaum, and Evans (2005), investment augments the household’s (end-of-period) physical capital stock Kt+1p(h) according to


The extent to which investment by each household h turns into physical capital is assumed to depend on an exogenous shock ɛti and on how rapidly the household changes its rate of investment, according to the function S(It(h)/It−1(h)), which we specify as


Notice that this function satisfies S (γ) = 0, S′ (γ) = 0 and S″ (γ) = φ. The stationary investment-specific shock ɛti follows


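The stated curvature properties of S, namely S(γ) = 0, S′(γ) = 0, and S″(γ) = φ, can be checked numerically for a hypothetical quadratic parameterization S(x) = (φ/2)(x − γ)²; the parameter values below are purely illustrative and are not the estimates used in the paper:

```python
gamma = 1.004  # gross trend growth rate (illustrative value)
phi = 5.74     # investment adjustment cost curvature (illustrative value)

def S(x):
    """Hypothetical quadratic adjustment cost: S(x) = (phi/2) * (x - gamma)^2."""
    return 0.5 * phi * (x - gamma) ** 2

def d1(f, x, h=1e-5):
    """Central first difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x, h=1e-4):
    """Central second difference."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

assert abs(S(gamma)) < 1e-12           # S(gamma)   = 0
assert abs(d1(S, gamma)) < 1e-8        # S'(gamma)  = 0
assert abs(d2(S, gamma) - phi) < 1e-4  # S''(gamma) = phi
```

Any parameterization with these three properties yields the same first-order dynamics after log-linearization, which is why only the curvature φ matters for the estimated model.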
In addition to accumulating physical capital, households may augment their financial assets through increasing their government nominal bond holdings (Bt+1), from which they earn an interest rate of Rt. The return on these bonds is also subject to a risk-shock, ɛtb, which follows


Agents can engage in frictionless trading of a complete set of contingent claims to diversify away idiosyncratic risk. The term ∫s ξt,t+1 BD,t+1(h) − BD,t(h) represents net purchases of these state-contingent domestic bonds, with ξt,t+1 denoting the state-dependent price and BD,t+1(h) the quantity of such claims purchased at time t.

On the income side, each member of household h earns after-tax labor income Wt (h) Lt (h), after-tax capital rental income of RtkZt(h)Ktp(h), and pays a utilization cost of the physical capital equal to a(Zt(h)Ktp(h)) where Zt (h) is the capital utilization rate, so that capital services provided by household h, Kt (h), equals Zt(h)Ktp(h). The capital utilization adjustment function a (Zt (h)) is assumed to be given by


where rk is the steady-state real rental rate of capital (R̄K/P̄). Notice that the adjustment function satisfies a(1) = 0, a′(1) = rk, and a″(1) = rk z̃1. Following SW, we want a″(1) = z1 = ψ/(1 − ψ) > 0, where ψ ∈ [0,1) and a higher value of ψ implies a higher cost of changing the utilization rate. Our parameterization of the adjustment cost function then implies that we need to set z̃1 = z1/rk. Finally, each member also receives an aliquot share Γt(h) of the profits of all firms, and pays a lump-sum tax of Tt(h) (regarded as taxes net of any transfers).
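As a numerical sanity check of these relations, a hypothetical quadratic parameterization a(Z) = rk(Z − 1) + (rk z̃1/2)(Z − 1)² reproduces a(1) = 0, a′(1) = rk, and a″(1) = rk z̃1 = z1; the values of ψ and rk below are illustrative only, not taken from the estimation:

```python
psi = 0.54   # utilization cost parameter, psi in [0, 1) (illustrative value)
rk = 0.035   # steady-state real rental rate of capital (illustrative value)

z1 = psi / (1.0 - psi)  # target curvature a''(1) = z1 = psi / (1 - psi)
z1_tilde = z1 / rk      # scaling implied by the parameterization in the text

def a(Z):
    """Hypothetical quadratic utilization cost with the stated properties."""
    return rk * (Z - 1.0) + 0.5 * rk * z1_tilde * (Z - 1.0) ** 2

def d1(f, x, h=1e-5):
    """Central first difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x, h=1e-4):
    """Central second difference."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

assert abs(a(1.0)) < 1e-12          # a(1)   = 0
assert abs(d1(a, 1.0) - rk) < 1e-8  # a'(1)  = rk
assert abs(d2(a, 1.0) - z1) < 1e-4  # a''(1) = rk * z1_tilde = z1
```

The scaling z̃1 = z1/rk is what allows the single parameter ψ to control the curvature a″(1) independently of the level of the rental rate.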

In every period t, each member of household h maximizes the utility function (C.33) with respect to its consumption, investment, (end-of-period) physical capital stock, capital utilization rate, bond holdings, and holdings of contingent claims, subject to its labor demand function, budget constraint (C.34), and transition equation for capital (C.35).

Households also set nominal wages in Calvo-style staggered contracts that are generally similar to the price contracts described previously. Thus, the probability that a household receives a signal to re-optimize its wage contract in a given period is denoted by 1 − ξw. In addition, SW specify the following dynamic indexation scheme for the adjustment of the wages of those households that do not receive a signal to re-optimize: Wt(h) = γ (1 + πt−1)ιw (1 + π)1−ιw Wt−1(h). All told, this leads to the following optimization problem for the households:


where W˜t(h) is the newly set wage; notice that with our assumptions all households that reoptimize their wages will actually set the same wage.

Following the same approach as with the intermediate-goods firms, we introduce a shock ɛtw in the resulting first-order condition. This shock, following SW, is assumed to be given by an exogenous ARMA (1,1) process


As discussed previously, we use a scaling factor for this shock so that it enters in exactly the same way as the wage markup shock in SW in the log-linearized representation of the model.

C.3 Market Clearing Conditions

Government purchases Gt are exogenous, and the process for government spending relative to trend output, i.e. gt = Gt/ (γtY), is given by the following exogenous AR(1) process:


Government purchases have no effect on the marginal utility of private consumption, nor do they serve as an input into goods production. Moreover, the government is assumed to balance its budget through lump-sum taxes (which are irrelevant since Ricardian equivalence holds in the model).

Total output of the final goods sector is used as follows:


where a(Zt)K¯t is the capital utilization adjustment cost.
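Collecting the uses of final output named in the surrounding text, the aggregate resource constraint can be sketched as follows (the notation follows the discussion above; the exact expression in the original may carry additional terms):

```latex
Y_t = C_t + I_t + G_t + a(Z_t)\,\bar{K}_t
```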

Finally, one can derive an aggregate production constraint, which depends on aggregate technology, capital, labor, fixed costs, as well as the price and wage dispersion terms.C.7

C.4 Model Parameterization

When solving the model, we adopt the parameter estimates (posterior mode) reported in Tables 1.A and 1.B of SW, and we use the same values for the calibrated parameters. Table A.1 lists the relevant values.

Table A.1:

Parameter Values in Smets and Wouters (2007).

Note: SW estimates ρr = 0.12 and σr = 0.24, but in our optimal policy exercises these parameters are not present.

There are two issues to notice with regard to the parameters in Table A.1. First, we adapt and re-scale the processes of the price and wage markup shocks so that our log-linearized model matches the original SW model exactly. Second, we set the monetary policy shock parameters to zero, since we restrict our analysis to optimal policy.

Appendix D Speed Limit Policies & Price- and Wage-Level Targeting

In this appendix, we examine the performance of the speed limit policies (SLP henceforth) advocated by Walsh (2003), as well as price- and wage-level targeting.

We start with an analysis of SLP. Walsh’s formulation of SLP considered actual growth relative to potential (i.e. output gap growth), but we also report results for actual growth relative to its steady state to understand how contingent the results are on measuring the change in potential accurately. Moreover, since the results in the previous subsection suggested that simple mandates based on the labor market performed very well, we also study the performance of SLP for a labor market based simple mandate.

We report results for two parameterizations of the SLP objective in Table D.1. In the first row, we use the benchmark weight derived in Woodford (2003). In the second row, we adopt a weight that is optimized to maximize household welfare. Interestingly, we see that when replacing the level of output growth with the growth rate of the output gap (Δytgap), welfare is increased substantially, conditional on placing a sufficiently large coefficient on this variable. However, by comparing these results with those for ytgap in Table 1, we find it is still better to target the level of the output gap.

Turning to the SLP objectives based on nominal wage inflation and hours, we see that they perform worse than the standard inflation-output objectives unless the weight on the labor gap is sufficiently large. As is the case for output, the growth rate of the labor gap is preferable to the growth rate of labor itself. But by comparing these results with our findings in Table 3, we see that targeting the level of the labor gap remains highly preferable in terms of maximizing household welfare.

Table D.1:

Sensitivity Analysis: Merits of Speed Limit Policies.

Note: The loss function under price inflation is specified as in (21), while the loss function with the annualized nominal wage inflation rate is specified as (Δwta − Δwa)2 + λa xt2, where Δwa denotes the annualized steady-state wage inflation rate; see eq. (27). Δyt denotes annualized output growth as a deviation from the steady-state annualized growth rate 4(γ − 1). Δytgap is the annualized growth rate of output as a deviation from potential, i.e. 4(Δyt − Δytpot). The same definitions apply to hours worked. See the notes to Table 1 for further explanations.

Several important papers in the previous literature have stressed the merits of price level targeting as opposed to the standard inflation targeting loss function, see e.g. Vestin (2006). Price level targeting is a commitment to eventually bring back the price level to a baseline path in the face of shocks that create a trade-off between stabilizing inflation and economic activity. Our benchmark flexible inflation targeting objective in eq. (21) can be replaced with a price level targeting objective as follows:


where pt is the actual log price level in the economy and p̄t is the target log price level path, which grows with the steady-state net inflation rate π according to p̄t = π + p̄t−1. When we consider wage-level targeting we adopt a specification isomorphic to that in (D.44), but replace the first term with wt − w̄t, where wt is the actual nominal log wage and w̄t is the target nominal log wage, which grows according to w̄t = ln(γ) + π + w̄t−1, where γ is the gross technology growth rate of the economy (see Table A.1).
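A small simulation (with purely illustrative values of π and γ, not the paper’s estimates) confirms the mechanics of the two target paths: the log-price target drifts up by π per period, and the log-wage target by ln(γ) + π per period:

```python
import math

pi = 0.005     # quarterly steady-state net inflation rate (illustrative value)
gamma = 1.004  # gross technology growth rate (illustrative value)

p_bar = 0.0  # initial target log-price level (normalization)
w_bar = 0.0  # initial target log-wage level (normalization)
for t in range(8):
    p_bar = pi + p_bar                    # p_bar_t = pi + p_bar_{t-1}
    w_bar = math.log(gamma) + pi + w_bar  # w_bar_t = ln(gamma) + pi + w_bar_{t-1}

# After 8 quarters the price target has drifted by 8*pi and the wage
# target by 8*(ln(gamma) + pi).
assert abs(p_bar - 8 * pi) < 1e-12
assert abs(w_bar - 8 * (math.log(gamma) + pi)) < 1e-12
```

Because the targets are deterministic trends, a level-targeting central bank must undo any past deviation of pt (or wt) from its path, which is what distinguishes these objectives from inflation or wage-inflation targeting.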

In Table D.2, we report results for both price- and wage-level targeting objectives. As can be seen from the table, there are no welfare gains from pursuing price-level targeting relative to our benchmark objective in Table 2, regardless of whether one targets the output or the hours gap. For wage-level targeting, we obtain the same finding (in this case, the relevant comparison is the wage-inflation hours-gap specification in Table 3 which yields a CEV of 0.016). These findings are perhaps unsurprising, given that the welfare costs in our model are more associated with changes in prices and wages (because of indexation) than with accumulated price- and wage-inflation rates.

Table D.2:

Sensitivity Analysis: Merits of Price and Wage Level Targeting.

Note: The loss function under price-level targeting is given by (D.44), while the loss function with the nominal wage level is specified as Lta = (wt − w̄t)2 + λa xt2. See the notes to Table 1 for further explanations.


  • Adjemian, Stéphane, Matthieu Darracq Pariès, and Stéphane Moyen. 2008. “Towards a Monetary Policy Evaluation Framework.” ECB Working Paper Series No. 942.

  • Adolfson, Malin, Stefan Laséen, Jesper Lindé, and Mattias Villani. 2005. “The Role of Sticky Prices in an Open Economy DSGE Model: A Bayesian Investigation.” Journal of the European Economic Association Papers and Proceedings 3(2–3): 444–457.

  • Adolfson, Malin, Stefan Laséen, Jesper Lindé, and Lars E.O. Svensson. 2011. “Optimal Monetary Policy in an Operational Medium-Sized DSGE Model.” Journal of Money, Credit, and Banking 43(7): 1288–1331.

  • Adolfson, Malin, Stefan Laséen, Jesper Lindé, and Lars E.O. Svensson. 2012. “Monetary Policy Trade-Offs in an Estimated Open-Economy DSGE Model.” Journal of Economic Dynamics and Control, forthcoming.

  • Anderson, Gary, and George Moore. 1985. “A Linear Algebraic Procedure for Solving Linear Perfect Foresight Models.” Economics Letters 17(3): 247–252.

  • Angeloni, Ignazio, Anil K. Kashyap, Benoit Mojon, and Daniele Terlizzese. 2003. “The Output Composition Puzzle: A Difference in the Monetary Transmission Mechanism in the Euro Area and U.S.” Journal of Money, Credit, and Banking 35: 1265–1306.

  • Aoki, Kosuke. 2001. “Optimal Monetary Policy Responses to Relative-Price Changes.” Journal of Monetary Economics 48: 55–80.

  • Benigno, Pierpaolo, and Gianluca Benigno. 2008. “Implementing International Monetary Cooperation Through Inflation Targeting.” Macroeconomic Dynamics 12(1): 45–49.

  • Benigno, Pierpaolo, and Michael Woodford. 2012. “Linear-Quadratic Approximation of Optimal Policy Problems.” Journal of Economic Theory 147(1): 1–42.

  • Bernanke, Ben S. 2013. “A Century of U.S. Central Banking: Goals, Frameworks, Accountability.” Remarks at “The First 100 Years of the Federal Reserve: The Policy Record, Lessons Learned, and Prospects for the Future,” a conference sponsored by the National Bureau of Economic Research, Cambridge, Massachusetts.

  • Blanchard, Olivier, and Jordi Galí. 2007. “Real Wage Rigidities and the New Keynesian Model.” Journal of Money, Credit, and Banking 39(1): 35–65.

  • Bodenstein, Martin, Christopher J. Erceg, and Luca Guerrieri. 2008. “Optimal Monetary Policy with Distinct Core and Headline Inflation Rates.” Journal of Monetary Economics 55: S18–S33.

  • Bodenstein, Martin, James Hebden, and Ricardo Nunes. 2012. “Imperfect Credibility and the Zero Lower Bound.” Journal of Monetary Economics 59(2): 135–149.

  • Calvo, Guillermo A. 1983. “Staggered Prices in a Utility-Maximizing Framework.” Journal of Monetary Economics 12(3): 383–398.

  • Chen, Xiaoshan, Tatiana Kirsanova, and Campbell Leith. 2013. “How Optimal is US Monetary Policy?” Stirling Economics Discussion Papers 2013-05.

  • Christiano, Lawrence, Martin Eichenbaum, and Charles Evans. 2005. “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy.” Journal of Political Economy 113(1): 1–45.

  • Christiano, Lawrence, Roberto Motto, and Massimo Rostagno. 2010. “Financial Factors in Economic Fluctuations.” ECB Working Paper No. 1192.

  • Clarida, Richard, Jordi Galí, and Mark Gertler. 1999. “The Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature 37(4): 1661–1707.

  • Debortoli, Davide, and Aeimit Lakdawala. 2016. “How Credible is the Federal Reserve? A Structural Estimation of Policy Re-optimizations.” American Economic Journal: Macroeconomics 8(3): 42–76.

  • Debortoli, Davide, and Ricardo Nunes. 2014. “Monetary Regime Switches and Central Bank Preferences.” Journal of Money, Credit and Banking 46(8): 1591–1626.

  • Debortoli, Davide, Junior Maih, and Ricardo Nunes. 2014. “Loose Commitment in Medium-Scale Macroeconomic Models: Theory and Applications.” Macroeconomic Dynamics 18(1): 175–198.

  • Dotsey, Michael, and Robert King. 2005. “Implications of State Dependent Pricing for Dynamic Macroeconomic Models.” Journal of Monetary Economics 52: 213–242.

  • Edge, Rochelle. 2003. “A Utility-Based Welfare Criterion in a Model with Endogenous Capital Accumulation.” Finance and Economics Discussion Series 2003-66, Board of Governors of the Federal Reserve System (U.S.).

  • English, William, David López-Salido, and Robert Tetlow. 2013. “The Federal Reserve’s Framework for Monetary Policy: Recent Changes and New Questions.” Manuscript, Federal Reserve Board.

  • Erceg, Christopher J., Dale W. Henderson, and Andrew T. Levin. 1998. “Tradeoffs Between Inflation and Output-Gap Variances in an Optimizing-Agent Model.” International Finance Discussion Papers No. 627, Federal Reserve Board.

  • Erceg, Christopher J., Dale W. Henderson, and Andrew T. Levin. 2000. “Optimal Monetary Policy with Staggered Wage and Price Contracts.” Journal of Monetary Economics 46: 281–313.

  • Galí, Jordi. 2008. Monetary Policy, Inflation and the Business Cycle: An Introduction to the New Keynesian Framework. Princeton, NJ: Princeton University Press.

  • Galí, Jordi. 2011. “Monetary Policy and Unemployment.” In Handbook of Monetary Economics, vol. 3A, ed. Benjamin Friedman and Michael Woodford, 487–546. Elsevier B.V.

  • Galí, Jordi, Frank Smets, and Rafael Wouters. 2011. “Unemployment in an Estimated New Keynesian Model.” NBER Macroeconomics Annual: 329–360.

  • Ilbas, Pelin. 2012. “Revealing the Preferences of the U.S. Federal Reserve.” Journal of Applied Econometrics 27(3): 440–473.

  • Jensen, Christian, and Bennett T. McCallum. 2010. “Optimal Continuation versus the Timeless Perspective in Monetary Policy.” Journal of Money, Credit and Banking 42(6): 1093–1107.

  • Juillard, Michel, Philippe Karam, Douglas Laxton, and Paolo Pesenti. 2006. “Welfare-Based Monetary Policy Rules in an Estimated DSGE Model of the US Economy.” ECB Working Paper Series No. 613.

  • Justiniano, Alejandro, Giorgio E. Primiceri, and Andrea Tambalotti. 2013. “Is There a Trade-Off Between Inflation and Output Stabilization?” American Economic Journal: Macroeconomics 5(2): 1–31.

  • Kim, Jinill, and Dale W. Henderson. 2005. “Inflation Targeting and Nominal-Income-Growth Targeting: When and Why Are They Suboptimal?” Journal of Monetary Economics 52(8): 1463–1495.

  • Kim, Jinill, and Sunghyun H. Kim. 2007. “Two Pitfalls of Linearization Methods.” Journal of Money, Credit and Banking 39(4): 995–1001.

  • Kimball, Miles S. 1995. “The Quantitative Analytics of the Basic Neomonetarist Model.” Journal of Money, Credit and Banking 27(4): 1241–1277.

  • Klenow, Peter J., and Benjamin A. Malin. 2010. “Microeconomic Evidence on Price-Setting.” Chapter 6 in Handbook of Monetary Economics, ed. Benjamin M. Friedman and Michael Woodford. New York: Elsevier.

  • Laureys, Lien, Roland Meeks, and Boromeus Wanengkirtyo. 2016. “Should Banks Be Central to Central Banking? Optimal Monetary Policy in the Euro Area.” Manuscript, Bank of England.

  • Levin, Andrew T., Alexei Onatski, John C. Williams, and Noah M. Williams. 2005. “Monetary Policy Under Uncertainty in Micro-Founded Macroeconometric Models.” NBER Macroeconomics Annual 20: 229–312.

  • Levin, Andrew, David López-Salido, and Tack Yun. 2007. “Strategic Complementarities and Optimal Monetary Policy.” CEPR Discussion Papers No. 6423.

  • Levine, Paul, Peter McAdam, and Joseph Pearlman. 2008. “Quantifying and Sustaining Welfare Gains from Monetary Commitment.” Journal of Monetary Economics 55: 1253–1276.

  • Lucas, Robert. 1976. “Econometric Policy Evaluation: A Critique.” Carnegie-Rochester Conference Series on Public Policy 1: 19–46.

  • Lucas, Robert. 1987. Models of Business Cycles. New York: Basil Blackwell.