
Abstract

We reconsider the design of welfare-optimal monetary policy when financing frictions impair the supply of bank credit, and when the objectives set for monetary policy must be simple enough to be implementable and allow for effective accountability. We show that a flexible inflation targeting approach that places weight on stabilizing inflation, a measure of resource utilization, and a financial variable produces welfare benefits that are almost indistinguishable from fully-optimal Ramsey policy. The macro-financial trade-off in our estimated model of the euro area turns out to be modest, implying that the effects of financial frictions can be ameliorated at little cost in terms of inflation. A range of different financial objectives and policy preferences lead to similar conclusions.

I. Introduction

Microeconomic frictions between creditors and debtors are widely recognized as a principal amplifier, and an occasional source, of macroeconomic shocks. These financial factors matter for the transmission of monetary policy, for asset prices, and for macroeconomic stability (Adrian and Shin, 2011; Gertler and Kiyotaki, 2011). So it comes as little surprise that financial frictions have been found to have direct welfare costs that an optimal monetary policy should help to mitigate (Carlstrom, Fuerst, and Paustian, 2010; Cúrdia and Woodford, 2016). In practice, however, only a handful of inflation-targeting central banks say that policy is set with the need to stabilize financial factors in mind.1 And those central banks that do pay mind to financial stabilization in conducting their monetary policies—including Norway’s Norges Bank, the Reserve Bank of Australia, and the Bank of Korea—state their objective in broad terms, such as to counteract ‘the build-up of financial imbalances’.2 Broad objectives are helpful to the extent that they allow central banks flexibility in combatting perceived risks. But without a clearer sense of which frictions policy aims to address, and the trade-offs that combatting them entails, questions on the appropriateness of such policies unfortunately have few clear answers.

In this paper, we revisit the optimal design of welfare-based monetary policy in the presence of financial frictions that impair the supply of bank credit to firms. We focus on the realistic case where financial frictions contribute to inefficiently low output in steady state, where multiple nominal and real distortions may interact, and where there are multiple sources of disturbance. Little is known about how, if at all, monetary policy should operate to reduce financial volatility in this environment. But a recent paper by Debortoli and others (2018; hereafter, DKLN) offers reasons to suspect that a straightforward translation of monetary policy messages drawn from studies based on stylized model economies to an empirically-relevant setting may not be warranted. DKLN (2018) establish that in a standard medium-scale DSGE model, the optimal stabilization weight on output gap fluctuations is many times greater than in the stylized textbook model. Their result highlights the potential sensitivity of model-based guidance on optimal policies to precisely those conditions most likely to prevail in real-world policymaking. We build on their approach to investigate what financial frictions—modeled along the lines of Gertler and Karadi (2011)—imply about the optimal stabilization weight on various measures of financial imbalances, and so on the financial stabilization objectives that might be appropriate for central banks. Our results are based on a medium-scale DSGE model estimated on euro area data.3

Our main finding is that assigning a financial stabilization objective to monetary policy, alongside its traditional remit for inflation and output gap stabilization, yields welfare benefits comparable to those of the Ramsey policy. A ternary financial stabilization objective is therefore highly desirable, as it delivers welfare outcomes that are close to the best achievable, but with a remit that is easily codified and communicated. The key insight that our exercise provides is that financial frictions are welfare-relevant, but at the same time, the structure of the model, and the parameter estimates associated with that structure, imply that the macro-financial trade-off is modest. As a result, it is possible for monetary policy to act to moderate the distortions caused by financial frictions at only a small net cost in terms of the remaining nominal and real distortions in the model. The extended mandate is robust along a number of dimensions: (a) although in our baseline exercise the financial objective of policy is banks’ loan-to-deposit spread, similar results hold for a measure of leverage, and for smoothing risk-free rates; (b) our results do not depend on the existence of large financial shocks; and (c) the welfare benefit of financial stabilization is relatively insensitive to the precise weight the monetary policymaker might choose to place on the additional objective, so long as it is greater than zero.

Our paper also demonstrates that when a conventional flexible inflation targeting strategy (a dual mandate) is in place, central banks do best (in welfare terms) when they pursue an objective that is almost perfectly balanced between inflation and output gap stabilization. That finding shows that the result of DKLN (2018), that the dual mandate ‘makes sense’, carries over to a setting with financial frictions, and indeed to parameter estimates derived from euro area rather than US data. However, the dual mandate remains materially inferior to the Ramsey policy, and so to our financially-extended mandate. The intuition for our finding is that under the dual mandate, policy attempts to stabilize spreads by placing additional weight on output gap stabilization (relative to the case of a ternary mandate). That strategy works, to some extent, because output gap stabilization and credit spread stabilization are somewhat complementary. But the relationship is imperfect. As the output gap-inflation trade-off is unfavorable, given the presence of shocks to price and wage mark-ups in the data, inflation volatility is higher under the dual mandate, and welfare is lower.

In focusing on the role of welfare-optimal monetary policy in financial stabilization, we do not mean to suggest that other, perhaps more pressing considerations, should be excluded from monetary policy decisions in practice.4 Two sets of arguments are commonly made against directing monetary policy towards financial stabilization goals: that it is harmful, and that it is unnecessary. The first of these concerns, that it is harmful, is connected to the often-heard view that the credibility of the central bank may be harmed if pursuit of a financial objective is seen to undermine its ability to stabilize inflation. If attention is diverted from the inflation objective, so the argument goes, higher inflation volatility may lead agents to doubt the central bank’s target. Such doubts would be accentuated by limited public understanding of the more-complex policy framework that the ternary objective would entail (in particular, how the central bank trades off different objectives over time). Imperfect credibility is of real concern where monetary policy frameworks remain relatively new, or are under-developed, as is the case in some developing and emerging economies. On the other hand, countries with established inflation targeting frameworks routinely adopt a flexible approach that allows them to meet other objectives, as with the Norges Bank and the Reserve Bank of Australia, mentioned earlier. We are not able to settle the matter in this paper, as our analysis is predicated on the assumption of perfect credibility on the part of the central bank. However, we note that the same strong assumption underpins virtually all assessments of policy tradeoffs, and that compelling arguments for singling out financial conditions as a special case for which it is particularly inappropriate are not readily apparent.

The second set of arguments, that a financial stabilization role is unnecessary, rests on the observation that macroprudential frameworks have become increasingly common over the past decade. Macroprudential policy aims to short-circuit cyclical upswings in financial vulnerabilities, such as high leverage, which can amplify adverse shocks. Under a set of ideal circumstances, jointly optimal prudential policy can address financial frictions, leaving monetary policy free to minimize the distortions caused by nominal rigidities (Collard and others, 2017). But as things stand, macroprudential frameworks remain incomplete in many jurisdictions. Where macroprudential tools are used, their primary purpose has often been to ensure ‘through the cycle’ resilience, rather than being adjusted for cyclical reasons.5 Further, macroprudential powers often rest outside of central banks, raising difficult issues of policy coordination that are addressed in other papers (De Paoli and Paustian, 2017; Laureys and Meeks, 2018).

We must also mention a limitation of our paper. Financial frictions may produce conditions that lead to discrete episodes of ‘crisis’, in which a collapse in credit and economic activity can occur even in the absence of large disturbances. Financial crises appear to be associated with a gradual build-up of financial vulnerabilities, and models in which this dynamic can play out have been developed in several studies (Boissay, Collard, and Smets, 2016; Gourio, Kashyap, and Sim, 2018). The problem of whether and how to use monetary policy to reduce the incidence of financial crises, often termed ‘leaning against the wind’, turns on a complex cost-benefit analysis (see Filardo and Rungcharoenkitkul, 2016; Svensson, 2017). However, the financial frictions we study do not give rise to financial crises. A fruitful way to think about the results in the present paper is therefore that they apply to a monetary policy regime in which macroprudential policy has ensured that the system has a sufficient level of through-the-cycle financial resilience to make crises irrelevant for second-order welfare calculations.

Related Literature

This paper contributes to the literature that studies optimal monetary policy in the presence of financial frictions. Two approaches have been used. The first is based on commitments to optimal simple instrument rules. This literature seeks to understand whether such rules should include systematic feedbacks on financial factors when their aim is either to maximize social welfare or to minimize an ad hoc loss function which reflects the central bank’s mandate. Cúrdia and Woodford (2010) find that a Taylor rule augmented with variations in credit spreads can improve upon the standard Taylor rule, while a response to the quantity of credit is less likely to be helpful. Gambacorta and Signoretti (2014) report that a Taylor rule augmented with asset prices and credit can improve upon a standard Taylor rule. Gelain and Ilbas (2017) look at optimal simple monetary and macroprudential rules together, and consider the gains that might be achieved from setting policy instruments in a coordinated manner.

The second approach is concerned with the analysis of optimal control policies when policymakers aim to maximize social welfare, or an approximation thereof. Monacelli (2008) and De Fiore, Teles, and Tristani (2011) analyse the non-linear Ramsey problem, while other papers, including Carlstrom, Fuerst, and Paustian (2010), De Fiore and Tristani (2013), and Andrés, Arce, and Thomas (2013), adopt the linear-quadratic (LQ) approach. The LQ approach makes use of an approximation to social welfare that, in simple models, has the advantage of shedding light on what policymakers’ stabilization goals are. A consistent message from this branch of the literature is that a summary measure of financial conditions—for example, a lending spread, or the net worth of financially constrained agents—often appears to be of welfare relevance. (Precisely which measure depends on the nature of the frictions.) But it is also the case that after calibrating the models in question, the optimal weight on such measures frequently turns out to be (almost) inconsequentially small (Carlstrom, Fuerst, and Paustian, 2010; De Fiore and Tristani, 2013).6 In such cases, inflation volatility remains the principal source of welfare losses, as in standard textbook models, and strict inflation targeting remains (almost) optimal (Woodford, 2003). That quantitative result turns out to carry over to the case of models that include capital accumulation (Hansen, 2018).7

The analysis in this paper differs from the existing literature in two regards. First, this paper uses an alternative approach to address the question of optimal simple monetary policy design, as it considers whether it is welfare improving to assign a financial stabilization objective to monetary policy, alongside its traditional remit for inflation and output gap stabilization. Second, much existing work has considered small models that are analytically tractable, but which give an at-best stylized account of dynamics, and which assume fiscal measures are in place to ensure the economy has an efficient steady state. By contrast, this paper considers an economy in which multiple sources of real and nominal rigidity interact and that offers a coherent account of the data. The quantitative importance of financial frictions for the design of optimal monetary policy that we find within our framework confirms, in line with the findings by DKLN (2018) in a medium-scale DSGE model without financial frictions, that policy prescriptions based on small-scale models do not necessarily carry over to richer models.

Roadmap

The remainder of this paper is organized as follows. Section II sketches our DSGE model, with a full derivation appearing in Appendix A. Section III presents our estimates of the model’s parameters. The core of the paper is contained in Section IV, which sets out the approach to monetary policy design, and Section V, which presents the welfare results for the dual mandate, the ternary mandate, and robustness checks. Section VI concludes.

II. Model

The core of the framework we adopt is a standard New Keynesian model (for example, see Smets and Wouters, 2003, 2007), to which we have added financial intermediaries (‘banks’) in the manner of Gertler and Karadi (2011). The remainder of this section provides a brief overview of the main features of the model, while the full details of its derivation are provided in Appendix A.8 To close the model, the behavior of monetary policy also needs to be specified. This is discussed in Section IV.

A. Households

There is a continuum of households indexed by $j \in [0,1]$. Each household $j$ chooses consumption $C_t(j)$ and deposits $D_t(j)$, so as to maximize a standard utility function $U$ separable in consumption and hours worked $L_t(j)$:

$$\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \psi_t^b \left\{ \ln\left[C_t(j) - hC_{t-1}(j)\right] - \chi \frac{L_t(j)^{1+\varphi}}{1+\varphi} \right\}$$

subject to the budget constraint:

$$C_t(j) + D_t(j) = W_t(j)L_t(j) + R_{t-1}D_{t-1}(j) + T_t(j) + \Pi_t(j)$$

where $R_{t-1}$ is the gross real return from period $t-1$ to $t$, $W_t$ is the real wage, $T_t$ is lump-sum taxes, $\Pi_t$ is the net profit from ownership of both firms and banks, and $\psi_t^b$ is an inter-temporal preference shock that follows an AR(1) process.

B. Production

There are three types of firms in the economy: intermediate good producers, capital producers, and retailers. Intermediate good producers use capital and labor as input to produce goods that are used as input by retailers. Those retailers in turn produce differentiated retail goods, which end up being packaged into final goods. Capital producers use final goods to produce capital.

Intermediate good producers

These firms operate in a perfectly competitive market and produce goods using a technology represented by the production function $Y_t = A_t (U_t K_{t-1})^{\alpha} (L_t^d)^{1-\alpha}$, where $U_t$ is the utilization rate, $K_{t-1}$ is the amount of capital used in production at time $t$, $L_t^d$ is labor input, and $A_t$ is an aggregate technology shock that follows an AR(1) process.

Using an end-of-period stock convention, the timing of events runs as follows: at time $t-1$, firms acquire capital $K_{t-1}$ for use in production the following period. In order to finance the capital purchases each period, firms obtain funds from banks against perfectly state-contingent securities $S_{t-1}$, at a price of $Q_{t-1}$ per unit. They face no frictions in obtaining these funds. At the start of time $t$, shocks are realised. Firms choose the amount of labor input $L_t^d$, and how hard to work their machines (their utilization rate $U_t$). After production in period $t$, they sell back the capital they have used to capital goods firms: undepreciated capital is sold back at the price $Q_t$; depreciated capital is gone.

Conditional on their choice of capital, the firm’s profit maximization problem at time t is thus:

$$\max_{L_t^d,\,U_t}\; \frac{P_{m,t}}{P_t} A_t (U_t K_{t-1})^{\alpha} (L_t^d)^{1-\alpha} - a(U_t)K_{t-1} - W_t L_t^d$$

where $P_{m,t}$ is the price of the intermediate goods, $P_t$ is the price of final goods, and $a(U_t)$ are the utilization costs of capital expressed in terms of final goods.9

Capital producers

Capital producing firms are owned by households and operate in a perfectly competitive market. They take It units of final goods and transform them into new capital goods according to the technology:

$$K_t = (1-\delta)K_{t-1} + \psi_t^x \left[1 - \frac{\kappa}{2}\left(\frac{I_t}{I_{t-1}} - 1\right)^2\right] I_t$$

where $\delta$ is the capital depreciation rate, and $\psi_t^x$ is an investment-specific technology shock that follows an AR(1) process. The capital producers sell the newly built capital to the intermediate good producers at price $Q_t$. The latter is determined endogenously because of investment adjustment costs. The objective of a capital producer is to choose $I_t$ so as to maximize the present value of expected profits:

$$\mathbb{E}_t \sum_{k=0}^{\infty} \Lambda_{t,t+k} \left\{ Q_{t+k}\,\psi_{t+k}^x \left[1 - \frac{\kappa}{2}\left(\frac{I_{t+k}}{I_{t+k-1}} - 1\right)^2\right] I_{t+k} - I_{t+k} \right\}$$

where $\Lambda_{t,t+k} \equiv \beta^k U_{c,t+k}/U_{c,t}$ is the household’s stochastic discount factor.
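For concreteness, the capital accumulation technology can be sketched in a few lines of code (the parameter values below are illustrative, not our calibrated estimates):

```python
# Illustrative sketch of capital accumulation with investment adjustment
# costs: K_t = (1 - delta) K_{t-1} + psi_x [1 - (kappa/2)(I_t/I_{t-1} - 1)^2] I_t.
# All parameter values here are for illustration only.

def new_capital(k_prev, i_t, i_prev, delta=0.025, kappa=4.0, psi_x=1.0):
    """Next-period capital stock given previous capital and investment."""
    adj_cost = (kappa / 2.0) * (i_t / i_prev - 1.0) ** 2
    return (1.0 - delta) * k_prev + psi_x * (1.0 - adj_cost) * i_t

# When investment is constant (I_t = I_{t-1}), the adjustment cost vanishes
# and all of I_t is transformed into new capital.
k = new_capital(k_prev=10.0, i_t=0.25, i_prev=0.25)
```

Note that changing the level of investment is costly only when investment *growth* is non-zero, which is what makes the price of capital $Q_t$ respond endogenously to investment dynamics.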

Retailers

There is a continuum of retailers indexed by r ∈ [0,1] that are owned by households and operate in a monopolistically competitive environment. Each retailer r produces a differentiated good by transforming one unit of intermediate output into one unit of retail output. These differentiated retail goods are packaged by goods aggregators into a composite, i.e. the final good, using a CES production function. Profit maximization by the goods aggregators, who operate in a perfectly competitive market, gives rise to the following demand for each variety of retail good r:

$$Y_t(r) = \left(\frac{P_t(r)}{P_t}\right)^{-\varepsilon} Y_t^d \tag{1}$$

where:

$$Y_t^d = \left[\int_0^1 Y_t(r)^{\frac{\varepsilon-1}{\varepsilon}}\,dr\right]^{\frac{\varepsilon}{\varepsilon-1}} \qquad\text{and}\qquad P_t = \left[\int_0^1 P_t(r)^{1-\varepsilon}\,dr\right]^{\frac{1}{1-\varepsilon}}$$

The prices of retail goods can be reset in each period with probability $1-\gamma$. When they cannot be reset, they are partially indexed to past price inflation. Retailers choose their price $P_t^*(r)$ so as to maximize their profit:

$$\max_{P_t^*(r)}\; \mathbb{E}_t \sum_{k=0}^{\infty} \gamma^k \Lambda_{t,t+k} \left\{ \left(\frac{X_{t+k}\,P_t^*(r)}{P_{t+k}} - MC_{t+k}\right) Y_{t+k}(r) \right\}$$

subject to the demand for retail good $r$ given by Eq. (1), and where $MC_t = P_{m,t}/P_t$ is the real marginal cost, and:

$$X_{t+k} \equiv \begin{cases} \prod_{s=1}^{k} \Pi_{t+s-1}^{\iota} & \text{if } k \geq 1 \\ 1 & \text{if } k = 0 \end{cases}$$

where $\Pi_t \equiv P_t/P_{t-1}$. After solving the problem, a price mark-up shock, which follows an ARMA(1,1) process, is introduced.10
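The indexation factor $X_{t+k}$ simply compounds past gross inflation rates raised to the indexation parameter; a minimal sketch, with an illustrative value for $\iota$:

```python
# Sketch of the price indexation factor X_{t+k} = prod_{s=1..k} Pi_{t+s-1}^iota.
# The value of the indexation parameter iota below is illustrative only.

def indexation_factor(past_inflation, iota=0.2):
    """past_inflation holds gross rates Pi_t, ..., Pi_{t+k-1}; returns X_{t+k}."""
    x = 1.0
    for pi in past_inflation:
        x *= pi ** iota
    return x

# k = 0: no indexation has yet applied, so X_t = 1 by definition.
assert indexation_factor([]) == 1.0
```

With full indexation ($\iota = 1$) the non-reset price tracks cumulative past inflation exactly; with $\iota = 0$ it stays fixed.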

C. Labor market and wage setting

The labor input of the intermediate good producers is a CES composite of household labor types. Labor aggregators, who operate in a perfectly competitive market, hire the labor supplied by each household j, package it, and sell it to the intermediate goods firms. Profit maximization by the labor aggregators gives rise to the following demand for each type of labor j:

$$L_t(j) = \left(\frac{W_t(j)}{W_t}\right)^{-\varepsilon_w} L_t^d \tag{2}$$

where:

$$L_t^d = \left[\int_0^1 L_t(j)^{\frac{\varepsilon_w-1}{\varepsilon_w}}\,dj\right]^{\frac{\varepsilon_w}{\varepsilon_w-1}} \qquad\text{and}\qquad W_t = \left[\int_0^1 W_t(j)^{1-\varepsilon_w}\,dj\right]^{\frac{1}{1-\varepsilon_w}}$$

Nominal wages are sticky and can be reset in each period with probability $1-\gamma_w$. When they cannot be reset, they are partially indexed to past price inflation. Each household $j$ chooses their wage $W_t^*(j)$ so as to maximize their utility:

$$\max_{W_t^*(j)}\; \mathbb{E}_t \sum_{k=0}^{\infty} (\beta\gamma_w)^k \left\{ U_{c,t+k}(j)\,\tilde{X}_{t+k}\,W_t^*(j) L_{t+k}(j) - \psi_{t+k}^b\,\chi\,\frac{L_{t+k}(j)^{1+\varphi}}{1+\varphi} \right\}$$

subject to the demand for their labor type (equation (2)), and where:

$$\tilde{X}_{t+k} \equiv \begin{cases} \displaystyle\prod_{s=1}^{k} \frac{\Pi_{t+s-1}^{\iota_w}}{\Pi_{t+s}} & \text{if } k \geq 1 \\ 1 & \text{if } k = 0 \end{cases}$$

After solving the problem, a wage mark-up shock, which follows an ARMA(1,1) process, is introduced.11

D. Banks

The banking sector is modelled following Gertler and Karadi (2011). Banks are special in this economy as bank deposits are the sole vehicle for direct household saving, and bank loans are required by intermediate good firms for the purchase of capital. On the asset side of their balance sheets, each bank $i$ holds state-contingent claims on capital employed by firms (‘primary securities’, denoted by $S_t(i)$) which have mark-to-market value $Q_t$ (also the relative price of capital goods). They fund their assets with deposits obtained from households $D_t(i)$, and their own internal net worth $N_t(i)$. Their balance sheet identity at the end of period $t$ is therefore: $Q_t S_t(i) = D_t(i) + N_t(i)$.

Over time, the bank accumulates net worth from the spread earned between returns on assets and the risk-free interest paid on deposits. So net worth can be expressed as:

$$N_t(i) = (R_{s,t} - R_{t-1})\,Q_{t-1}S_{t-1}(i) + R_{t-1}N_{t-1}(i)$$

where $R_{s,t}$ is the gross return on a unit of the bank’s assets from period $t-1$ to $t$, given by the return on capital.

Banks are ultimately owned by households and run by household members known as ‘bankers’. When they start a bank, bankers receive a transfer of resources from their ‘home’ household in proportion $\xi$ to existing bank assets, which forms their initial inside stake in the enterprise. Bankers are replaced by ‘new management’ with probability $1-\sigma$ each quarter, to prevent them from accumulating sufficient net worth over time to fund all investment internally. Upon exiting, bankers transfer their accumulated funds back to the home household. The banker’s objective is therefore to choose the size of the bank’s balance sheet so as to maximize the expected present value of the future payout to the home household:

$$V_t(i) = \max\,\mathbb{E}_t \sum_{k=0}^{\infty} (1-\sigma)\sigma^k \Lambda_{t,t+1+k} \left[N_{t+1+k}(i)\right]$$

However, in choosing how much to lend, the bank is constrained by the behavior of depositors.12 They place limits on the quantity of deposit funding they are willing to extend because they are aware that bankers can take a hidden action to divert resources for their own benefit, an action which will result in the bank going out of business. The extent of the private benefits bankers can enjoy is proportional to the overall size of their balance sheet. Incentive compatibility on the part of bankers requires that the ‘going concern’ value of the bank ($V$)—the expected present value of the bank if it remains in business—exceeds the private liquidation value of the bank:

$$V_t(i) \geq \theta_t Q_t S_t(i)$$

The parameter $\theta_t$ determines the fraction of the bank’s assets that can be ‘diverted’ by the banker. We allow it to vary according to a stationary AR(1) process. In equilibrium, the incentive constraint binds, implying that the bank’s balance sheet is constrained by its net worth:

$$Q_t S_t(i) = \phi_t N_t(i)$$

The leverage ratio of the bank, $\phi_t$, depends endogenously on the current state of the economy. Finally, after aggregating over continuing and entering bankers, banking system net worth can be shown to evolve as:

$$N_t = (\sigma + \xi)\,R_{s,t}\,Q_{t-1}S_{t-1} - \sigma R_{t-1}D_{t-1}$$

(see Appendix A.4).
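The balance sheet identity, the leverage relation, and the net worth recursion can be combined in a small numerical sketch (all figures below are illustrative, not the calibrated values of Section III):

```python
# Illustrative sketch of the banking-sector identities: balance sheet
# QS = D + N, leverage phi = QS/N from the binding incentive constraint,
# and the aggregate net worth recursion N_t = (sigma + xi) R_s QS - sigma R D.
# All numbers are illustrative, not the paper's calibrated values.

def net_worth_next(q_s, d, r_s, r, sigma=0.95, xi=0.002):
    """Aggregate bank net worth next period, given assets QS and deposits D."""
    return (sigma + xi) * r_s * q_s - sigma * r * d

q_s, n = 100.0, 45.0      # assets Q_{t-1} S_{t-1} and net worth N_{t-1}
d = q_s - n               # deposits implied by the balance sheet identity
leverage = q_s / n        # phi: assets supported per unit of net worth
n_next = net_worth_next(q_s, d, r_s=1.015, r=1.005)
```

The spread between the return on assets and the deposit rate, scaled by leverage, is what drives net worth accumulation; a fall in asset returns erodes net worth and, through the binding constraint, forces a contraction in lending.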

E. Market clearing conditions and aggregation

The market clearing conditions for the economy are:

$$\begin{aligned} Q_t K_t &= Q_t S_t \\ Y_t^d &= Y_t/\Delta_t^p = C_t + I_t + a(U_t)K_{t-1} + G_t \\ L_t^d &= L_t/\Delta_t^w \end{aligned}$$

where $\Delta_t^p$ is a measure of price dispersion, $\Delta_t^w$ is a measure of wage dispersion, and $G_t$ is a government spending shock that follows an AR(1) process.

III. Data and Estimation

We estimate the model set out in Section II using macroeconomic data from the area-wide model database (Fagan, Henry, and Mestre, 2005), and financial data from the ECB’s Statistical Data Warehouse and Bank of America Merrill Lynch. Our sample period runs from 1980Q1 to 2016Q4. We apply a number of data transformations prior to estimation, summarized in Table 1. The macro data for GDP, consumption, and investment are transformed to a per-capita basis by dividing them by the labor force. As the euro area does not have data on hours worked, a variable that appears in the model, we use employment (expressed as a proportion of the population) as an observable instead, and map it into hours worked using the same approach as Smets and Wouters (2003).13 Wages are deflated with the GDP deflator. We follow Smets and Wouters (2003) in linearly detrending real variables. We linearly detrend the nominal rate and inflation separately, to take into account the fall in trend inflation and the neutral real interest rate over the sample.14 To close the model for the purposes of estimation, we assume monetary policy is conducted according to a standard Taylor rule.
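The linear detrending step amounts to regressing each series on a constant and a time trend and keeping the residual; a minimal sketch:

```python
# Minimal sketch of linear detrending: fit a constant plus linear time
# trend by least squares, return the residual.
import numpy as np

def linear_detrend(series):
    """Regress the series on a constant and linear trend; return the residual."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, deg=1)  # highest degree first
    return series - (slope * t + intercept)

# A pure linear trend detrends to (numerically) zero.
resid = linear_detrend(0.5 + 0.01 * np.arange(100))
```

Detrending the nominal rate and inflation separately, as the text describes, allows each to have its own trend decline rather than imposing a common one.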

Table 1.

Data series, data transformations, and assumptions on measurement error

Notes: Sample runs 1980Q1 to 2016Q4, except for the high yield bond spread, which is quarterly from 1998Q1. A ∆ in the transform column indicates a first difference.

The statistical model features eight orthogonal structural shocks, and three measurement errors (explained below). The structural shocks affect (i) total factor productivity (TFP), (ii) preferences, (iii) investment-specific technology, (iv) government spending, (v) final goods price markups, (vi) wage markups, (vii) bank funding, and (viii) monetary policy.15 All the shocks follow AR(1) processes, except the two markup shocks, which are driven by ARMA(1,1) processes. Following Boivin and Giannoni (2006), we map multiple observable series into certain model counterparts.16 Specifically, we allow both HICP and GDP deflator inflation to map into inflation ($\Pi$) in the model, and bank lending rates and the yields associated with two corporate bond indexes to map into banks’ return on assets ($R_s$). The structure imposed upon these observable variables therefore implies two common factors (one for each group) and three idiosyncratic disturbances (‘measurement errors’), the number of observable series minus the number of common factors. Additionally, we estimate loading factors, as the yields and bank lending rates have different volatilities. This largely reflects the differences in the underlying assets, from bank loans, which are often collateralized, to volatile high-yield corporate bonds.17
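The measurement block can be sketched as follows: several observable series load on a single model concept, with estimated loadings and idiosyncratic errors (the loadings and noise scale below are illustrative, not estimated values):

```python
# Sketch of the Boivin-Giannoni style measurement block: three observable
# rate series load on one model concept (banks' return on assets), each
# with its own loading and idiosyncratic error. Loadings and noise scale
# are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T = 200
rs = rng.normal(size=T)                  # model concept: banks' return on assets
loadings = np.array([1.0, 1.4, 2.5])     # e.g. lending rates vs. bond yields (illustrative)
errors = 0.1 * rng.normal(size=(3, T))   # idiosyncratic 'measurement errors'
observables = loadings[:, None] * rs + errors
```

Each row of `observables` is one data series; the common-factor structure is what allows several noisy series to jointly inform a single model variable, while the estimated loadings absorb their different volatilities.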

We calibrate parameters that are poorly identified, or that primarily determine the steady state of the model (Table 2). The capital share $\alpha$, the steady state labor supply, and the inverse of the Frisch elasticity of labor supply are set to the literature-standard values of 0.33, 1/3, and 1, respectively.18 The depreciation rate $\delta$ is set to 0.178 so that the steady state investment to GDP ratio is equal to the sample average.19 Likewise, the sample average of the government spending and net exports to GDP ratio pins down the steady state $G$. The elasticities of substitution in the goods and labor markets are set to 4.33 to match a 30% steady state markup, and the banks’ survival rate $\sigma$ is calibrated so that the average time taken to disburse the bank’s net worth is 16 quarters. The sample average of the real interest rate of 2.48% is used to calibrate the discount factor $\beta$, and the sample average for investment-grade corporate bonds determines the steady state spreads. Details of the steady state computations may be found in Appendix C.
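Two of these calibration targets reduce to simple arithmetic, sketched below (the quarterly compounding convention used for $\beta$ is our assumption, for illustration):

```python
# Back-of-envelope checks on two calibration targets. The markup-elasticity
# mapping is standard; the compounding convention for beta is our
# illustrative assumption.

markup = 1.30
epsilon = markup / (markup - 1.0)    # markup = eps/(eps - 1)  =>  eps ~ 4.33

r_annual = 0.0248                    # sample-average annual real rate
beta = (1.0 + r_annual) ** (-0.25)   # quarterly discount factor
```

A 30% markup pins the elasticity at $\varepsilon = 1.3/0.3 \approx 4.33$, matching the calibrated value, and a 2.48% annual real rate implies a quarterly $\beta$ just below 0.994 under this convention.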

Table 2.

Calibrated parameters and steady state targets

Notes: Appendix C contains detailed expressions for the steady state.

To obtain estimates for the remaining parameters, we apply the Bayesian likelihood-based techniques described in the textbook treatment of Herbst and Schorfheide (2016). This requires us to specify prior distributions for the estimated structural parameters, which are described in Table 3. The priors mostly follow Smets and Wouters (2003). The posterior distributions were obtained via MCMC, using two chains of 300,000 iterations each and discarding the first 150,000 as burn-in.20 Noteworthy differences between our posterior estimates and those of Smets and Wouters (2003) include the lower values for price and wage stickiness, and for indexation. This difference could arise from our choice of an ARMA(1,1) structure for the price and wage mark-up shocks, where they use an i.i.d. structure. We elect to use this structure because the additional MA term better captures high-frequency movements (Smets and Wouters, 2007), while the AR component allows for some persistence (absorbing some of the persistence that works through indexation in Smets and Wouters (2003)). This enables us to differentiate the persistence of cost-push shocks to inflation from ‘intrinsic’ inflation persistence (Fuhrer, 2006). As inflation persistence affects the output-inflation stabilization trade-off, correctly differentiating it is important for our optimal policy exercises.

Table 3.

Prior and posterior distributions of estimated parameters


Certain estimated parameters require additional discussion as they do not feature in Smets and Wouters (2003). The estimate for the steady state leverage ratio $S/N$ is key. It determines the steady state value of $\theta$, which governs banks’ ability to pledge assets to their creditors, and the parameter $\xi$, which (together with the survival probability $\sigma$) pins down the net rate of transfer of resources between banks and households (see Eq. (C.1)-(C.1)). We chose a prior leverage ratio of 4, as in Gertler and Karadi (2011) and Meeks, Nelson, and Alessandri (2017). This may appear low compared to the accounting ratios typically reported by euro area banks. However, the discrepancy is reduced by focusing on the real economy credit components of the balance sheet, and considering the narrow definition of regulatory capital used in published leverage ratio calculations. Our posterior estimate of steady state leverage is 2.2, which is the value that best accounts for the variability of lending spreads seen in the data.

IV. Monetary Policy Design

The statutory objectives set for central banks are often defined in broad terms.21 To be made operational, such objectives must be translated into quantitative form. Once a quantitative target has been set, policymakers must also choose how to regard deviations from their targets. When objectives are stated as symmetric around the target, it is natural to formalize the central bank’s mandate as a quadratic loss function.22 In what follows, we will be concerned with determining the weights that should be attached to individual terms in the loss function from a welfare perspective. These weights will be said to constitute the central bank’s mandate, as they capture the principal’s stabilization preferences by quantifying the rates at which the central bank should trade off stabilization of one objective against another.

Formally, a mandate will be defined by the vector of coefficients M = (λ,ω) in the per-period loss function:

$$L_t(M) = \hat{\pi}_t^2 + \lambda\,\hat{x}_t^2 + \omega\,\hat{f}_t^2 \tag{3}$$

where $\hat{\pi}_t$ denotes (annualized) price inflation, $\hat{x}_t$ is a measure of resource utilization, and $\hat{f}_t$ is a financial variable (to be elaborated on below), all in terms of log-deviations from steady state. The first two terms are standard, and carry standard weights: that on inflation is normalized to 1; then $\lambda \in [0,\infty)$ captures policymakers’ relative preference for resource utilization versus inflation stabilization (see Clarida, Galí, and Gertler, 1999). The final term, in ‘financial volatility’, and its weight $\omega \in [0,\infty)$, are non-standard, and will be the focus of our attention in much of what follows. The presence of a non-zero $\omega$ in the central bank’s loss function implies a preference for smoothing the path of some financial factor as a policy goal in itself.
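For concreteness, the per-period loss in Eq. (3) can be written out directly (the weights below are illustrative, not the optimized values discussed later):

```python
# Per-period loss L_t(M) = pi^2 + lambda * x^2 + omega * f^2 (Eq. 3).
# The default weights are illustrative, not the optimized mandate.

def period_loss(pi_hat, x_hat, f_hat, lam=1.0, omega=0.1):
    """Quadratic loss in log-deviations of inflation, utilization, and a financial variable."""
    return pi_hat ** 2 + lam * x_hat ** 2 + omega * f_hat ** 2

# Strict inflation targeting is the special case lam = omega = 0.
strict = period_loss(0.5, 1.0, 2.0, lam=0.0, omega=0.0)
```

Setting $\omega = 0$ recovers the conventional dual mandate; the paper’s exercise is precisely to ask what value of $\omega$ (and $\lambda$) the welfare criterion selects.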

To provide quantitative guidance on the welfare-optimal design of monetary policy, we need to compare the performance of alternative mandates. Ranking alternative mandates requires a common yardstick, which we take to be social welfare. To compute social welfare, we follow the approach taken by DKLN (2018). First, we derive a purely quadratic approximation of the representative household’s utility by applying the method of Benigno and Woodford (2012). The approximation is given by:

\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \, U(X_t) \approx -\frac{1}{2} \, \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left[ X_t' W^H X_t \right] + \text{t.i.p.} \qquad (4)

where X_t is a vector of model variables, including relevant leads and lags, W^H is a matrix of welfare weights for the representative household that potentially depends upon all the parameter values of the model, and 't.i.p.' denotes a constant term independent of policy.

The values taken by the economic variables X_t depend upon the monetary policy P put in place by the central bank. We think of P as a function of X_t that is a choice object for policymakers, and which closes the model—for example a simple instrument rule, or a targeting rule for optimal policy (Svensson, 2003). The average, or unconditional expected, loss for households under a policy P, based on Eq. (4), is given by:23

\text{Loss}^{P} \equiv \frac{1}{2} \, \mathbb{E}\left[ X_t(P)' W^H X_t(P) \right] + \text{t.i.p.} = \frac{1}{2} \operatorname{trace}\left[ W^H \Sigma(P) \right] + \text{t.i.p.}

where the second equality follows from the identity x'Ax = \operatorname{trace}(A x x') and the observation that both \operatorname{trace}[\cdot] and \mathbb{E}[\cdot] are linear operators. The term Σ(P) is the variance-covariance matrix of the model variables under the policy P.
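The trace formula lends itself to direct computation: given a welfare-weight matrix and the variance-covariance matrix implied by a policy, the unconditional loss is half the trace of their product. A minimal sketch, with small illustrative matrices standing in for the model's W^H and Σ(P):

```python
import numpy as np

def unconditional_loss(W, Sigma):
    """Average loss (1/2) E[X' W X] = (1/2) trace(W Sigma), up to t.i.p. terms."""
    return 0.5 * np.trace(W @ Sigma)

# Illustrative two-variable example: diagonal welfare weights and a
# variance-covariance matrix Sigma(P) implied by some policy P.
W = np.diag([1.0, 0.5])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
loss = unconditional_loss(W, Sigma)  # = 0.5 * (1.0*0.04 + 0.5*0.09)
```

With diagonal weights only the variances matter; off-diagonal welfare weights would also pick up the covariance terms of Σ(P).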

The period utility of the central bank, given its mandate M and an arbitrary policy P, can be expressed as:

L_t(M) \equiv X_t(P)' W^M X_t(P)

(since its utility is quadratic), and where by assumption W^M is very sparse ('simple'). From now on, we will assume the central bank's policy is set optimally under commitment (the 'optimal control policy'). That is, the central bank selects its policy P, and therefore \{X_t(P)\}_{t=0}^{\infty}, so as to minimise the loss associated with its mandate M:

\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left[ X_t(P)' W^M X_t(P) \right]

subject to the linearized equilibrium conditions of the decentralized allocation (Eqs. (8)-(34)). The M-optimal control policy is denoted P*.

We now wish to evaluate the performance of mandate M in terms of social welfare. We do this by effecting a comparison between the M-optimal policy and the Ramsey optimal (or H-optimal) policy. In general, the policy P* that is optimal for M will not coincide with R, the policy that is optimal from the viewpoint of social welfare, since W^M and W^H do not coincide. The difference in social welfare between R and P* is given by:

\text{Loss}^{R} - \text{Loss}^{P^*} = \frac{1}{2} \operatorname{trace}\left[ W^H \Sigma(R) \right] - \frac{1}{2} \operatorname{trace}\left[ W^H \Sigma(P^*) \right]

This difference cannot be positive, since the Ramsey policy by construction produces the best achievable social welfare outcome. The more negative it is, the worse the performance of the central bank’s mandate in terms of social welfare. Throughout the paper, we express the under-performance of a mandate in terms of ‘consumption equivalent variation’ (CEV) units, the percentage reduction in households’ lifetime consumption that they would need to suffer in order to leave them indifferent between the allocation under mandate M and the Ramsey allocation R.
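As an illustration of the CEV metric, suppose (purely for this sketch; this is not necessarily the paper's exact utility specification) that period utility is logarithmic in consumption. A permanent consumption cut of fraction δ then shifts lifetime utility by ln(1-δ)/(1-β), so the CEV implied by a given welfare gap between Ramsey and mandate M can be computed as:

```python
import math

def cev_from_welfare_gap(welfare_gap, beta):
    """CEV in percent: the permanent consumption cut that closes a lifetime
    welfare gap (W_Ramsey - W_M >= 0), assuming log utility in consumption."""
    # ln(1 - delta) / (1 - beta) = -welfare_gap  =>  delta = 1 - exp(-(1 - beta) * gap)
    return 100.0 * (1.0 - math.exp(-(1.0 - beta) * welfare_gap))
```

The mapping is increasing in the welfare gap and bounded between 0 and 100 percent, so larger gaps between a mandate and Ramsey always translate into larger CEV losses.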

V. Quantitative Results

A. Dual mandate

We turn now to our quantitative assessment of optimal policy in the model economy set out above. An interesting special case is that of the dual mandate. The dual mandate directs the central bank to stabilize both inflation and the real economy, which corresponds to a mandate MIT = (λ,0). Such arrangements are typical in flexible inflation targeting (IT) regimes and, because the Federal Reserve explicitly has a dual mandate, they were the focus of investigation in DKLN (2018). Looking at the dual mandate afresh in the context of a model with financial frictions, and estimated on euro area data, therefore provides a useful point of comparison with their work.

Fig. 1 plots the welfare losses of a dual mandate relative to Ramsey, expressed in CEV, as a function of policymakers' relative preferences over inflation versus output gap stabilization. The figure shows that strict inflation targeting (no weight on output gap stabilization, λ = 0) performs poorly relative to Ramsey, with welfare losses amounting to 1% of consumption. Extreme dovishness (a high weight on output gap stabilization, λ ≫ 1) is also suboptimal, albeit to a lesser extent. However, welfare increases when policymakers pursue a more balanced approach. Losses are minimized when the value of λ is 1.005, very close to the optimal value of 1.042 reported by DKLN (2018) for the Smets and Wouters (2007) model. That a 'balanced mandate' which places equal weight on inflation and output gap stabilization produces welfare outcomes close to those of a policymaker who directly aims at maximising welfare is a clear departure from the case of the prototypical small-scale New Keynesian model.24

Figure 1.

Welfare losses of a dual mandate relative to Ramsey

Citation: IMF Working Papers 2020, 244; 10.5089/9781513561172.001.A001

Note: The figure shows the welfare losses of a dual mandate relative to Ramsey, expressed in CEV units (%), as a function of policymakers’ preferences over output gap stabilization (λ). The black diamond shows the results under the optimal dual mandate (minimum loss).

The factors underlying the balanced mandate result may be understood by examining policy frontiers (or 'Taylor frontiers', after John B. Taylor, 1979) for welfare-relevant variables. Policy frontiers trace out the best achievable (minimum) levels of volatility in a pair of variables for different values of λ.25 Fig. 2 shows the trade-off between inflation and output gap stabilization. The frontier takes the conventional convex shape, and is particularly unfavorable—in the sense that a given reduction in inflation volatility requires a larger increase in output gap volatility—for values of λ below the optimal λ = 1.005 (diamond). It is noteworthy that the Ramsey outcome (square) lies well inside the efficient frontier, indicating that that particular combination of output gap and inflation volatility would be achievable, but is not desirable, under mandate MIT. This follows because the Ramsey policy takes account of all the welfare-relevant sources of volatility in the economy, not only those related to inflation and the output gap. The structure of the economy implies that reducing volatility elsewhere necessarily entails higher volatility of inflation and the output gap.

Figure 2.

The monetary policy frontier under a dual mandate


Note: The figure shows the best achievable combinations of the standard deviation of the output gap (x-axis) and the standard deviation of inflation (y-axis) for preferences for output gap stabilization (λ) in the monetary authority’s dual mandate. The values of λ range from 0 (strict inflation targeting) to 2.4 (extreme dovishness). The black diamond shows the result under the optimal dual mandate. The black square shows the result under Ramsey.

Output gap stabilization is desirable because the output gap turns out to be a good proxy for the stabilization of other welfare-relevant variables. As is well known, volatility in nominal wages is important for welfare because households dislike the resulting variability in their labor supply (Erceg, Henderson, and Levin, 2000). Fig. 3 panel (b) shows that the volatility of wage inflation decreases (almost) monotonically with that of the output gap as λ is increased towards λ (i.e. as inflation targeting becomes more ‘flexible’). As a result, the lower welfare losses that can be achieved by putting more weight on output gap stabilization, relative to price inflation, are not only driven by a reduction in output gap volatility but also by a reduction in the volatility of nominal wages. DKLN (2018) report the same finding in the Smets and Wouters (2007) model, and our parallel result—which is largely expected given the similar nominal frictions—partly accounts for the similar optimal policy prescriptions reported above.

Figure 3.

The volatility of welfare-relevant variables under a dual mandate


Note: Panel (a) on the left plots the volatility of the loan-to-deposit spread against the volatility of the output gap (x-axis) as the policymaker's weight on her output gap stabilization objective (λ) varies. Panel (b) on the right plots the same information for nominal wage inflation. The values of λ range from 0 (strict inflation targeting) to 2.4 (extreme dovishness). The black diamond shows the results under the optimal dual mandate. The black square shows the results under Ramsey.

The welfare costs associated with imperfections in the credit market are also relevant. A key indicator of the extent of financial frictions on banks is the loan-to-deposit spread. Higher spreads imply a higher shadow price on the constraint, or equivalently a higher marginal value of net worth. In turn, the tightness of the financial constraint imposes welfare costs to the extent that it distorts the allocations of labor and capital in production. Fig. 3 panel (a) shows that for almost all values of λ, excluding the case of near-strict inflation targeting, output gap and credit spread stabilization go hand-in-hand. In this case, there is no trade-off between them.

Summary of findings on the dual mandate

In summary, the dual mandate focuses stabilization efforts on the output gap and inflation. The structure of the economy implies a trade-off between these variables. Strict inflation targeting produces poor welfare outcomes in this rich and empirically coherent model, in spite of being near-optimal in a simplified textbook environment. As in DKLN (2018), we find that an almost perfectly ‘balanced’ dual mandate performs best. Ramsey policy aims to stabilize many variables, but under the dual mandate the policymaker’s focus is on inflation and the output gap alone. Although fluctuations in wage inflation and credit spreads correlate with those in the output gap, the relationship is imperfect. As a result, even the optimal dual mandate is materially inferior, in welfare terms, to the Ramsey policy.

B. A financial extension to the mandate

The central question addressed by this paper is whether a simple monetary policy aimed at maximising welfare can safely disregard financial frictions present in the banking system (over and above their effect on inflation and output volatility), which is the position in most IT regimes. In this section we demonstrate that it cannot. We compare the welfare performance of the optimal dual mandate MIT with that of a mandate that includes a ternary objective. The extended mandate we consider, denoted MFF = (λ,ω) for ‘financial frictions’, places a non-zero weight on the loan-to-deposit spread.26

The optimal extended mandate leads to notably smaller welfare losses relative to Ramsey than the optimal dual mandate. Table 4 indicates a loss in CEV terms of 0.05% under MFF versus 0.13% for MIT. In our economy, the presence of financial frictions makes lending spreads welfare-relevant, and an explicit objective of spread stabilization thus brings welfare benefits. The relative weight on inflation versus output gap stabilization differs notably between the mandates: in the extended mandate, the optimal λ is more than halved, from λ = 1.005 to λ = 0.475.27 Intuitively, from a welfare viewpoint the need to lower output gap volatility under an extended mandate is smaller than under the dual mandate, because at least one of the welfare-relevant variables that is stabilized via the output gap is now stabilized directly. An immediate practical benefit of the extended mandate is therefore that it down-weights the importance of the output gap, a variable that can be challenging to measure in real time.28

Table 4.

Performance of optimal dual and extended mandates

Note: Shown are the coefficients of the optimal (welfare-maximising) dual mandate MIT=(λ,0) and the optimal extended mandate MFF*=(λ*,ω*), along with their respective welfare losses relative to the Ramsey policy, expressed in CEV units (%). For reference, the relative loss in welfare under the estimated Taylor rule (without monetary policy shocks) is -0.179% CEV.

The result in Table 4 poses something of a challenge to standard inflation targeting practice. It says that an interest rate spread is monetary policy relevant not simply for the conventional reason that it helps to predict fluctuations in inflation or output, but as a welfare-relevant stabilization goal in itself. How robust is that finding? Fig. 4 displays the levels of CEV that result from combinations of λ and ω. CEV outcomes that do not differ by more than 0.05% have like shading, and lighter colors indicate smaller welfare losses.29 The optimal extended mandate MFF* = (λ*, ω*) is given by the black circle in the left-centre of the figure. There are three main findings: (a) for a wide range of values of λ, welfare outcomes can be improved by moving from a dual mandate (ω = 0) to an extended mandate (ω > 0);30 (b) the finding that extreme hawkishness, and to a lesser extent extreme dovishness, lead to large welfare losses relative to Ramsey carries over from a dual to an extended mandate; and (c) welfare outcomes superior to those obtained under the optimal dual mandate can be achieved for a large set of extended mandates.

Figure 4.

Welfare losses under extended mandates


Note: The figure shows the welfare losses of a mandate of the form \hat{\pi}_t^2 + \lambda \hat{x}_t^2 + \omega \hat{f}_t^2 relative to Ramsey, expressed in CEV units (%), as a function of λ (x-axis) and ω (y-axis). Combinations of λ and ω that provide welfare outcomes in CEV units that do not differ by more than 0.05% are depicted in the same color.

The message contained in Fig. 4 for the design of monetary policy is very clear: In the presence of financial frictions, the dual mandate can be improved upon by extending the central bank’s mandate to include a financial stabilization objective. Remarkably, once such an extended mandate is in place, policymakers’ preferences over output gap stabilization and financial stabilization are, within a broad set of parameter values, more-or-less irrelevant for welfare outcomes. It can be seen that for values of ω in excess of 0.2 or so, welfare outcomes are not significantly affected by further increasing ω. At the same time, similar welfare outcomes can be achieved for values of λ roughly between 0.2 and 1.
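The experiment behind Fig. 4 amounts to a two-dimensional grid search: for each candidate pair (λ, ω), solve for the M-optimal policy and evaluate the resulting social loss. The sketch below uses a hypothetical stand-in for that loss surface (the real exercise requires solving the LQ problem for every mandate), chosen only to mimic the qualitative shape described in the text.

```python
import numpy as np

def social_loss(lam, omega):
    # Hypothetical placeholder, NOT the model-implied loss: it penalizes
    # extreme hawkishness (lam far from an interior optimum) and flattens
    # out as omega grows, mimicking the qualitative shape in Fig. 4.
    return (lam - 0.5) ** 2 + 0.1 / (1.0 + 5.0 * omega)

# Grid over the same ranges used in the figures: lambda in [0, 2.4], omega in [0, 3.5].
lams = np.linspace(0.0, 2.4, 25)
omegas = np.linspace(0.0, 3.5, 36)
grid = np.array([[social_loss(l, w) for w in omegas] for l in lams])
i, j = np.unravel_index(np.argmin(grid), grid.shape)
best_lam, best_omega = lams[i], omegas[j]  # coordinates of the loss-minimizing mandate
```

Because the placeholder surface is nearly flat in ω beyond moderate values, a wide band of (λ, ω) pairs produces almost identical losses, which is the flat-region feature the grid search would also reveal in the estimated model.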

Quantifying the macro-financial trade-off

Insights into the mechanism behind our findings can again be gleaned by examining Taylor frontiers, but now focusing on the macro-financial trade-off. We present two sets of frontiers. In the first (Fig. 5), the weight placed on output gap stabilization (λ) is held fixed, while the weight on financial stabilization (ω) varies. In the second (Fig. 6), ω is held fixed while λ varies, just as in Fig. 2. These two alternative views of macro-financial volatility help us to interpret the three-way trade-off that gives rise to the welfare surface in Fig. 4.

Figure 5.

Policy frontiers for the financial stabilization objective


Note: The panels show the best achievable combinations of volatilities in the named variables that can be achieved for alternative weights on financial stabilization (ω) in the extended mandate. The weight on output gap stabilization (λ) is kept fixed at its optimal value under the extended mandate (λ* = 0.475) in the pink line, and at its optimal value under the dual mandate (λ = 1.005) in the blue dotted line. The values of ω range from 0 (dual mandate) to 3.5. The black dot indicates the optimal extended mandate. The black square indicates the Ramsey outcome.
Figure 6.

Policy frontiers for the output gap stabilization objective


Note: The panels show the best achievable combinations of volatilities in the named variables that can be achieved for alternative weights on output gap stabilization (λ). In the blue dashed line, the weight on financial stabilization (ω) is kept fixed at 0. In the red line, the value of ω is kept fixed at its optimal value under the extended mandate (ω* = 1.397). The values of λ range from 0 to 2.4. The black diamond indicates the optimal dual mandate. The black dot indicates the optimal extended mandate. The black square indicates the Ramsey outcome.

Our principal observation is that the macro-financial trade-off is modest: monetary policy can reduce credit spread volatility considerably with a limited impact on macroeconomic volatility. Consider the case of inflation volatility (Fig. 5, panel a). Along the pink line, the value of λ is fixed at its optimal value under the extended mandate (λ* = 0.475). Putting more stabilization weight on spreads (increasing ω) necessarily reduces their volatility, but it also increases inflation volatility. However, whereas spread volatility is reduced from near 2% to 0.5%, the corresponding rise in inflation volatility is barely 0.1%. Fig. 5 (panel b) presents a similar picture for the output gap: when ω is larger than the rather low threshold value of 0.065 (versus ω* = 1.397), output gap volatility rises very little as spread volatility falls. The value of λ is of little consequence; the blue line shows that the macro-financial trade-off (i.e. the slope of the respective policy frontiers) is near-identical when λ is held at its dual mandate level (λ = 1.005).

Another perspective on the same policy trade-off is that the inflation-output frontier shifts out—becomes less favorable—under the optimal extended mandate, but that the shift is modest. Fig. 6 (panel a) compares the dual mandate case (ω = 0) with the optimal ternary mandate (ω = ω*). The frontier for the extended mandate case lies within the set of output-inflation outcomes that were feasible under the dual mandate.

Our second observation is that the structure of the economy places a limit on how far policy can reduce financial volatility. Although even a very small increase above zero in the weight placed on spreads in the central bank mandate results in a material reduction in volatility, for ω > ω* subsequent gains are nearly nil. This observation follows from comparing the optimal mandate (Fig. 5, black dot) with an alternative that has ω = 3.5 (x symbol). It helps explain why similar welfare outcomes are achieved for sufficiently high values of ω (Fig. 4)—they correspond to similar levels of financial volatility.

The welfare gains that are achieved from lowering the value of λ in the optimal extended mandate are driven by having lower inflation volatility, at the cost of higher output gap volatility. Because the value of λ has almost no effect on spread volatility once some stabilization weight is placed on them, the optimal λ depends primarily on the standard monetary tradeoff. This observation suggests that the finding that welfare outcomes under an extended mandate can be roughly invariant for a wide range of λs (as shown in the light region in Fig. 4) is driven by a range of combinations of inflation and output gap volatility leading to similar welfare losses.

Finally, financial volatility under the optimal extended mandate is close to that under the Ramsey policy. This can be seen by comparing the black dot and the black square in Fig. 6 (panels b and c). Spreads are also less volatile than under the dual mandate (black diamond). However, this ranking does not by itself imply that it is desirable from a welfare point of view for monetary policy to stabilize credit spreads. The central bank's mandate contains only a subset of the variables that are welfare relevant, and if the macro-financial trade-off were large, reducing credit spread volatility to its Ramsey level might lead to more inflation and output gap volatility than is optimal from the point of view of a central banker who cares only about these three variables. As it happens, in our case the trade-off is small, and it is optimal to reduce spread volatility.

Summary of findings on the extended mandate

Mandating monetary policymakers to stabilize a financial variable, alongside their traditional objectives of inflation and output gap stabilization, is desirable from a welfare standpoint.

Even a small weight on a financial objective results in an improved welfare outcome, relative to the dual mandate. Because the structure of the economy generates only a modest macro-financial trade-off for policymakers, the gains from reducing financial volatility under such an 'extended' mandate more than offset the losses suffered as a result of increased macroeconomic volatility. Under the extended mandate, policymakers optimally down-weight output gap fluctuations, as they are an imperfect proxy for welfare-relevant fluctuations in financial frictions.

C. Discussion of findings

1. The role of measures in the mandate

So far the focus has been on a mandate in which the output gap is included as the measure of resource utilization and the (ex-ante) credit spread as the financial variable. In this section, we consider the implications of including alternative measures that have also been discussed in the context of ad hoc loss functions. In particular, we analyze a mandate that (i) includes output growth instead of the output gap as a measure of resource utilization, and/or (ii) includes leverage instead of the (ex-ante) loan-deposit spread as a financial variable.31 Results are reported in Table 5.

Table 5.

Performance of optimal dual and extended mandates for alternative measures in the mandate

Note: This table shows the optimal (welfare-maximising) dual mandate and the optimal extended mandate along with their respective welfare losses relative to the Ramsey policy, expressed in CEV units (%). Two alternative measures of resource utilization are considered: the output gap and (annualized) output growth. Two alternative financial variables are considered: the (ex-ante) loan-deposit spread and leverage, defined as Q_t S_t / N_t.

An optimal dual mandate with output growth instead of the output gap as a measure of resource utilization still outperforms strict inflation targeting in terms of welfare, as can be seen from Table 5. Also, as shown in the top left panel of Fig. 7, the finding that extreme hawkishness, and to a lesser extent extreme dovishness, are suboptimal is robust to the measure of resource utilization considered.

Figure 7.

Dual mandate with alternative resource utilization measures


Note: Each panel in this figure plots the value of a variable of interest (y-axis) that can be achieved for alternative values of λ in the monetary authority’s dual mandate (x-axis). The variable of interest on the y-axis is CEV(%) in the top left panel, the st. dev. of (annualized) price inflation in the top right panel, the st. dev. of the output gap in the bottom left panel, and the st. dev. of the (ex-ante) credit spreads in the bottom right panel. The blue solid line shows the values that can be achieved under a mandate that includes the output gap as a measure of resource utilization, and the solid black diamond shows the results under such an optimal dual mandate. The dashed green line shows the values that can be achieved under a mandate that includes (annualized) output growth as a measure of resource utilization, and the unfilled black diamond shows the results under such an optimal dual mandate.

The welfare performance of the optimal dual mandate does, however, depend on the measure of resource utilization included in the mandate. In particular, we find that the optimal dual mandate that includes output growth performs better, even though output gap volatility is higher than when the output gap itself is included in the mandate, as can be seen from the bottom left panel of Fig. 7. This finding, which contrasts with the finding of DKLN (2018) for the Smets and Wouters model, can be explained as follows. As discussed in Section V.A, the welfare relevance of volatility in resource utilization also comes from it being a proxy for other welfare-relevant variables. The bottom right panel of Fig. 7 shows that output growth volatility is a better proxy for credit spread volatility than output gap volatility: a substantial reduction in credit spread volatility can be achieved for low values of λ.

A lower value of λ, in turn, reduces inflation volatility.32 As can be seen from the top right panel, under the optimal λ, inflation volatility is lower than when the output gap is included in the mandate. The lower inflation and credit spread volatility in turn contribute to a smaller welfare loss relative to Ramsey.

The finding that the welfare loss relative to Ramsey is smaller under an optimal extended mandate than under an optimal dual mandate holds for the different measures of volatility in resource utilization and financial volatility considered, as can be seen from Table 5.33 The difference in CEV between an optimal dual and an optimal extended mandate is, however, only larger than the 0.05% threshold when output gap volatility is the measure of resource utilization. This finding reflects that output growth volatility is a better proxy for the welfare-relevant measures of financial volatility than output gap volatility, as discussed in the previous paragraph. As a result, including the stabilization of such a financial variable as an objective in and of itself does little to further reduce welfare losses.

In summary, in the presence of financial frictions the welfare performance of the dual mandate also depends on the extent to which the chosen measure of resource utilization is a good proxy for the welfare-relevant measures of financial volatility. So do the welfare gains that can be achieved from moving from an optimal dual to an optimal extended mandate. Conditional on a measure of resource utilization, similar welfare outcomes are achieved for the two financial measures considered.

2. The role of shocks

In this section, we focus on the role of shocks, aiming to provide further insight into why an optimal extended mandate outperforms a dual mandate. Table 6 reports the welfare losses relative to Ramsey under both the optimal dual and extended mandate for alternative assumptions about the type of shocks that are affecting the economy.

Table 6.

The role of shocks

Note: This table shows the optimal dual and optimal extended mandate for different assumptions about the shocks, along with their respective welfare losses relative to the Ramsey policy, expressed in CEV units (%). In the “All shocks”-case the variance of each shock is set to its estimated value; in the “No financial shocks”-case, the variance of the financial shock is set to zero while the variances of the remaining shocks are kept at their estimated value; in the “No inv. specific tech. shocks”-case, the variance of the investment specific technology shock is set to zero while the variances of the remaining shocks are kept at their estimated value; and in the “No markup shock”- case, the variances of both the price and wage markup shock are set to zero, while the variances of the remaining shocks are kept at their estimated value. The measure of resource utilization in the mandate is the output gap, and the financial variable in the mandate is spreads.

First of all, the results in Table 6 show that the presence of financial shocks is not the reason why the optimal extended mandate leads to significantly better welfare outcomes than the optimal dual mandate. The difference in welfare outcomes between the optimal dual and extended mandate remains roughly unchanged in their absence. What matters for the superiority of the extended mandate is thus the endogenous propagation of the standard shocks, rather than financial shocks being an important driving force. Also in the absence of investment specific shocks, which could be interpreted as financial shocks (Justiniano, Primiceri, and Tambalotti, 2011), welfare outcomes under an extended mandate are notably better than those under a dual mandate. In sum, the superiority of the extended mandate appears insensitive to shocks resembling financial disturbances, although the particular financial disturbance in our model is too small to be very informative on this point.

Next, we turn to the role of inefficient mark-up shocks. Welfare gains of an extended mandate compared to a dual mandate are particularly pronounced in the presence of inefficient mark-up shocks. In the absence of such shocks, welfare outcomes under the optimal extended mandate are still better than under the optimal dual mandate, but the differences are small given the notably improved welfare performance of the dual mandate. The latter can be explained as follows. As discussed in Section V.B, in the case of a dual mandate the output gap serves as a proxy for the stabilization of other welfare-relevant variables, making it welfare optimal to increase the weight on that variable in the mandate (λ) even if it leads to increased inflation volatility. Importantly, in the absence of mark-up shocks there is a substantial reduction in the trade-off between inflation and output gap stabilization, as shown in the top left panel of Fig. 8. As a result, an increase in λ leads to a considerably smaller increase in inflation volatility than in the presence of mark-up shocks. At the same time, the relation between output gap and credit spread volatility is also altered in the absence of mark-up shocks. As can be seen from the top right panel of Fig. 8, the combination of credit spread and output gap volatility that can be achieved in the absence of mark-up shocks is not achievable in their presence. Finally, the bottom left panel of Fig. 8 shows that the inflation-output gap trade-off is still sufficiently pronounced in the absence of mark-up shocks for extreme hawkishness to lead to significantly larger welfare losses than those obtained for higher values of λ.34

Figure 8.

The role of mark-up shocks


Note: The top left panel shows the best achievable combinations of the standard deviation of the output gap (x-axis) and the standard deviation of inflation (y-axis) for alternative weights on output gap stabilization (λ). The top right panel shows the best achievable combinations of volatilities in the credit spread (y-axis) and the output gap (x-axis) that can be achieved for alternative weights on output gap stabilization (λ). The bottom left panel shows the welfare losses of a dual mandate relative to Ramsey, expressed in CEV units (%), as a function of policymakers' preferences over output gap stabilization (λ). The values of λ range from 0 (strict inflation targeting) to 2.4 (extreme dovishness). In all panels, the dashed blue line represents the case in which all shocks are present, while the yellow line represents the case in which all shocks are present except for the mark-up shocks. The black diamond shows the result under the optimal dual mandate in the presence of all shocks. The unfilled black diamond shows the result under the optimal dual mandate when all shocks are present except for the mark-up shocks.

D. Interest rate smoothing

In practice, central banks have proposed financial stability concerns as a motivation for interest rate smoothing.35 From a theoretical point of view, Teranishi (2013) shows that in a model with staggered loan interest rate contracts under monopolistic competition, central banks have an incentive to smooth the policy rate. De Fiore and Tristani (2012) also find a role for interest rate smoothing, though they report it to be small.

We now consider a loss function of the form:

$$\mathcal{M} = \hat{\pi}_t^2 + \lambda\,\hat{x}_t^2 + \rho\,(\Delta mro_t)^2 \qquad (5)$$

where mro is the ECB’s short-term policy interest rate. Table 7 reports the results. We find that interest rate smoothing is optimal and produces similar welfare outcomes to an optimal extended mandate with the loan-to-deposit spread. At the same time, welfare outcomes are notably better than under an optimal dual mandate, despite restricting the policy instrument. As can be seen from Fig. 9, inflation and output gap volatility are, as expected, lower under the optimal dual mandate (black diamond) than under the optimal extended mandate with interest rate smoothing (empty black dot). However, spread volatility is considerably lower when interest rates are smoothed optimally. Comparing spread volatility under such a mandate with that under the extended mandate with the loan-to-deposit spread (black dot) shows that smoothing the policy instrument is a reasonable proxy for reducing spread volatility in and of itself. Moreover, both inflation and output gap volatility are comparable under the optimal extended mandates considered, explaining why similar welfare outcomes can be achieved.
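The trade-off discussed above can be made concrete with a small numerical sketch of the loss function (5). The volatilities and weights below are illustrative placeholders, not the paper's estimates.

```python
# Evaluate the simple-mandate loss (5): M = var(pi) + lambda*var(x) + rho*var(d_mro).
# Inputs are standard deviations; all numbers here are illustrative only.
def mandate_loss(sd_pi, sd_x, sd_dmro, lam, rho):
    return sd_pi**2 + lam * sd_x**2 + rho * sd_dmro**2

# A dual mandate (rho = 0) ignores policy-rate volatility entirely...
loss_dual = mandate_loss(sd_pi=0.40, sd_x=1.10, sd_dmro=0.50, lam=0.6, rho=0.0)
# ...while a smoothing mandate accepts slightly higher inflation and output-gap
# volatility in exchange for a much less volatile policy rate.
loss_smooth = mandate_loss(sd_pi=0.45, sd_x=1.15, sd_dmro=0.20, lam=0.6, rho=0.3)
```

Which mandate delivers the lower loss depends, of course, on the volatilities that each mandate can actually attain in equilibrium, which is what Fig. 9 traces out.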

Table 7.

The role of interest rate smoothing

article image
Note: This table shows the optimal (welfare-maximising) dual mandate and the optimal extended mandate along with their respective welfare losses relative to the Ramsey policy, expressed in CEV units (%). Two ternary stabilization objectives are considered: the (ex-ante) loan-deposit spread and the change in the (quarterly) policy rate (∆mrot). The volatility of the (quarterly) policy rate under each respective mandate is also reported.
Figure 9.

Volatility as a function of the weight on interest rate smoothing


Note: The green line shows the volatility of the respective variable under the extended mandate with interest rate smoothing as a function of the weight placed on interest rate smoothing (ρ), while keeping λ at its optimal value under that mandate. The solid black diamond is the optimal dual mandate without interest rate smoothing; the empty black dot is the optimal dual mandate with interest rate smoothing; the solid black dot is the optimal extended mandate with the financial variable being the loan-to-deposit spread. In all cases considered, the measure of resource utilization is the output gap.

VI. Conclusion

We set out to assess the design of a monetary policy mandate in an economy beset by financial frictions, multiple real and nominal distortions, multiple shocks, and an inefficient steady state—precisely those conditions encountered by real-world policymakers. We further limited our attention to simple mandates, of the kind that can be operationalized and effectively communicated in practice. Our study is therefore in the spirit of Debortoli and others (2018), but deals with the additional complications caused by the role that banks play in funding firms.

Our main result is that a simple mandate that includes a stabilization objective for spreads— an observable proxy for the intensity of the underlying financial friction—produces outcomes that are comparable to the Ramsey policy, which produces the best achievable outcomes for social welfare. This finding is robust, and so provides a rationale for those central banks that have identified trade-offs between macroeconomic and financial variables as being relevant for setting monetary policy. This observation is true despite the fact that, in the model we adopt, there are no discrete financial ‘crisis’ events, in which financial frictions produce occasional large-scale economic collapse.

Compared to the existing literature on optimal monetary policy under financial frictions, the role we identify for financial stabilization is quantitatively much more important. In the absence of a ternary financial stabilization objective, we report that when financial frictions are present a dual mandate ‘makes sense’, as in DKLN (2018), because a high weight on stabilizing the output gap implies greater stability in spreads. We further show that smoothing policy interest rates, which double as bank funding costs, serves to raise welfare when financial frictions are present.

Our analysis inevitably omits consideration of other, equally important, concerns that central banks may have about pursuing a financial stabilization objective. We discuss some of these in the Introduction. But of perhaps greatest relevance for our exercise is that in practice, it is not straightforward to capture the many sources of frictions that are present in complex modern financial systems. Even if attention is restricted to banks, the precise form of the frictions—and the policy prescriptions that follow—could differ from those we assume. We see the need for continued research that can establish the robustness of our results to reasonable alternative characterizations of financial frictions, and to realistic treatments of macro-prudential measures used in concert with monetary and fiscal policies to improve macroeconomic outcomes.

References

  • Adrian, Tobias, and Hyun Song Shin, 2011, “Financial intermediaries and monetary economics,” in Handbook of Monetary Economics, Vol. 3A, chap. 12, pp. 601–650 (Elsevier).

  • Andrés, Javier, Óscar Arce, and Carlos Thomas, 2013, “Banking Competition, Collateral Constraints, and Optimal Monetary Policy,” Journal of Money, Credit and Banking, Vol. 45, No. s2, pp. 87–125.

  • Angelini, Paolo, Stefano Neri, and Fabio Panetta, 2014, “The interaction between capital requirements and monetary policy,” Journal of Money, Credit and Banking, Vol. 46, No. 6, pp. 1073–1112.

  • Bean, Charles, 1998, “The new UK monetary arrangements: A view from the literature,” The Economic Journal, Vol. 108, No. 451, pp. 1795–1809.

  • Benigno, Pierpaolo, and Michael Woodford, 2012, “Linear-quadratic approximation of optimal policy problems,” Journal of Economic Theory, Vol. 147, No. 1, pp. 1–42.

  • Bernanke, Ben S., 2004, “Gradualism,” Remarks at an economics luncheon co-sponsored by the Federal Reserve Bank of San Francisco (Seattle Branch) and the University of Washington, Seattle, Washington.

  • Boissay, Frédéric, Fabrice Collard, and Frank Smets, 2016, “Booms and banking crises,” Journal of Political Economy, Vol. 124, No. 2, pp. 489–538.

  • Boivin, Jean, and Marc Giannoni, 2006, “DSGE models in a data-rich environment,” NBER Working Paper No. 12772.

  • Carlstrom, Charles T., and Timothy S. Fuerst, 1997, “Agency costs, net worth, and business fluctuations: A computable general equilibrium analysis,” American Economic Review, Vol. 87, No. 5, pp. 893–910.

  • Carlstrom, Charles T., Timothy S. Fuerst, and Matthias Paustian, 2010, “Optimal Monetary Policy in a Model with Agency Costs,” Journal of Money, Credit and Banking, Vol. 42, pp. 37–70.

  • Chetty, Raj, Adam Guren, Day Manoli, and Andrea Weber, 2011, “Are Micro and Macro Labor Supply Elasticities Consistent? A Review of Evidence on the Intensive and Extensive Margins,” American Economic Review: Papers & Proceedings, Vol. 101, No. 3, pp. 471–475.

  • Clarida, Richard, Jordi Galí, and Mark Gertler, 1999, “The science of monetary policy: A New Keynesian perspective,” Journal of Economic Literature, Vol. 37, pp. 1661–1707.

  • Collard, Fabrice, Harris Dellas, Behzad Diba, and Olivier Loisel, 2017, “Optimal monetary and prudential policies,” American Economic Journal: Macroeconomics, Vol. 9, No. 1, pp. 40–87.

  • Cúrdia, Vasco, and Michael Woodford, 2010, “Credit Spreads and Monetary Policy,” Journal of Money, Credit and Banking, Vol. 42, pp. 3–35.

  • Cúrdia, Vasco, and Michael Woodford, 2016, “Credit frictions and optimal monetary policy,” Journal of Monetary Economics, Vol. 84, pp. 30–65.

  • De Fiore, Fiorella, Pedro Teles, and Oreste Tristani, 2011, “Monetary Policy and the Financing of Firms,” American Economic Journal: Macroeconomics, Vol. 3, No. 4, pp. 112–142.

  • De Fiore, Fiorella, and Oreste Tristani, 2013, “Optimal monetary policy in a model of the credit channel,” The Economic Journal, Vol. 123, No. 571, pp. 906–931.

  • De Paoli, Bianca, and Matthias Paustian, 2017, “Coordinating monetary and macroprudential policies,” Journal of Money, Credit and Banking, Vol. 39, No. 2-3, pp. 319–349.

  • Debortoli, Davide, Jinill Kim, Jesper Lindé, and Ricardo Nunes, 2018, “Designing a Simple Loss Function for Central Banks: Does a Dual Mandate Make Sense?” The Economic Journal.

  • Draghi, Mario, 2016, “Delivering a symmetric mandate with asymmetric tools: monetary policy in a context of low interest rates,” Speech, Oesterreichische Nationalbank, Vienna.

  • Erceg, Christopher J., Dale W. Henderson, and Andrew T. Levin, 2000, “Optimal monetary policy with staggered wage and price contracts,” Journal of Monetary Economics, Vol. 46, No. 2, pp. 281–313.

  • Fagan, Gabriel, Jérôme Henry, and Ricardo Mestre, 2005, “An area-wide model for the euro area,” Economic Modelling, Vol. 22, No. 1, pp. 39–59.

  • Filardo, Andrew, and Phurichai Rungcharoenkitkul, 2016, “A quantitative case for leaning against the wind,” BIS Working Paper 594, Bank for International Settlements.

  • Fuhrer, Jeffrey C., 2006, “Intrinsic and Inherited Inflation Persistence,” International Journal of Central Banking, Vol. 2, No. 3.

  • Gambacorta, Leonardo, and Federico M. Signoretti, 2014, “Should monetary policy lean against the wind? An analysis based on a DSGE model with banking,” Journal of Economic Dynamics and Control, Vol. 43, pp. 146–174.

  • Gelain, Paolo, and Pelin Ilbas, 2017, “Monetary and macroprudential policies in an estimated model with financial intermediation,” Journal of Economic Dynamics and Control, Vol. 78, pp. 164–189.

  • Gelfer, Sacha, 2019, “Data-rich DSGE model forecasts of the great recession and its recovery,” Review of Economic Dynamics, Vol. 32, pp. 18–41.

  • Gertler, Mark, and Peter Karadi, 2011, “A model of unconventional monetary policy,” Journal of Monetary Economics, Vol. 58, No. 1, pp. 17–34.

  • Gertler, Mark, and Nobuhiro Kiyotaki, 2011, “Financial intermediation and credit policy in business cycle analysis,” in Handbook of Monetary Economics, Vol. 3A, chap. 11, pp. 547–599 (Elsevier).

  • Gourio, François, Anil K. Kashyap, and Jae W. Sim, 2018, “The trade offs in leaning against the wind,” IMF Economic Review, Vol. 66, No. 1, pp. 70–115.

  • Hansen, James, 2018, “Optimal monetary policy with capital and a financial accelerator,” Journal of Economic Dynamics and Control, Vol. 92, pp. 84–102.

  • Herbst, Edward P., and Frank Schorfheide, 2016, Bayesian Estimation of DSGE Models (Princeton University Press).

  • Justiniano, Alejandro, Giorgio E. Primiceri, and Andrea Tambalotti, 2011, “Investment shocks and the relative price of investment,” Review of Economic Dynamics, Vol. 14, No. 1, pp. 102–121, special issue: Sources of Business Cycles.

  • Laureys, Lien, and Roland Meeks, 2018, “Monetary and macroprudential policies under rules and discretion,” Economics Letters, Vol. 170, pp. 104–108.

  • Meeks, Roland, Benjamin D. Nelson, and Piergiorgio Alessandri, 2017, “Shadow banks and macroeconomic instability,” Journal of Money, Credit and Banking, Vol. 49, No. 7, pp. 1483–1516.

  • Monacelli, Tommaso, 2008, “Optimal monetary policy with collateralized household debt and borrowing constraints,” in John Y. Campbell (ed.), Asset Prices and Monetary Policy, pp. 103–146 (NBER).

  • Schmitt-Grohé, Stephanie, and Martín Uribe, 2007, “Optimal simple and implementable monetary and fiscal rules,” Journal of Monetary Economics, Vol. 54, No. 6, pp. 1702–1725.

  • Smets, Frank, 2014, “Financial stability and monetary policy: How closely interlinked?” International Journal of Central Banking, Vol. 10, No. 2, pp. 263–300.

  • Smets, Frank, and Rafael Wouters, 2003, “An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area,” Journal of the European Economic Association, Vol. 1, No. 5, pp. 1123–1175.

  • Smets, Frank, and Rafael Wouters, 2007, “Shocks and frictions in US business cycles: A Bayesian DSGE approach,” American Economic Review, Vol. 97, No. 3, pp. 586–606.

  • Svensson, Lars E. O., 2003, “What is wrong with Taylor rules? Using judgement in monetary policy through targeting rules,” Journal of Economic Literature, Vol. 41, No. 2, pp. 426–477.

  • Svensson, Lars E. O., 2017, “Cost-benefit analysis of leaning against the wind,” Journal of Monetary Economics, Vol. 90, pp. 193–213.

  • Taylor, John B., 1979, “Estimation and control of a macroeconomic model with rational expectations,” Econometrica, Vol. 47, No. 5, pp. 1267–1286.

  • Villa, Stefania, 2016, “Financial Frictions in the Euro Area and the United States: A Bayesian Assessment,” Macroeconomic Dynamics, Vol. 20, No. 5, pp. 1313–1340.

  • Woodford, Michael, 2003, Interest and Prices: Foundations of a Theory of Monetary Policy (Princeton University Press).

Supplementary material for Laureys, Meeks, and Wanengkirtyo: “Optimal simple objectives for monetary policy when banks matter”

Appendix A. Model

This section contains a full description of the model and derivations of its equilibrium conditions.

A.1. Households

There is a continuum of households indexed by j ∈ [0,1]. Each household j chooses consumption Ct(j) and deposits Dt(j), so as to maximize a standard utility function U separable in consumption and hours worked Lt(j):

$$\mathbb{E}_t \sum_{t=0}^{\infty} \beta^t \psi_t^b \left\{ \ln\left[C_t(j) - hC_{t-1}(j)\right] - \chi\,\frac{L_t(j)^{1+\varphi}}{1+\varphi} \right\}$$

subject to the budget constraint:

$$C_t(j) + D_t(j) = W_t(j)L_t(j) + R_{t-1}D_{t-1}(j) + T_t(j) + \Pi_t(j)$$

where $R_{t-1}$ is the gross real return from period $t-1$ to $t$, $W_t$ is the real wage, $T_t$ denotes lump-sum taxes, $\Pi_t$ is the net profit from ownership of both firms and banks, and $\psi_t^b$ is an inter-temporal preference shock that follows an AR(1) process: $\log(\psi_t^b) = \rho_b \log(\psi_{t-1}^b) + \varepsilon_{b,t}$.
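For intuition, shock processes of this kind are easy to simulate directly. The persistence and innovation volatility below are placeholders, not the estimated values.

```python
import numpy as np

# Simulate log(psi_b) as an AR(1): log(psi_b_t) = rho_b*log(psi_b_{t-1}) + eps_t.
def simulate_ar1(rho, sigma_eps, T, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + sigma_eps * rng.normal()
    return x

log_psi_b = simulate_ar1(rho=0.9, sigma_eps=0.01, T=200)
```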

The first-order conditions result in the standard Euler equation for consumption:

$$1 = \mathbb{E}_t\, R_t\, \Lambda_{t,t+1}$$

where

$$U_{c,t} \equiv \psi_t^b \left(C_t - hC_{t-1}\right)^{-1} - \beta h\, \mathbb{E}_t\, \psi_{t+1}^b \left(C_{t+1} - hC_t\right)^{-1}, \qquad \Lambda_{t,t+1} \equiv \beta\,\frac{U_{c,t+1}}{U_{c,t}}$$

and where the index j is dropped because in equilibrium the households make the same consumption/savings decision.

A.2. Production

There are three types of firms in the economy: intermediate good producers, capital producers, and retailers. Intermediate good producers use capital and labor as input to produce goods that are used as input by retailers. Those retailers in turn produce the final goods. Capital producers use final goods to produce capital. Because of investment adjustment costs, the market price of capital will be endogenously determined.

Intermediate good producers

They operate in a perfectly competitive market and produce goods, which are being sold to retailers, according to the production function:

$$Y_t = A_t \left(U_t K_{t-1}\right)^{\alpha} \left(L_t^d\right)^{1-\alpha}$$

where $U_t$ is the utilization rate, $K_{t-1}$ is the amount of capital used in production at time $t$, $L_t^d$ is labor input, and $A_t$ is an aggregate technology shock that follows an AR(1) process: $\log(A_t) = \rho_a \log(A_{t-1}) + \varepsilon_{a,t}$.

Using an end-of-period stock convention, the timing of events runs as follows: at time t – 1, firms acquire capital Kt-1 for use in production the following period. In order to finance the capital purchases each period, firms obtain funds from banks against perfectly state-contingent securities St-1, at a price of Qt-1 per unit. They face no frictions in obtaining these funds. At the start of time t, shocks are realized. Firms choose the amount of labor input Ltd, and how hard to work their machines (i.e. the utilization rate Ut). After production in period t, they sell back the capital they have used to capital goods firms: undepreciated capital is sold back at the price Qt; depreciated capital is gone.

Conditional on their choice of capital, the firm’s problem at time t is:36

$$\max_{L_t^d,\,U_t}\; \frac{P_{m,t}}{P_t}\, A_t \left(U_t K_{t-1}\right)^{\alpha} \left(L_t^d\right)^{1-\alpha} - a(U_t)K_{t-1} - W_t L_t^d$$

where $P_{m,t}/P_t$ is the price of the intermediate goods expressed in real terms and $a(U_t)$ are the utilization costs of capital expressed in terms of final goods.

The first order conditions are:

$$\frac{P_{m,t}}{P_t}\,\alpha\,\frac{Y_t}{U_t} = a'(U_t)\,K_{t-1} \qquad (6)$$
$$\frac{P_{m,t}}{P_t}\,(1-\alpha)\,\frac{Y_t}{L_t^d} = W_t \qquad (7)$$

We adopt the following functional form for the utilization cost of capital:

$$a(U_t) = \frac{\alpha_1}{\alpha_2}\left[\exp\big(\alpha_2(U_t - 1)\big) - 1\right]$$

The adjustment cost function satisfies $a(1) = 0$, $a'(1) = \alpha_1$, and $a''(1) = \alpha_1\alpha_2$, so that $a''(1)/a'(1) = \alpha_2$; moreover, $\alpha_1 \equiv R_s^{ss} - (1-\delta)$.
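These properties are easy to verify numerically by finite differences; the parameter values below are placeholders chosen for illustration.

```python
import numpy as np

# Utilization cost a(U) = (alpha1/alpha2)*(exp(alpha2*(U-1)) - 1); check that
# a(1) = 0, a'(1) = alpha1, and a''(1)/a'(1) = alpha2 by central differences.
alpha1, alpha2 = 0.04, 1.5

def a(U):
    return (alpha1 / alpha2) * (np.exp(alpha2 * (U - 1.0)) - 1.0)

h = 1e-5
a_prime  = (a(1 + h) - a(1 - h)) / (2 * h)          # approximates a'(1)
a_second = (a(1 + h) - 2 * a(1) + a(1 - h)) / h**2  # approximates a''(1)
```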

Capital producers

Capital producers are owned by households and operate in a perfectly competitive market. They take It units of final goods and transform them into new capital goods according to the technology:

$$K_t = (1-\delta)K_{t-1} + \psi_t^x\, F(I_t, I_{t-1})\, I_t \qquad (8)$$

where $\psi_t^x$ is an investment-specific technology shock that follows an AR(1) process: $\log(\psi_t^x) = \rho_x \log(\psi_{t-1}^x) + \varepsilon_{x,t}$.

Following CEE (2005) and others, we take the installation function F that maps final goods into capital K to be:

$$F(I_t, I_{t-1}) \equiv 1 - S\!\left(\frac{I_t}{I_{t-1}}\right)$$

where S is an investment adjustment cost function, which takes the form:

$$S\!\left(\frac{I_t}{I_{t-1}}\right) = \frac{\kappa}{2}\left(\frac{I_t}{I_{t-1}} - 1\right)^2$$

The capital producers sell the newly built capital to the intermediate good producers at price $Q_t$, which is determined endogenously because of investment adjustment costs. The objective of a capital producer is to choose $I_t$ so as to maximize the present value of expected profits:

$$\max_{\{I_{t+k}\}}\; \mathbb{E}_t \sum_{k=0}^{\infty} \Lambda_{t,t+k} \left\{ Q_{t+k}\,\psi_{t+k}^x \left[1 - \frac{\kappa}{2}\left(\frac{I_{t+k}}{I_{t+k-1}} - 1\right)^2\right] I_{t+k} - I_{t+k} \right\}$$

where $\Lambda_{t,t+k} \equiv \beta^k\, U_{c,t+k}/U_{c,t}$ is the stochastic discount factor.

The first-order condition is:

$$Q_t \psi_t^x \left\{1 - S\!\left(\frac{I_t}{I_{t-1}}\right) - S'\!\left(\frac{I_t}{I_{t-1}}\right)\frac{I_t}{I_{t-1}}\right\} + \mathbb{E}_t\, \Lambda_{t,t+1}\, Q_{t+1}\psi_{t+1}^x\, S'\!\left(\frac{I_{t+1}}{I_t}\right)\left(\frac{I_{t+1}}{I_t}\right)^2 = 1$$

Taking into account that $S'\!\left(\frac{I_t}{I_{t-1}}\right) = \kappa\left(\frac{I_t}{I_{t-1}} - 1\right)$ gives the following expression:

$$Q_t\psi_t^x\left\{1 - \frac{\kappa}{2}\left(\frac{I_t}{I_{t-1}}-1\right)^2 + \kappa\left[1 - \frac{I_t}{I_{t-1}}\right]\frac{I_t}{I_{t-1}}\right\} - \mathbb{E}_t\,\Lambda_{t,t+1}\,Q_{t+1}\psi_{t+1}^x\,\kappa\left[1 - \frac{I_{t+1}}{I_t}\right]\left(\frac{I_{t+1}}{I_t}\right)^2 = 1 \qquad (9)$$
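A quick way to see that $Q = 1$ in a steady state with no investment growth is to check the residual of (9) directly: with $I_t = I_{t-1}$, $\psi^x = 1$, and $\Lambda = \beta$, all adjustment-cost terms vanish. The values of $\kappa$ and $\beta$ below are placeholders.

```python
# Residual of the capital producers' FOC (9); zero residual means the FOC holds.
def foc_residual(Q, I_ratio, I_ratio_next, kappa=4.0, beta=0.995, psi=1.0, Q_next=None):
    Q_next = Q if Q_next is None else Q_next
    lhs = Q * psi * (1 - kappa / 2 * (I_ratio - 1)**2 + kappa * (1 - I_ratio) * I_ratio)
    lhs -= beta * Q_next * psi * kappa * (1 - I_ratio_next) * I_ratio_next**2
    return lhs - 1.0

# In steady state (I_t/I_{t-1} = 1) the FOC is satisfied exactly at Q = 1;
# away from steady state, Q = 1 no longer clears the condition.
resid_ss = foc_residual(1.0, 1.0, 1.0)
```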

Retail firms

There is a continuum of retailers indexed by r ∈ [0,1] that are owned by households and operate in a monopolistically competitive environment. Each retailer r produces a differentiated good by transforming one unit of intermediate output into one unit of retail output. These differentiated retail goods are packaged by goods aggregators into a composite using the production function:

$$Y_t^d = \left[\int_0^1 Y_t(r)^{\frac{\varepsilon-1}{\varepsilon}}\,dr\right]^{\frac{\varepsilon}{\varepsilon-1}}$$

Profit maximization by the goods aggregators, who operate in a perfectly competitive market, gives rise to the following demand for each type of retail good r:

$$Y_t(r) = \left(\frac{P_t(r)}{P_t}\right)^{-\varepsilon} Y_t^d$$

where $P_t = \left[\int_0^1 P_t(r)^{1-\varepsilon}\,dr\right]^{\frac{1}{1-\varepsilon}}$.

Retailers set their price so as to maximize their profit subject to the demand for their good. Prices can only be reset in each period with probability 1 – γ. When they cannot be reset, they are partially indexed by past inflation. This gives rise to the following problem:

$$\max_{P_t^*(r)}\; \mathbb{E}_t \sum_{k=0}^{\infty} \gamma^k \Lambda_{t,t+k} \left\{\left(\frac{X_{t+k}\, P_t^*(r)}{P_{t+k}} - MC_{t+k}\right) Y_{t+k}(r)\right\}$$

subject to the demand for their good:

$$Y_{t+k}(r) = \left(\frac{\prod_{s=1}^{k} \Pi_{t+s-1}^{\iota}\; P_t^*(r)}{P_{t+k}}\right)^{-\varepsilon} Y_{t+k}^d,$$

where $P_t^*(r)$ is the newly set price in period $t$, and $MC_{t+k} = P_{m,t+k}/P_{t+k}$ is the real marginal cost.

Solving the problem, writing recursively, and introducing a price mark-up shock gives:

$$\Gamma_{1,t} = MC_t\, Y_t\, \Delta_t^p + \gamma\, \mathbb{E}_t\, \Lambda_{t,t+1}\, \Pi_{t+1}^{\varepsilon}\, \Pi_t^{-\iota\varepsilon}\, \Gamma_{1,t+1} \qquad (10)$$
$$\Gamma_{2,t} = Y_t\, \Delta_t^p + \gamma\, \mathbb{E}_t\, \Lambda_{t,t+1}\, \Pi_{t+1}^{\varepsilon-1}\, \Pi_t^{\iota(1-\varepsilon)}\, \Gamma_{2,t+1} \qquad (11)$$
$$\Pi_t^* = \psi_t^M\, \frac{\varepsilon}{\varepsilon-1}\, \frac{\Gamma_{1,t}}{\Gamma_{2,t}} \qquad (12)$$
$$1 = \gamma\, \Pi_t^{\varepsilon-1}\, \Pi_{t-1}^{\iota(1-\varepsilon)} + (1-\gamma)\, \Pi_t^{*\,1-\varepsilon} \qquad (13)$$
$$\Delta_t^p = \gamma \left(\frac{\Pi_{t-1}^{\iota}}{\Pi_t}\right)^{-\varepsilon} \Delta_{t-1}^p + (1-\gamma)\left(\Pi_t^{*}\right)^{-\varepsilon} \qquad (14)$$

The price mark-up shock ψtM follows an ARMA(1,1) process (following Smets and Wouters (2007)) to capture the high-frequency fluctuations in quarterly inflation. Ideally a price markup shock would have been introduced at an earlier stage by making the elasticity of substitution ε stochastic. It is, however, not possible to introduce the shock at that stage and write the model in recursive form unless the shock is iid. Therefore, we follow the standard approach of directly introducing the shock in the first-order condition.
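An ARMA(1,1) process of this kind can be simulated as below; the MA sign convention and all parameter values here are assumptions for illustration, not the estimated process.

```python
import numpy as np

# Simulate log(psi_M) as an ARMA(1,1):
#   x_t = rho*x_{t-1} + eps_t - theta_ma*eps_{t-1}
# The MA term lets the process absorb high-frequency (transitory) movements
# in quarterly inflation without requiring high AR persistence.
def simulate_arma11(rho, theta_ma, sigma_eps, T, seed=1):
    rng = np.random.default_rng(seed)
    eps = sigma_eps * rng.normal(size=T)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + eps[t] - theta_ma * eps[t - 1]
    return x

log_psi_M = simulate_arma11(rho=0.8, theta_ma=0.6, sigma_eps=0.01, T=200)
```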

A.3. Labor market and wage setting

The labor market setup is standard and follows EHL (2000). The labor used by the intermediate goods firms is a composite:

$$L_t^d = \left[\int_0^1 L_t(j)^{\frac{\varepsilon_w-1}{\varepsilon_w}}\,dj\right]^{\frac{\varepsilon_w}{\varepsilon_w-1}} \qquad (15)$$

Labor aggregators, who operate in a perfectly competitive market, hire the labor supplied by each household j, package it, and sell it to the intermediate goods firms. Profit maximization by the labor aggregators gives rise to the following demand for each type of labor j:

$$L_t(j) = \left(\frac{W_t(j)}{W_t}\right)^{-\varepsilon_w} L_t^d$$

and where $W_t = \left[\int_0^1 W_t(j)^{1-\varepsilon_w}\,dj\right]^{\frac{1}{1-\varepsilon_w}}$.

Each household $j$ sets its wage so as to maximize utility subject to the demand for its labor. The nominal wage can only be reset in each period with probability $1-\gamma_w$. When it cannot be reset, it is partially indexed to past inflation, where $\iota_w \in [0,1]$ controls the degree of indexation. This gives rise to the following problem:

$$\max_{W_t^*(j)}\; \mathbb{E}_t \sum_{k=0}^{\infty} (\beta\gamma_w)^k \left\{ U_{C,t+k}(j)\, X_{t+k}\, W_t^*(j)\, L_{t+k}(j) - \psi_{t+k}^b\, \chi\, \frac{L_{t+k}(j)^{1+\varphi}}{1+\varphi} \right\}$$

subject to

$$L_{t+k}(j) = \left(\frac{X_{t+k}\, W_t^*(j)}{W_{t+k}}\right)^{-\varepsilon_w} L_{t+k}^d, \qquad X_{t+k} = \begin{cases} \displaystyle\prod_{s=1}^{k} \frac{\Pi_{t+s-1}^{\iota_w}}{\Pi_{t+s}} & \text{if } k \geq 1 \\ 1 & \text{if } k = 0 \end{cases}$$

All households will set the same wage because of complete markets, and hence j can be dropped. The first-order condition is given by

$$\mathbb{E}_t \sum_{k=0}^{\infty} (\beta\gamma_w)^k \left\{ (1-\varepsilon_w)\, U_{c,t+k}\, X_{t+k} \left(\frac{X_{t+k} W_t^*}{W_{t+k}}\right)^{-\varepsilon_w} L_{t+k}^d + \frac{\varepsilon_w}{W_t^*}\, \psi_{t+k}^b\, \chi \left(\frac{X_{t+k} W_t^*}{W_{t+k}}\right)^{-\varepsilon_w(1+\varphi)} \left(L_{t+k}^d\right)^{1+\varphi} \right\} = 0$$

which can be rewritten as:

$$\Gamma_{1,t}^w \equiv W_t^*\, \mathbb{E}_t \sum_{k=0}^{\infty} (\beta\gamma_w)^k\, U_{c,t+k}\, X_{t+k} \left(\frac{X_{t+k} W_t^*}{W_{t+k}}\right)^{-\varepsilon_w} L_{t+k}^d, \qquad \Gamma_{2,t}^w \equiv \mathbb{E}_t \sum_{k=0}^{\infty} (\beta\gamma_w)^k\, \psi_{t+k}^b\, \chi \left(\frac{X_{t+k} W_t^*}{W_{t+k}}\right)^{-\varepsilon_w(1+\varphi)} \left(L_{t+k}^d\right)^{1+\varphi}$$
$$\Gamma_{1,t}^w = \frac{\varepsilon_w}{\varepsilon_w-1}\, \Gamma_{2,t}^w$$

Writing the above expressions in a recursive form and introducing a shock to the wage markup gives:

$$\Gamma_{1,t}^w = (W_t^*)^{1-\varepsilon_w} W_t^{\varepsilon_w} L_t^d\, U_{c,t} + \beta\gamma_w\, \mathbb{E}_t \left(\frac{\pi_t^{\iota_w}}{\pi_{t+1}}\,\frac{W_t^*}{W_{t+1}^*}\right)^{1-\varepsilon_w} \Gamma_{1,t+1}^w$$
$$\Gamma_{2,t}^w = \chi\,\psi_t^b \left(\frac{W_t^*}{W_t}\right)^{-\varepsilon_w(1+\varphi)} (L_t^d)^{1+\varphi} + \beta\gamma_w\, \mathbb{E}_t \left(\frac{\pi_t^{\iota_w}}{\pi_{t+1}}\,\frac{W_t^*}{W_{t+1}^*}\right)^{-\varepsilon_w(1+\varphi)} \Gamma_{2,t+1}^w$$
$$\Gamma_{1,t}^w = \frac{\varepsilon_w}{\varepsilon_w-1}\, \psi_t^{\mu_w}\, \Gamma_{2,t}^w$$

and where $\psi_t^{\mu_w}$ is a wage mark-up shock that follows an ARMA(1,1) process. As with the price mark-up shock, the wage mark-up shock would ideally have been introduced at an earlier stage by making the elasticity of substitution in equation (15) stochastic. It is, however, not possible to introduce the shock at that stage and write the model in recursive form unless the shock is i.i.d. Therefore, we directly introduce the shock to the wage mark-up in the first-order condition.

A.4. Banking system

The banking sector is modelled following Gertler and Karadi (2011). Banks are special in this economy as bank deposits are the sole vehicle for direct household saving, and bank loans are required by intermediate good firms for the purchase of capital. On the asset side of their balance sheets, each bank $i$ holds state-contingent claims on capital employed by firms (‘primary securities’, denoted by $S_t(i)$) which have mark-to-market value $Q_t$ (also the relative price of capital goods). They fund their assets with deposits obtained from households ($D_t(i)$) and internal equity ($N_t(i)$, net worth). Their balance sheet identity at the end of period $t$ is: $Q_t S_t(i) = D_t(i) + N_t(i)$.

Over time, the bank accumulates net worth from the spread earned between returns on assets and the risk-free interest paid on deposits. So net worth can be expressed as:

$$N_t(i) = \left(R_{s,t} - R_{t-1}\right) Q_{t-1} S_{t-1}(i) + R_{t-1} N_{t-1}(i) \qquad (16)$$

where Rs,t is the gross return on a unit of the bank’s assets from period t – 1 to t, given by the return on capital:

$$R_{s,t} = \frac{\frac{P_{m,t}}{P_t}\,\alpha\,(Y_t/K_{t-1}) + Q_t(1-\delta) - a(U_t)}{Q_{t-1}} \qquad (17)$$

Equation (16) shows that any growth in net worth above the risk-free return depends on the spread the bank earns on its assets, as well as on the total value of those assets.

For the intermediary to be profitable to operate in any period, the following must hold:

$$\mathbb{E}_t\, \Lambda_{t,t+1}\left(R_{s,t+1} - R_t\right) \geq 0 \qquad (18)$$

The bank will not fund assets with a discounted rate of return less than the borrowing cost. Note that without frictions in the banking sector the above expression holds with equality. With frictions, the spreads may be positive because there are limits to the bank’s ability to acquire funds.

Banks are ultimately owned by households and run by household members known as ‘bankers’. When they start a bank, bankers receive a transfer of resources from their ‘home’ household in proportion ξ to existing bank assets, which forms their initial inside equity stake. Bankers are replaced by new management with probability (1-σ) each quarter, which prevents them from accumulating enough net worth over time to fund all investment internally. Upon exiting, bankers transfer their accumulated funds back to the home household. Therefore, the bank’s objective is to choose the structure of its balance sheet so as to maximize the expected present value of future profits if remaining in business:

$$V_t(i) = \max\; \mathbb{E}_t \sum_{k=0}^{\infty} (1-\sigma)\,\sigma^k\, \Lambda_{t,t+1+k}\, N_{t+1+k}(i)$$

But in choosing the structure of its balance sheet, the bank is constrained by the behavior of depositors.37 They place limits on the quantity of deposit funding they are willing to extend because they are aware that bankers can take a hidden action to divert resources for their own benefit, an action which will result in the bank going out of business. The extent of the private benefits bankers can enjoy is proportional to the overall size of their balance sheet. Incentive compatibility on the part of bankers requires that the ‘going concern’ value of the bank (V )—the expected present value of future profits if remaining in business—exceeds the ‘gone concern’ or liquidation value of the bank:

$$V_t(i) \geq \theta_t\, Q_t S_t(i)$$

where θt is the (stochastic) fraction of funds that can be ‘diverted’ by the banker, and follows an AR(1) process: log(θt) = ρθ log(θt−1)+εθ,t.

As shown below, when the incentive constraint is binding, the bank’s assets are constrained by its net worth.

Taking into account that bankers are replaced by new management with probability (1 — σ) each quarter, the ‘going concern’ value of the bank at the end of period t, can be written recursively as follows:

$$V_t(i) = \max\; \mathbb{E}_t\, \Lambda_{t,t+1}\left[(1-\sigma)N_{t+1}(i) + \sigma V_{t+1}(i)\right] \qquad (19)$$

Guess the following linear solution:

$$V_t(i) = \mu_t\, Q_t S_t(i) + \nu_t\, N_t(i), \qquad \mu_t \equiv \nu_{s,t} Q_t - \nu_t \qquad (20)$$

Maximising expression (20) subject to the incentive constraint and defining $\lambda_t$ as the Lagrange multiplier on that constraint gives the following first-order conditions:

$$\lambda_t = \frac{\mu_t}{\theta_t - \mu_t} \qquad (21)$$
$$Q_t S_t(i) = \frac{\nu_t}{\theta_t - \mu_t}\, N_t(i) \qquad (22)$$

Combining expressions (20), (21), and (22) gives $V_t(i) = \nu_t(1+\lambda_t)N_t(i)$. Equation (19) can now be rewritten as:

$$\mu_t\, Q_t S_t(i) + \nu_t\, N_t(i) = \mathbb{E}_t\, \Lambda_{t,t+1}\left[(1-\sigma)N_{t+1}(i) + \sigma\, \nu_{t+1}(1+\lambda_{t+1})N_{t+1}(i)\right] \qquad (23)$$

Defining:

$$\Omega_t \equiv (1-\sigma) + \sigma\, \nu_t (1+\lambda_t) \qquad (24)$$

and where Ωt can be interpreted as the shadow value of a unit of net worth, one can solve for the expressions µt and νt

$$\mu_t = \mathbb{E}_t\, \Lambda_{t,t+1}\, \Omega_{t+1}\left(R_{s,t+1} - R_t\right) \qquad (25)$$
$$\nu_t = \mathbb{E}_t\, \Lambda_{t,t+1}\, \Omega_{t+1}\, R_t \qquad (26)$$

Note that, taking into account that $\mu_t = \nu_{s,t}Q_t - \nu_t$, equations (22), (24), and (26) can respectively be rewritten as:

$$\left(\nu_{s,t} Q_t - \theta_t\right) N_t(i) = \left(\theta_t - \mu_t\right) D_t(i) \qquad (27)$$
$$\nu_{s,t} Q_t = \mathbb{E}_t\, \Lambda_{t,t+1}\, \Omega_{t+1}\, R_{s,t+1} \qquad (28)$$
$$\Omega_t = (1-\sigma) + \sigma\left(\nu_{s,t}Q_t(1+\lambda_t) - \theta_t\lambda_t\right) \qquad (29)$$
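The leverage relation implied by the binding incentive constraint, $\lambda_t = \mu_t/(\theta_t - \mu_t)$ and $Q_t S_t/N_t = \nu_t/(\theta_t - \mu_t)$, can be illustrated numerically; the parameter values below are purely illustrative, not a calibration.

```python
# Leverage implied by the binding incentive constraint.
def bank_leverage(mu, nu, theta):
    lam = mu / (theta - mu)   # multiplier on the incentive constraint
    lev = nu / (theta - mu)   # assets per unit of net worth, Q*S/N
    return lam, lev

lam, lev = bank_leverage(mu=0.005, nu=1.8, theta=0.4)
# As mu rises toward theta (excess returns approach the diversion bound),
# leverage explodes, amplifying shocks to bank net worth.
```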

A.5. Market clearing conditions and aggregation

After aggregating over continuing and entering bankers, banking system net worth can be shown to evolve as:

$$N_t = (\sigma + \xi)\, Q_{t-1} S_{t-1}\, R_{s,t} - \sigma\, R_{t-1} D_{t-1} \qquad (30)$$

Market clearing implies that

$$Q_t K_t = Q_t S_t, \qquad Y_t^d = C_t + I_t + G_t + a(U_t)K_{t-1}, \qquad Y_t^d = \frac{Y_t}{\Delta_t^p}, \qquad L_t^d = \frac{L_t}{\Delta_t^w}$$

Appendix B. Complete Set of Equilibrium Conditions

B.1. Main equations

Definitions:

$$U_{c,t} \equiv \psi_t^b \left(C_t - hC_{t-1}\right)^{-1} - \beta h\, \mathbb{E}_t\, \psi_{t+1}^b \left(C_{t+1} - hC_t\right)^{-1} \qquad (31)$$
$$\Lambda_{t-1,t} \equiv \beta\, \frac{U_{c,t}}{U_{c,t-1}} \qquad (32)$$

Banking system:

$$\lambda_t = \frac{\mu_t}{\theta_t - \mu_t} \qquad (33)$$
$$\left(\nu_t^s Q_t - \theta_t\right) N_t = \left(\theta_t - \mu_t\right) D_t \qquad (34)$$
$$\mu_t = \mathbb{E}_t\, \Lambda_{t,t+1}\, \Omega_{t+1}\left(R_{s,t+1} - R_t\right) \qquad (35)$$
$$\nu_t^s Q_t = \mathbb{E}_t\, \Lambda_{t,t+1}\, \Omega_{t+1}\, R_{s,t+1} \qquad (36)$$
$$\Omega_t = (1-\sigma) + \sigma\left(\nu_t^s Q_t (1+\lambda_t) - \theta_t \lambda_t\right) \qquad (37)$$
$$N_t = (\sigma+\xi)\, Q_{t-1}S_{t-1}\, R_{s,t} - \sigma\, R_{t-1} D_{t-1} \qquad (38)$$
$$D_t = Q_t S_t - N_t \qquad (39)$$

Households and production:

$$1 = \mathbb{E}_t\, \Lambda_{t,t+1}\, R_t \qquad (40)$$
$$Y_t = A_t \left(U_t K_{t-1}\right)^{\alpha}\left(L_t^d\right)^{1-\alpha} \qquad (41)$$
$$\alpha_1 \exp\big(\alpha_2(U_t - 1)\big)\, U_t = MC_t\, \alpha\, \frac{Y_t}{K_{t-1}} \qquad (42)$$
$$W_t = MC_t\, (1-\alpha)\, \frac{Y_t}{L_t^d} \qquad (43)$$
$$R_{s,t} = \frac{MC_t\, \alpha\, (Y_t/K_{t-1}) + Q_t(1-\delta) - a(U_t)}{Q_{t-1}} \qquad (44)$$
$$Q_t\psi_t^x\left[1 - \frac{\kappa}{2}\left(\frac{I_t}{I_{t-1}}-1\right)^2\right] = 1 + Q_t\psi_t^x\,\kappa\left(\frac{I_t}{I_{t-1}}-1\right)\frac{I_t}{I_{t-1}} - \mathbb{E}_t\,\Lambda_{t,t+1}\,Q_{t+1}\psi_{t+1}^x\,\kappa\left(\frac{I_{t+1}}{I_t}-1\right)\left(\frac{I_{t+1}}{I_t}\right)^2 \qquad (45)$$

Nominal frictions:

$$\Gamma_{1,t} = MC_t\, Y_t\, \Delta_t^p + \gamma\, \mathbb{E}_t\, \Lambda_{t,t+1}\, \Pi_{t+1}^{\varepsilon}\, \Pi_t^{-\iota\varepsilon}\, \Gamma_{1,t+1} \qquad (46)$$
$$\Gamma_{2,t} = Y_t\, \Delta_t^p + \gamma\, \mathbb{E}_t\, \Lambda_{t,t+1}\, \Pi_{t+1}^{\varepsilon-1}\, \Pi_t^{\iota(1-\varepsilon)}\, \Gamma_{2,t+1} \qquad (47)$$
$$\Pi_t^* = \psi_t^M\, \frac{\varepsilon}{\varepsilon-1}\, \frac{\Gamma_{1,t}}{\Gamma_{2,t}} \qquad (48)$$
$$1 = \gamma\, \Pi_t^{\varepsilon-1}\, \Pi_{t-1}^{\iota(1-\varepsilon)} + (1-\gamma)\, \Pi_t^{*\,1-\varepsilon} \qquad (49)$$
$$\Delta_t^p = \gamma\left(\frac{\Pi_{t-1}^{\iota}}{\Pi_t}\right)^{-\varepsilon} \Delta_{t-1}^p + (1-\gamma)\left(\Pi_t^*\right)^{-\varepsilon} \qquad (50)$$

Wage frictions:

$$\Gamma_{1,t}^w = (W_t^*)^{1-\varepsilon_w} W_t^{\varepsilon_w} L_t^d\, U_{c,t} + \beta\gamma_w\, \mathbb{E}_t \left(\frac{\pi_t^{\iota_w}}{\pi_{t+1}}\frac{W_t^*}{W_{t+1}^*}\right)^{1-\varepsilon_w} \Gamma_{1,t+1}^w \qquad (51)$$
$$\Gamma_{2,t}^w = \chi\,\psi_t^b\left(\frac{W_t^*}{W_t}\right)^{-\varepsilon_w(1+\varphi)}(L_t^d)^{1+\varphi} + \beta\gamma_w\, \mathbb{E}_t \left(\frac{\pi_t^{\iota_w}}{\pi_{t+1}}\frac{W_t^*}{W_{t+1}^*}\right)^{-\varepsilon_w(1+\varphi)} \Gamma_{2,t+1}^w \qquad (52)$$
$$\Gamma_{1,t}^w = \frac{\varepsilon_w}{\varepsilon_w-1}\, \psi_t^{w}\, \Gamma_{2,t}^w \qquad (53)$$
$$W_t^{1-\varepsilon_w} = \gamma_w \left(\frac{\pi_{t-1}^{\iota_w}}{\pi_t}\, W_{t-1}\right)^{1-\varepsilon_w} + (1-\gamma_w)\left(W_t^*\right)^{1-\varepsilon_w} \qquad (54)$$
$$\Delta_t^w = \gamma_w \left(\frac{\Pi_{t-1}^{\iota_w}}{\Pi_t}\, \frac{W_{t-1}}{W_t}\right)^{-\varepsilon_w} \Delta_{t-1}^w + (1-\gamma_w)\left(\frac{W_t^*}{W_t}\right)^{-\varepsilon_w} \qquad (55)$$
$$L_t^d = L_t / \Delta_t^w \qquad (56)$$

Fisher relation:

$$F_t = R_t\, \mathbb{E}_t\, \Pi_{t+1} \qquad (57)$$

Market clearing:

$$K_t = (1-\delta)K_{t-1} + \psi_t^x\left[1 - \frac{\kappa}{2}\left(\frac{I_t}{I_{t-1}}-1\right)^2\right] I_t \qquad (58)$$
$$\frac{Y_t}{\Delta_t^p} = C_t + I_t + G_t + \frac{\alpha_1}{\alpha_2}\left[\exp\big(\alpha_2(U_t-1)\big) - 1\right] K_{t-1} \qquad (59)$$
$$K_t = S_t \qquad (60)$$

and a specification for monetary policy (only for estimation):

$$F_t = F_{t-1}^{\rho_r}\left[\pi_t^{\phi_\pi}\left(\frac{Y_t}{Y_t^e}\right)^{\phi_y}\left(\frac{Y_t/Y_t^e}{Y_{t-1}/Y_{t-1}^e}\right)^{\phi_{\Delta y}}\right]^{1-\rho_r} \qquad (61)$$

B.2. The potential allocation

The complete set of equations for the efficient allocation may be written:

$$U_{c,t}^e = \psi_t^b \left(C_t^e - hC_{t-1}^e\right)^{-1} - \beta h\, \mathbb{E}_t\, \psi_{t+1}^b \left(C_{t+1}^e - hC_t^e\right)^{-1}$$
$$1 = R_t^e\, \mathbb{E}_t\, \beta\, \frac{U_{c,t+1}^e}{U_{c,t}^e} \equiv R_t^e\, \mathbb{E}_t\, \Lambda_{t,t+1}^e$$
$$Y_t^e = A_t \left(U_t^e K_{t-1}^e\right)^{\alpha}\left(L_t^e\right)^{1-\alpha}$$
$$\alpha_1 \exp\big(\alpha_2(U_t^e - 1)\big)\, U_t^e = \alpha\, \frac{Y_t^e}{K_{t-1}^e}$$
$$W_t^e = (1-\alpha)\,\frac{Y_t^e}{L_t^e}$$
$$\psi_t^b\, \chi\, (L_t^e)^{\varphi} = U_{c,t}^e\, W_t^e$$
$$R_{s,t}^e = \frac{\alpha\,(Y_t^e/K_{t-1}^e) + Q_t^e(1-\delta) - a(U_t^e)}{Q_{t-1}^e}$$
$$Q_t^e\psi_t^x\left[1 - \frac{\kappa}{2}\left(\frac{I_t^e}{I_{t-1}^e}-1\right)^2\right] = 1 + Q_t^e\psi_t^x\,\kappa\left(\frac{I_t^e}{I_{t-1}^e}-1\right)\frac{I_t^e}{I_{t-1}^e} - \mathbb{E}_t\,\Lambda_{t,t+1}^e\,Q_{t+1}^e\psi_{t+1}^x\,\kappa\left(\frac{I_{t+1}^e}{I_t^e}-1\right)\left(\frac{I_{t+1}^e}{I_t^e}\right)^2$$
$$K_t^e = (1-\delta)K_{t-1}^e + \psi_t^x\left[1 - \frac{\kappa}{2}\left(\frac{I_t^e}{I_{t-1}^e}-1\right)^2\right] I_t^e$$
$$Y_t^e = C_t^e + I_t^e + G_t + \frac{\alpha_1}{\alpha_2}\left[\exp\big(\alpha_2(U_t^e-1)\big) - 1\right] K_{t-1}^e$$
$$\mathbb{E}_t\, R_{s,t+1}^e = R_t^e$$

Appendix C. Steady State

C.1. The distorted economy

We may compute the steady state by working backwards from assumptions about certain observable quantities:

$R$, equal in steady state to $1/\beta$.

$R_s - R$, the steady state loan-deposit spread (e.g. 1.20 percent).

$R_s$, the steady state loan rate, equal to $\left[(R_s - R)/100 + (1/\beta)^4\right]^{1/4}$.

$S/N$, the steady state leverage of commercial banks (e.g. 4.5x).

$L$, the steady state share of hours worked (e.g. 1/3).

$\mathcal{M}$, the steady state mark-up, equal to $\varepsilon/(\varepsilon - 1)$.

$\Pi$, steady state price inflation, assumed to be unity.

$\Pi^*$, steady state reset price inflation, assumed to be unity.

$U$, steady state utilization, normalized to be unity.

These assumptions give us quantities accounting for the steady states of the following seven variables:

$$R,\; R_s,\; L,\; \Pi,\; \Pi^*,\; \Delta^p,\; \Delta^w$$

We can also see, from capital goods producers’ profit maximization condition, that:

$$Q = 1$$

From these, we may derive steady state values of banking system quantities:

$$\lambda = \frac{S}{N}\,\frac{R_s - R}{R}$$
$$\theta = \frac{(1-\sigma)(1-\beta R_s)}{\sigma\lambda - (1-\sigma)}\left[\frac{1+\lambda}{\lambda}\right]$$
$$\xi = \frac{(1-\sigma R) - \sigma(R_s - R)(S/N)}{R_s\,(S/N)}$$
$$\Omega = \frac{\theta}{\beta(R_s - R)}\,\frac{\lambda}{1+\lambda}$$
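The chain above is straightforward to implement. A minimal sketch, with placeholder values for $\beta$, $\sigma$, the quarterly spread, and leverage rather than the paper's calibration:

```python
# Work backwards from calibrated observables to the banking-sector parameters.
beta, sigma = 0.995, 0.97   # discount factor; banker survival probability
spread = 0.003              # quarterly loan-deposit spread, Rs - R
SN = 4.5                    # bank leverage, S/N

R  = 1.0 / beta
Rs = R + spread

lam   = SN * spread / R
theta = (1 - sigma) * (1 - beta * Rs) / (sigma * lam - (1 - sigma)) * (1 + lam) / lam
xi    = ((1 - sigma * R) - sigma * spread * SN) / (Rs * SN)
Omega = theta / (beta * spread) * lam / (1 + lam)

# Cross-check: with beta*R = 1 we have nu = Omega and mu = beta*Omega*spread, so
# the binding incentive constraint should reproduce the assumed leverage.
nu = Omega
mu = beta * Omega * spread
```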

The real side of the economy can be pinned down as follows. From the definition of the return on capital,

$$Z = R_s - (1-\delta)$$

From there we can use that $Z = (1/\mathcal{M})\,\alpha\,(Y/K)$ and the production function to find the labor-capital and output-capital ratios:

$$\frac{L}{K} = \left(\frac{\mathcal{M}Z}{\alpha}\right)^{1/(1-\alpha)}, \qquad \frac{Y}{K} = \frac{\mathcal{M}Z}{\alpha}$$

which since we know L leads to capital, output and derived quantities:

$$K = L\left(\frac{Y}{K}\right)^{1/(\alpha-1)}, \qquad S = K, \qquad N = K\,\frac{(\sigma+\xi)R_s - \sigma(1/\beta)}{1 - \sigma(1/\beta)}, \qquad D = K - N$$
$$Y = K^{\alpha}L^{1-\alpha}, \qquad W = \frac{1}{\mathcal{M}}(1-\alpha)\frac{Y}{L}, \qquad I = \delta K, \qquad C = Y(1-g_y) - \delta K, \qquad G = g_y Y, \qquad U_c = \frac{1-\beta h}{(1-h)C}$$
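The real-side chain can be sketched in the same way; all parameter values below ($\alpha$, $\delta$, the mark-up, hours, the government spending share, and the loan rate) are illustrative placeholders.

```python
# Real-side steady state, working from ratios down to levels.
alpha, delta = 0.33, 0.025
M   = 1.2          # steady-state price mark-up
L   = 1.0 / 3.0    # hours worked
g_y = 0.2          # government spending share of output
Rs  = 1.008        # quarterly gross loan rate (placeholder)

Z  = Rs - (1 - delta)            # rental rate of capital
YK = M * Z / alpha               # output-capital ratio
LK = YK ** (1.0 / (1 - alpha))   # labor-capital ratio
K  = L / LK
Y  = YK * K
I  = delta * K
G  = g_y * Y
C  = Y * (1 - g_y) - delta * K
```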

In the nominal block:

$$mc = \frac{1}{\mathcal{M}}, \qquad \Gamma_1 = \frac{mc\,Y}{1-\gamma\beta}, \qquad \Gamma_2 = \frac{Y}{1-\gamma\beta}, \qquad F = R$$

In the wage block:

$$\Gamma_1^w = \frac{W L\, U_c}{1-\beta\gamma_w}, \qquad \Gamma_2^w = \frac{\chi L^{1+\varphi}}{1-\beta\gamma_w}, \qquad W^* = W, \qquad L^d = L$$

The labor supply scale parameter in the utility function is then:

\chi = \frac{WU_c}{L^{\varphi}}
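The backward recursion for the real side can likewise be sketched numerically. In the Python fragment below, every parameter value (α, δ, h, φ, ε, g_y, and the loan-rate inputs) is an illustrative placeholder rather than one of the paper's estimates:

```python
# Illustrative real-side steady state; all parameter values are assumed.
beta, h, alpha, delta, varphi = 0.995, 0.7, 0.33, 0.025, 2.0
epsilon, g_y, L = 6.0, 0.20, 1.0 / 3.0

M = epsilon / (epsilon - 1.0)                     # steady-state price mark-up
Rs = (1.20 / 100.0 + (1.0 / beta) ** 4) ** 0.25   # loan rate, 1.20% annualized spread

Z = Rs - (1.0 - delta)                 # rental rate of capital (Q = 1)
Y_K = M * Z / alpha                    # output-capital ratio
K = L * Y_K ** (1.0 / (alpha - 1.0))   # capital from the assumed hours share
Y = K ** alpha * L ** (1.0 - alpha)    # output (U = A = 1)
W = (1.0 / M) * (1.0 - alpha) * Y / L  # real wage
I = delta * K                          # investment
G = g_y * Y                            # government spending
C = Y * (1.0 - g_y) - delta * K        # consumption
Uc = (1.0 - beta * h) / ((1.0 - h) * C)  # marginal utility of consumption
chi = W * Uc / L ** varphi             # labor-supply scale parameter

# The labor-supply condition chi * L**varphi = Uc * W holds by construction:
assert abs(chi * L ** varphi - Uc * W) < 1e-12
# Resource constraint at steady state (U = 1, so utilization costs vanish):
assert abs(Y - (C + I + G)) < 1e-12
```

The recursion is entirely backward-looking, so no nonlinear solver is needed once the observable quantities at the top of this appendix are fixed.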

Appendix D. The Linearized Model

The decentralized allocation:

(1-\beta h)(1-h)\hat{u}_{c,t} = -(1+\beta h^2)\hat{c}_t + h\hat{c}_{t-1} + \beta h\,\mathbb{E}_t\{\hat{c}_{t+1}\} + (1-h)\left(\hat{\psi}^b_t - \beta h\,\mathbb{E}_t\hat{\psi}^b_{t+1}\right) \qquad (8)
\hat{\Lambda}_t = \hat{u}_{c,t} - \hat{u}_{c,t-1} \qquad (9)
\hat{\lambda}_t = \frac{\bar{\theta}}{\bar{\theta}-\bar{\mu}}\left(\hat{\mu}_t - \hat{\theta}_t\right) \qquad (10)
\frac{\bar{v}^s}{\bar{v}^s-\bar{\theta}}\left(\hat{v}^s_t - \hat{q}_t\right) - \left(\frac{\bar{\theta}}{\bar{v}^s-\bar{\theta}} + \frac{\bar{\theta}}{\bar{\theta}-\bar{\mu}}\right)\hat{\theta}_t + \hat{n}_t = \hat{d}_t - \frac{\bar{\mu}}{\bar{\theta}-\bar{\mu}}\hat{\mu}_t \qquad (11)
\hat{\mu}_t = \mathbb{E}_t\left\{\hat{\Lambda}_{t+1} + \hat{\Omega}_{t+1} + \frac{\bar{R}_s\hat{r}_{s,t+1} - \bar{R}\hat{r}_t}{\bar{R}_s-\bar{R}}\right\} \qquad (12)
\hat{v}^s_t - \hat{q}_t = \mathbb{E}_t\left\{\hat{\Lambda}_{t+1} + \hat{\Omega}_{t+1} + \hat{r}_{s,t+1}\right\} \qquad (13)
\bar{\Omega}\hat{\Omega}_t = \sigma\bar{v}^s(1+\bar{\lambda})\left(\hat{v}^s_t - \hat{q}_t\right) + \sigma\bar{\lambda}\left(\bar{v}^s-\bar{\theta}\right)\hat{\lambda}_t - \sigma\bar{\lambda}\bar{\theta}\hat{\theta}_t \qquad (14)
\hat{n}_t = (\sigma+\xi)(\bar{S}/\bar{N})\bar{R}_s\left(\hat{q}_{t-1} + \hat{s}_{t-1} + \hat{r}_{s,t}\right) - \sigma(\bar{D}/\bar{N})\bar{R}\left(\hat{r}_{t-1} + \hat{d}_{t-1}\right) \qquad (15)
\bar{D}\hat{d}_t = \bar{S}\left(\hat{q}_t + \hat{s}_t\right) - \bar{N}\hat{n}_t \qquad (16)
\mathbb{E}_t\{\hat{\Lambda}_{t+1}\} = -\hat{r}_t \qquad (17)
\hat{y}_t = \hat{a}_t + \alpha\left(\hat{u}_t + \hat{k}_{t-1}\right) + (1-\alpha)\hat{l}_t \qquad (18)
\alpha_2\hat{u}_t = \hat{mc}_t + \hat{y}_t - \hat{k}_{t-1} - \hat{u}_t \qquad (19)
\hat{w}_t = \hat{mc}_t + \hat{y}_t - \hat{l}_t \qquad (20)
\bar{R}_s\left(\hat{q}_{t-1} + \hat{r}_{s,t}\right) = \overline{mc}\,\alpha\frac{\bar{Y}}{\bar{K}}\left(\hat{mc}_t + \hat{y}_t - \hat{k}_{t-1}\right) + (1-\delta)\hat{q}_t \qquad (21)
\hat{q}_t + \hat{\psi}^x_t = \kappa\left(\hat{i}_t - \hat{i}_{t-1}\right) - \beta\kappa\,\mathbb{E}_t\left(\hat{i}_{t+1} - \hat{i}_t\right) \qquad (22)
\hat{\Gamma}_{1,t} = (1-\gamma\beta)\left(\hat{mc}_t + \hat{y}_t\right) + \gamma\beta\,\mathbb{E}_t\left\{\hat{\Lambda}_{t+1} + \varepsilon\hat{\pi}_{t+1} - \iota\varepsilon\hat{\pi}_t + \hat{\Gamma}_{1,t+1}\right\} \qquad (23)
\hat{\Gamma}_{2,t} = (1-\gamma\beta)\hat{y}_t + \gamma\beta\,\mathbb{E}_t\left\{\hat{\Lambda}_{t+1} + (\varepsilon-1)\hat{\pi}_{t+1} - \iota(\varepsilon-1)\hat{\pi}_t + \hat{\Gamma}_{2,t+1}\right\} \qquad (24)
\hat{\pi}^*_t = \hat{\psi}^M_t + \hat{\Gamma}_{1,t} - \hat{\Gamma}_{2,t} \qquad (25)
\gamma\hat{\pi}_t - \gamma\iota\hat{\pi}_{t-1} = (1-\gamma)\hat{\pi}^*_t \qquad (26)
\hat{\Gamma}^w_{1,t} = (1-\gamma_w\beta)\left[(1-\varepsilon_w)\hat{w}^*_t + \varepsilon_w\hat{w}_t + \hat{l}_t + \hat{u}_{c,t}\right] + \gamma_w\beta\,\mathbb{E}_t\left\{(1-\varepsilon_w)\left(\iota_w\hat{\pi}_t - \hat{\pi}_{t+1} + \hat{w}^*_t - \hat{w}^*_{t+1}\right) + \hat{\Gamma}^w_{1,t+1}\right\} \qquad (27)
\hat{\Gamma}^w_{2,t} = (1-\gamma_w\beta)\left[\hat{\psi}^b_t - \varepsilon_w(1+\varphi)\left(\hat{w}^*_t - \hat{w}_t\right) + (1+\varphi)\hat{l}_t\right] + \gamma_w\beta\,\mathbb{E}_t\left\{-\varepsilon_w(1+\varphi)\left(\iota_w\hat{\pi}_t - \hat{\pi}_{t+1} + \hat{w}^*_t - \hat{w}^*_{t+1}\right) + \hat{\Gamma}^w_{2,t+1}\right\} \qquad (28)
\hat{\Gamma}^w_{1,t} = \hat{\psi}^w_t + \hat{\Gamma}^w_{2,t} \qquad (29)
\hat{w}_t = \gamma_w\left(\iota_w\hat{\pi}_{t-1} - \hat{\pi}_t + \hat{w}_{t-1}\right) + (1-\gamma_w)\hat{w}^*_t \qquad (30)
\hat{k}_t = (1-\delta)\hat{k}_{t-1} + \delta\left[\hat{i}_t + \hat{\psi}^x_t\right] \qquad (31)
\hat{y}_t = (\bar{C}/\bar{Y})\hat{c}_t + (\bar{I}/\bar{Y})\hat{i}_t + (\bar{G}/\bar{Y})\hat{g}_t \qquad (32)
\hat{k}_t = \hat{s}_t \qquad (33)
\hat{f}_t = \hat{r}_t + \mathbb{E}_t\{\hat{\pi}_{t+1}\} \qquad (34)

To close the model, specify monetary policy.
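For example, a conventional inertial Taylor-type rule (shown purely as an illustration of one possible closure, not the policy studied in this paper) adds one equation in the notation above:

\hat{f}_t = \rho_f\hat{f}_{t-1} + (1-\rho_f)\left(\phi_\pi\hat{\pi}_t + \phi_y\hat{y}_t\right)

with smoothing parameter \rho_f and response coefficients \phi_\pi > 1 and \phi_y \ge 0. The optimal-policy exercises in the main text instead replace this closure with the Ramsey or simple-objective optimality conditions.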

Appendix E. Estimation

We plot the observables used in the estimation in Figure E.1, together with the one-step-ahead forecasts implied by the model at the posterior mode used in the optimal policy exercises; these are, in effect, the model's fitted values. The model matches the real observables fairly well. Note that a data-rich strategy will almost by construction deliver a poorer fit for some variables. Following the literature, the measurement errors that permit data-rich estimation are specified as i.i.d. shocks, whereas in the data the wedge between data-rich observables (for example, between GDP-deflator and CPI inflation) tends to be fairly persistent. A one-step-ahead forecast cannot capture this persistence, so the data-rich observables, such as inflation and spreads, do not fit as closely as the other variables.

Figure E.1.