Monetary and Macroprudential Policy with Endogenous Risk

Authors: Tobias Adrian, Fernando Duarte, Nellie Liang, and Pawel Zabczyk (International Monetary Fund)
Abstract

We extend the New Keynesian (NK) model to include endogenous risk. Lower interest rates not only shift consumption intertemporally but also conditional output risk via their impact on risk-taking, giving rise to a vulnerability channel of monetary policy. The model fits the conditional output gap distribution and can account for medium-term increases in downside risks when financial conditions are loose. The policy prescriptions are very different from those in the standard NK model: monetary policy that focuses purely on inflation and output-gap stabilization can lead to instability. Macroprudential measures can mitigate the intertemporal risk-return tradeoff created by the vulnerability channel.

1. Introduction

The price of risk plays a central role in the macroeconomy. It is determined by an interplay of preferences, expectations formation, and institutional characteristics (Woodford, 2019; Bordalo, Gennaioli, Shleifer, and Terry, 2019; He and Krishnamurthy, 2013; Brunnermeier and Sannikov, 2014). The price of risk has been shown to forecast future growth via credit spreads (Gilchrist and Zakrajšek, 2012), recessions via the term spread (Estrella and Mishkin, 1998), and crises via funding market spreads (Bernanke, 2018). Financial conditions shape the conditional distribution of real activity (Adrian, Boyarchenko, and Giannone, 2019) and help forecast the conditional mean and volatility of output and output gap growth many periods into the future (Adrian, Grinberg, Liang, and Malik, 2018; Adrian and Duarte, 2018). At the same time, the price of risk is a first-order determinant of financial conditions. The empirical linkages between the price of risk, financial conditions, and real outcomes suggest practical benefits from associating the price of risk with financial conditions within a parsimonious macroeconomic framework.

In this paper we show how the three-equation New Keynesian (NK) model can be modified to that effect, and we apply the resulting framework to analyze monetary and macroprudential policies. We use the NK model as our starting point because it continues to be the most common workhorse model for many macroeconomic debates: either directly in its three equation form, or because these key relationships are at the core of more quantitatively oriented setups. We modify the NK model in two ways. First, we introduce an endogenous “financial conditions” wedge in the IS equation. The dynamic response of this wedge to exogenous shocks depends on past financial conditions, current output, and expected output. Second, we introduce “financial vulnerabilities” by allowing the second moment of output to depend on state variables, including financial conditions. Financial vulnerabilities produce endogenous time-variation in second moments and generate, in certain states, a much higher sensitivity of macroeconomic variables to shocks than in the NK model. Together, the two new elements affect the transmission and amplification of financial shocks and shape the entire probability distribution of output. We label the model NKV for “New Keynesian Vulnerability”.

We show that the NKV model, when appropriately calibrated, replicates not only the same basic moments that the NK model does, but also the key empirical regularities that link financial and macroeconomic variables uncovered in the recent literature by Adrian, Boyarchenko, and Giannone (2019); Adrian, Grinberg, Liang, and Malik (2018); Adrian and Duarte (2018): (i) Loose financial conditions today predict a higher mean and lower volatility of the one- to four-quarter-ahead conditional output distribution, but lower mean and higher volatility at longer horizons (and vice-versa for tight financial conditions); (ii) Financial conditions predict the term structure of growth-at-risk (the low quantiles of the future conditional output distribution at different predictive horizons), even when controlling for macroeconomic variables; (iii) The term-structures of growth-at-risk cross: looser initial financial conditions lead to downside risks to output that are lower over the following two years, but higher over the following three to five years, than when initial financial conditions are tight; (iv) the volatility paradox (Brunnermeier and Sannikov, 2014): a compressed price of risk predicts low volatility of output in the short run, but increased volatility in the medium term; (v) Financial conditions are not good predictors of the conditional distribution for inflation. Statements (i)-(iv) also hold when output is replaced by the output gap, which implies that these empirical regularities are mostly cyclical phenomena.

The NKV model is a small-scale dynamic stochastic general equilibrium (DSGE) model that has the same number of shocks as the NK model, one more equation, and only five more parameters. Like the three-equation NK model, it is intended more as an illustration of the economic forces at work than a state-of-the-art quantitative framework for forecasting and policy analysis. Nonetheless, the NKV model delivers (i)-(iv) not only qualitatively, but also quantitatively. More generally, we highlight dimensions along which our setup does considerably better than any linearized model and we show that it can also outperform non-linear, quantitative models recently used for policy analysis.

One of our main findings is that instrument rules that provide a good approximation to optimal policy in the NK model lead to real and financial instability in the NKV model, once financial conditions are accounted for.1 This leads us to argue that relevant policy models need to explicitly account for real-financial interactions, and to propose a model which does so in a parsimonious fashion.

Existing alternatives to the three equation NK model fall into one of three main categories. The first comprises econometric models featuring financial conditions, such as the quantile regression based estimates proposed in Adrian, Boyarchenko, and Giannone (2019). The second consists of DSGE models in which spreads / risk premia are exogenous, either small-scale, such as Sims and Wu (2019), or more quantitatively oriented like Smets and Wouters (2007). The third includes medium and large scale DSGE models, in which spreads / financial conditions are endogenous (Bernanke, Gertler, and Gilchrist, 1999; Gertler and Karadi, 2011; Christiano, Motto, and Rostagno, 2014).

Because of the Lucas (1976) critique, the first two groups of models, while undeniably useful and informative, are not particularly well-suited for policy purposes. In the first case, much like in the context of Lucas’s original contribution, changes in the conduct of policy would be expected to affect the laws of motion of financial conditions, but the estimates provide little guidance as to how. In the second case, by construction, policy changes leave the dynamics of spreads and risk premia unaffected, i.e., in that sense, there are no real-financial interactions. Models in the third group tend to be considerably more complicated, and, as we show, have trouble accounting for the stylized facts (i)-(iv) listed above, even when the underlying non-linearities are allowed to play a role.2 Furthermore, the interplay of the non-linearities tends to make the intuitive interpretation of results more challenging, which is perhaps one reason why none of those setups has been widely accepted as a “new synthesis”.

One of our contributions is to show that empirical performance along the “risk” dimension can be improved, while considerably decreasing the complexity of the resulting specification. In addition, the inclusion of financial conditions considerably enriches the propagation mechanism, which the real business cycle (RBC) model, and the NK model built on it, have been criticized for lacking.3 Broadly, the intuition underlying our results is simple: risk can endogenously build up in “good times”, with the magnitude of the resulting financial vulnerability only revealed during a subsequent downcycle.

The two key aspects of our narrative are an amplification mechanism that captures endogenous fluctuations in risk, and a persistent propagation of shocks, with financial conditions materially overshooting their long-run values for several years. Much like with the Calvo (1983) and Rotemberg (1982) pricing assumptions, which yield identical dynamics in the standard reduced-form NK model, the underlying amplification and propagation mechanisms of the NKV model are consistent with a variety of microfoundations. We show that the amplification we propose could arise either through endogenous heteroskedasticity of demand shocks, or because of occasionally binding Value-at-Risk (VaR) constraints of intermediaries (as in Adrian and Duarte, 2018). Similarly, we show that the persistence in propagation can be microfounded by either using diagnostic expectation formation, in line with Bordalo, Gennaioli, and Shleifer (2018), or introducing slow moving changes in leverage arising from a standard financial accelerator mechanism (Bernanke, Gertler, and Gilchrist, 1999). The fact that the NKV can be microfounded with explicit preferences, expectations, and so on, addresses the Lucas (1976) critique, a crucial step for credible policy analysis.4

How should monetary policy and macroprudential policy be conducted? First, monetary policy that aims at fully stabilizing the volatility of the output gap and inflation leads to explosive financial conditions. Second, a suitable combination of macroprudential and monetary policies can ensure efficiency if macroprudential policy is sufficiently potent to independently stabilize financial conditions. Third, when macroprudential tools are not perfectly effective – as is arguably the case in the real world – a Taylor rule augmented to include expected financial conditions increases welfare relative to a standard Taylor rule, effectively eliminating states of high vulnerability.

The remainder of the paper is organized as follows: Section 2 provides an overview of related literature, helping put our findings in context. Section 3 describes the model and studies its theoretical properties. Section 4 discusses how we solve, estimate and validate the model. We then study the interplay of monetary and macroprudential policies in Section 5, characterizing conditions under which the separation principle provides a good guide to policy, and highlighting circumstances in which aggressive monetary policy can lead to financial and real instability. Section 6 uses the model to study alternative policy paths, further clarifying how the conduct of policy endogenously affects the dynamics of financial conditions, and characterizing second-best policies, which help minimize vulnerability when macroprudential tools are unavailable. Section 7 concludes.

2. Related Literature

Our paper is related to research that positions the financial sector at the heart of macroeconomic fluctuations and the transmission mechanism. Woodford (2010), for example, incorporates credit conditions by augmenting a Keynesian IS-LM model with financial intermediary frictions, based on Curdia and Woodford (2010). In that setting, the additional friction gives rise to an extra state variable that can be mapped into credit spreads, and optimal policy is shown to explicitly depend on credit supply conditions, broadly in line with our findings.5 Financial frictions can arise when lenders face asymmetric information, in which case financial conditions have the propensity to improve the net worth of borrowers, and, through a financial accelerator effect, increase credit for households and businesses. Macrofinancial linkages can also arise because financial intermediaries respond endogenously to looser financial conditions, with institutional constraints providing further amplification.6 As discussed, the specification of the NKV is consistent with several of these narratives, with low rates and a low price of risk boosting current growth while simultaneously making the economy more vulnerable to future shocks and future financial instability.7

In the behavioral literature, diagnostic expectations of investors can give rise to extrapolative forecasting heuristics and lead to the neglect of tail risk when recent news has been good, generating predictable dynamics of credit spreads (Bordalo, Gennaioli, and Shleifer, 2018). Extrapolative beliefs in the stock market can amplify technology shocks, giving rise to booms and busts in stock prices and the real economy, with deviations from rational expectations potentially playing a more powerful role during times of low interest rates (Adam and Merkel, 2019). Extrapolative beliefs in credit markets can also create a feedback loop, because investors will refinance maturing debt on more favorable terms when defaults have been low, reducing risks in the short run, even if underlying cash flow fundamentals are weakening (Greenwood, Hanson, and Jin, 2019). As discussed, our specification of the process governing the dynamics of financial conditions is closely related to some of these behavioral theories of expectation formation and we manage to incorporate these without sacrificing tractability, allowing our model to be readily applied to study different policy questions.

Our paper is also related to those studying how monetary and macroprudential policy could reduce risks to financial stability. In particular, we revisit the separation principle in which monetary policy should focus on price stability and real activity, while macroprudential policies should be directed to reduce vulnerabilities consistent with an acceptable level of financial stability risk.8 Svensson (2017) estimates the costs and benefits of using monetary policy to prevent a severe recession in a model where the costs are related to higher unemployment, while the benefits are associated with a lower probability of a future recession on account of reduced household borrowing. In that model, the costs of using monetary policy to reduce household credit are much higher than the benefits because tighter policy lowers the probability of a severe recession by only a small amount and does not markedly reduce its severity.

Our model instead accounts for the fact that risk is endogenous to monetary policy, and that monetary policy that ignores financial vulnerabilities will lead to booms and busts with greater amplitude. In a similar vein, Filardo and Rungcharoenkitkul (2016) incorporate a financial cycle in which booms and busts are recurring, and show that in their setup monetary policy can constrain the accumulation of imbalances and significantly lessen the duration and costs associated with crises.9 Arguably, however, our approach is both more parsimonious and more easily portable to larger models, including those typically used in central banks for policy purposes.

3. The NKV Model

3.1. A risk-augmented IS curve. We consider an economy with a large number of identical households seeking to maximize the expected value of their utility,

\[
\mathbb{E}_0 \left\{ \sum_{t=0}^{\infty} \beta^t \xi_t \left( \frac{C_t^{1-\sigma}}{1-\sigma} - \frac{N_t^{1+\varphi}}{1+\varphi} \right) \right\},
\]

where β ∈ (0,1) is the discount factor, σ > 0 is the inverse of the elasticity of intertemporal substitution, and φ > 0 is the inverse of the Frisch elasticity of labor supply. Period utility depends on the level of consumption Ct, hours worked Nt, and a random variable ξt > 0. The variable ξt will act as a demand shifter and can be interpreted in several ways. It can represent a preference shock or a change of measure (i.e., a belief shock). We will consider both exogenous and endogenous versions of ξt; what is necessary for our derivations to be correct is that households take it as given. Financial markets are complete. Each household faces the flow budget constraint

\[
P_t C_t + D_t \le F_{t-1} + W_t N_t + T_t \tag{1}
\]

where Pt is the price level, Dt is the nominal value of the household's end-of-period portfolio of financial assets, Ft-1 is the beginning-of-period total financial wealth of the household, Wt is the nominal wage, and Tt is the lump sum component of income (that can include government transfers and dividends from firms). We note that households are not restricted to holding only riskless bonds, as the variable Dt refers to the value of a portfolio of any number of state-contingent assets (including a riskless bond, or financial instruments allowing it to be replicated).

The absence of arbitrage implies that there exists a stochastic discount factor Q_{t+1} that prices all assets. For example, the household's financial wealth satisfies D_t = E_t[Q_{t+1} F_{t+1}]. In equilibrium, the optimality conditions of the representative household imply that

\[
Q_{t+1} = \beta \frac{\xi_{t+1}}{\xi_t} \left( \frac{C_{t+1}}{C_t} \right)^{-\sigma} \frac{1}{\Pi_{t+1}},
\]

where Πt+1 = Pt+1/Pt is gross inflation between t and t + 1.

The one-period risk-free nominal interest rate faced by the household is i_t^{int} ≡ 1/E_t[Q_{t+1}] − 1, which is known at time t. It differs from the interest rate ĩt set by the central bank according to

\[
i_t^{int} \equiv \tilde{i}_t + \widetilde{spr}_t \tag{2}
\]

where \widetilde{spr}_t is an endogenous spread linked to the current state of economic and financial conditions. The first-order optimality condition of the representative household (the Euler equation) for its holdings of the riskless nominal bond is

\[
\frac{1}{1+\tilde{i}_t+\widetilde{spr}_t} = \mathbb{E}_t \left[ \beta \frac{\xi_{t+1}}{\xi_t} \left( \frac{C_{t+1}}{C_t} \right)^{-\sigma} \frac{1}{\Pi_{t+1}} \right]. \tag{3}
\]

Appendix B shows that equation 3 can be approximated by

\[
y_t^{gap} = \mathbb{E}_t y_{t+1}^{gap} - \frac{1}{\sigma}\mathbb{E}_t(i_t - \pi_{t+1}) - \frac{1}{\sigma} spr_t - \frac{1}{\sigma}\mathbb{E}_t[g_{t+1}]. \tag{4}
\]

In equation 4, y_t^{gap} is the output gap, i_t is the deviation of the central bank nominal rate ĩt from its steady-state value, π_t is inflation, spr_t is the deviation of the spread \widetilde{spr}_t from its steady-state value, and g_{t+1} is the deviation of log(ξ_{t+1}/ξ_t) from its steady-state value. Equation 4 is a standard log-linear first-order Taylor approximation, with the exception that we have preserved some of the potential non-linearities in the last term, (1/σ)E_t[g_{t+1}], by not approximating it as a linear function of state or other primitive variables. If, for example, E_t[g_{t+1}] is a non-linear function of, say, the output gap, then equation 4 is non-linear. However, if ξ_t is exogenous, then E_t[g_{t+1}] is also exogenous and equation 4 becomes a true linear approximation.

We decompose Et [gt+1] into two multiplicative terms

\[
\mathbb{E}_t[g_{t+1}] = V(X_{t-1})\,\varepsilon_t^{ygap}
\]

where ε_t^{ygap} is a random variable with zero conditional mean and V(X_{t-1}) is a function of time t-1 state variables denoted by X_{t-1}.10 Substituting this decomposition into equation 4, we arrive at the “risk-adjusted” IS curve

\[
y_t^{gap} = \mathbb{E}_t y_{t+1}^{gap} - \frac{1}{\sigma}\mathbb{E}_t(i_t - \pi_{t+1}) - \frac{1}{\sigma} spr_t - \frac{1}{\sigma} V(X_{t-1})\,\varepsilon_t^{ygap}. \tag{5}
\]

There are two new elements in equation 5 relative to the standard textbook version of the IS curve. First, there is a “financial conditions term” −(1/σ)spr_t that arises because there is a spread between the interest rate i_t^{int} that households can access and the interest rate ĩt set by the central bank, as captured in equation 2.

The second element is the time-varying stochastic volatility V(X_{t-1}) in the disturbance term (1/σ)V(X_{t-1})ε_t^{ygap}. It arises from the demand shifter ξ_t in the representative household's utility. We assume that the exogenous disturbance ξ_t is such that ε_t^{ygap} is an i.i.d. N(0,1) stochastic process; writing the disturbance as the product of V(X_{t-1}) and such a shock is then without loss of generality.

Intuitively, given the assumed positive relationship between spreads and financial conditions, higher spreads and tighter financial conditions are associated with lower contemporaneous values of the output gap and, ceteris paribus, push down on demand and current activity. In the next section, we propose two alternative microfoundations for how the spread sprt is determined. The first is rooted in the diagnostic expectations approach, which nests rational expectations as a special case (Bordalo, Gennaioli, and Shleifer, 2018). The second uses the financial accelerator model of Bernanke, Gertler, and Gilchrist (1999).

There is another way of deriving Equation 5 that is directly motivated by financial intermediation frictions. In particular, Adrian and Duarte (2018) focus on the role of occasionally binding Value-at-Risk (VaR) constraints of financial intermediaries in a general equilibrium model and arrive at an equivalent specification of the dynamic IS curve. We refer the reader to Adrian and Duarte (2018) for full details of the derivation (see, in particular, Equations 6 and 7 on p.8), but stress that our proposed IS curve extension is compatible with at least two alternative microfoundations.

3.2. Microfoundations. We now offer microfoundations for the dynamics of the spr_t wedge and the vulnerability function V(X_{t-1}) appearing in equation 5, which hold irrespective of the preferred method of deriving that aggregate relationship. For the former, there are again several alternative ways of arriving at the specification endogenously linking financial and real conditions. The one we focus on first starts by assuming that households form beliefs according to diagnostic expectations (Bordalo, Gennaioli, and Shleifer, 2018; Bordalo, Gennaioli, Shleifer, and Terry, 2019; Greenwood, Hanson, and Jin, 2019).

Diagnostic expectations refer to a belief formation mechanism based on Kahneman and Tversky's representativeness heuristic. The expectations formation process is forward looking and depends on the underlying shocks and structure of the economy, making it immune to the Lucas critique.11 Intuitively, diagnostic expectations capture over-reaction to news and deliver extrapolation and neglect of risk in a unified framework (see also Bordalo, Gennaioli, and Shleifer, 2018, for additional motivation as well as careful testing of the underlying hypothesis). The severity of judging by “representativeness” rather than by the true probability distribution is indexed by a single variable θ ≥ 0, where θ = 0 recovers rational expectations as a special case. Bordalo, Gennaioli, and Shleifer (2018) show that diagnostic expectations, when combined with a simple contracting framework between firms and households, can rationalize several empirical regularities about the connection of the entire distribution of asset prices and economic outcomes to expectations formation. Bordalo, Gennaioli, La Porta, and Shleifer (2019) show that diagnostic expectations can also account for the empirical shape of the relation between the expectations of analysts and the actual performance of firms, as well as for the relation between analysts' expectations and the predictability of stock returns.

Under the diagnostic expectations microfoundation, the random variable ξ_t that enters the household utility is a change of measure that converts true probabilities into diagnostic ones. In other words, diagnostic expectations make the agents behave as if the probability density that households use to compute expectations is distorted. The distortion is endogenously determined and depends on expectations of future equilibrium variables, but households take it as given (they cannot choose their beliefs). In equation 5, the distortion induced by diagnostic expectations relative to rational expectations is (1/σ)V(X_{t-1})ε_t^{ygap}. We show in Appendix C that if the fundamentals of the economy, denoted by ε_t, follow an AR(1) process

\[
\varepsilon_{t+1} = b\,\varepsilon_t + \upsilon_{t+1} \tag{6}
\]

where 0 < b < 1 and υ_t is a sequence of i.i.d. normal random variables, then the shock in the risk-adjusted IS curve, ε_t^{ygap}, is a constant multiple of υ_t, and hence also i.i.d. normal. More importantly, we show in the appendix that V(X_{t-1}) maps exactly to θ, the severity of judging by “representativeness”. We will later pick the shape of the function V(·) to determine how state variables influence the degree of distortion in beliefs induced by diagnostic expectations. Similar to Bordalo, Gennaioli, Kwon, and Shleifer (2020), we will obtain time-varying amplification (or “excess volatility”) arising from diagnostic expectations.
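To see why the distortion is proportional to the innovation, consider the following sketch (the full argument is in Appendix C). Under diagnostic expectations, the forecast of the AR(1) fundamentals in equation 6 over-weights states made representative by recent news:
\[
\mathbb{E}_t^{\theta}[\varepsilon_{t+1}]
= \mathbb{E}_t[\varepsilon_{t+1}] + \theta\left(\mathbb{E}_t[\varepsilon_{t+1}] - \mathbb{E}_{t-1}[\varepsilon_{t+1}]\right)
= b\,\varepsilon_t + \theta b\left(\varepsilon_t - b\,\varepsilon_{t-1}\right)
= b\,\varepsilon_t + \theta b\,\upsilon_t,
\]
so the wedge between the diagnostic and rational forecasts is θbυ_t, a constant multiple of the i.i.d. normal innovation, and it vanishes when θ = 0.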

As alluded to above, the spread sprt can also be microfounded using diagnostic expectations. In the baseline NK model, there are no financial frictions. Households can finance firms with debt, equity, or a combination of both. Since the Modigliani-Miller theorem holds in our setup, how firms are financed is irrelevant. Bordalo, Gennaioli, and Shleifer (2018) show that under diagnostic expectations, a simple real contracting model in which investment depends on the perceived probability with which each firm repays its debt in the following period gives the following ARMA(1,1) equilibrium law of motion for spreads12

\[
SPR_t = (1-b)\,\sigma_0 + b\,SPR_{t-1} - (1+\theta)\,b\,\sigma_1\,\varepsilon_t + \theta\,b^2\,\sigma_1\,\varepsilon_{t-1}.
\]

The coefficients σ0 and σ1 denote the average level of spreads and their sensitivity to expectations of future fundamentals, respectively. The distortions of diagnostic expectations affect the perceived safety of different firms. Intuitively, in good times, households are optimistic, spreads are compressed and firms issue debt, but when times turn sour, households become pessimistic, spreads rise and firms cut debt issuance and investment.13
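The over-reaction built into this law of motion can be illustrated with a short simulation. The sketch below is purely illustrative: all parameter values are hypothetical choices, not the paper's estimates, and setting theta = 0 recovers the rational-expectations benchmark.

```python
import numpy as np

# Illustrative simulation of the ARMA(1,1) law of motion for spreads under
# diagnostic expectations. Parameter values are hypothetical; sigma1 < 0 so
# that spreads compress on good news about fundamentals.
def simulate_spreads(theta, b=0.8, sigma0=2.0, sigma1=-0.5, T=20_000, seed=0):
    rng = np.random.default_rng(seed)
    ups = rng.standard_normal(T)   # i.i.d. innovations upsilon_t
    eps = np.zeros(T)              # AR(1) fundamentals, equation (6)
    spr = np.full(T, sigma0)       # spreads start at their mean sigma0
    for t in range(1, T):
        eps[t] = b * eps[t - 1] + ups[t]
        spr[t] = ((1 - b) * sigma0 + b * spr[t - 1]
                  - (1 + theta) * b * sigma1 * eps[t]
                  + theta * b**2 * sigma1 * eps[t - 1])
    return spr

diagnostic = simulate_spreads(theta=0.9)  # diagnostic expectations
rational = simulate_spreads(theta=0.0)    # rational-expectations benchmark
```

Comparing the two series (same innovation draws) shows the amplification induced by θ > 0: diagnostic spreads are more volatile than their rational counterparts, compressing further in good times and spiking higher in bad ones.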

To sum up, if households have diagnostic expectations and finance firms, we can microfound the financial conditions spread sprt and the stochastic volatility V(Xt-1) in the risk-adjusted IS curve 5. For our purposes, it is important that monetary policy can change neither the expectations formation process nor the shape of the debt contract between firms and investors, so the central bank takes these elements as given.

Letting

\[
spr_t \equiv SPR_t - \sigma_0
\]

and adopting a simplifying assumption that financial conditions ηt are primarily pinned down by spreads, and hence approximately equal up to some constant scaling factor, we arrive at

\[
\eta_t = b\,\eta_{t-1} - (1+\theta)\,b\,\frac{\sigma_1}{\sigma \gamma_\eta}\,\varepsilon_t + \theta\,b^2\,\frac{\sigma_1}{\sigma \gamma_\eta}\,\varepsilon_{t-1} \tag{7}
\]

which can be rearranged as

\[
\varepsilon_t = \varrho\,\eta_t + H\,\eta_{t-1} + \nu\,\varepsilon_{t-1}
\]

where

\[
\varrho \equiv -\frac{\sigma \gamma_\eta}{(1+\theta)\,b\,\sigma_1}, \qquad H \equiv \frac{\sigma \gamma_\eta}{(1+\theta)\,\sigma_1}, \qquad \nu \equiv \frac{\theta b}{1+\theta}.
\]

Under suitable convergence criteria (see Appendix C), equation (7) implies that financial conditions can be approximated by

\[
\eta_t \approx u_1\,\eta_{t-1} + u_2\,\eta_{t-2} - (1+\theta)\,b\,\sigma_1\,\varepsilon_t
\]

where, importantly, the terms omitted to arrive at this approximate relationship do not in any way depend on the conduct of monetary policy. Since we have already established that

\[
y_t^{gap} = \mathbb{E}_t y_{t+1}^{gap} - \frac{1}{\sigma}\mathbb{E}_t(i_t - \pi_{t+1}) - \gamma_\eta\,\eta_t - \varepsilon_t
\]

where \varepsilon_t \equiv \mathbb{E}_t[g^{\xi}_{t+1}]/\sigma. Therefore

\[
\varepsilon_t = -y_t^{gap} + \mathbb{E}_t y_{t+1}^{gap} - \frac{1}{\sigma}\mathbb{E}_t(i_t - \pi_{t+1}) - \gamma_\eta\,\eta_t
\]

and the process for financial conditions can be rearranged and expressed as

\[
\eta_t \approx \lambda_\eta\,\eta_{t-1} + \lambda_{\eta\eta}\,\eta_{t-2} - \theta_y\,y_t^{gap} + \theta_\eta\,\mathbb{E}_t y_{t+1}^{gap} \tag{8}
\]

where the coefficients in Equation 8 are given by

\[
\lambda_\eta \equiv \frac{u_1}{1-(1+\theta)\,b\,\sigma_1\,\gamma_\eta}, \qquad \lambda_{\eta\eta} \equiv \frac{u_2}{1-(1+\theta)\,b\,\sigma_1\,\gamma_\eta}, \qquad \theta_y \equiv -\frac{(1+\theta)\,b\,\sigma_1}{1+(1+\theta)\,\gamma_\eta}, \qquad \theta_\eta \equiv -\frac{(1+\theta)\,b\,\sigma_1}{1+(1+\theta)\,\gamma_\eta}.
\]

Here σ_1 < 0 implies that θ_y > 0 and θ_η > 0, which we impose when estimating the model.

Equation 8 would also approximately hold if, instead of relying on the diagnostic expectation model, we used a standard financial accelerator setup (Appendix C). Accordingly, and in line with the extended IS curve specification, our law of motion for financial conditions is compatible with at least two microfoundations, and can obtain under both diagnostic and rational expectations.

3.3. Links between financial conditions and the price of risk. We now highlight the relationship linking financial conditions ηt and the pricing kernel’s conditional volatility, i.e., the price of risk. As explained in Section 3.1 above, we can think of the household block as being entirely standard, with the resulting consumption-based log-SDF m˜t given by

\[
\tilde{m}_t \equiv \log(\tilde{M}_t) = \log\left( \beta\,\frac{\xi_t\,u'(C_t)}{\xi_{t-1}\,u'(C_{t-1})} \right) = \log\beta + V(X_{t-1})\,\varepsilon_t^{ygap} - \sigma\left(y_t^{gap} - y_{t-1}^{gap}\right)
\]

where, as done previously, we have exploited the assumption of a CRRA utility function, goods-market clearing c_t ≡ y_t, and y_t^{nat} ≡ 0, implying y_t^{gap} = y_t.14 An important feature of our four-equation specification is that the state variable is X_{t-1} = (η_{t-1}, η_{t-2}), and expanding the model by adding in a definition of the log-SDF would enlarge the set of states to X_{t-1} = {η_{t-1}, η_{t-2}, y_{t-1}^{gap}}. The equilibrium solution for the log-SDF is of the form

\[
m_t = \tilde{m}_t - \log\beta = a_1\,\eta_{t-1} + a_2\,\eta_{t-2} + a_3\,y_{t-1}^{gap} + b_m\,V(\eta_{t-1}, \eta_{t-2}, y_{t-1}^{gap})\,\varepsilon_t^{ygap}
\]

where the ai’s and bm can be found by solving the linear, homoskedastic model, and where they are, respectively, the elements of A and B characterizing how the stochastic discount factor loads on the state variables and shock ϵtV(Xt)εtygap.15 It follows that the conditional mean and variance of mt are given by

\[
\mathbb{E}_t m_{t+1} = a_1\,\eta_t + a_2\,\eta_{t-1} + a_3\,y_t^{gap}
\]

and

\[
\mathbb{E}_t\left(m_{t+1} - \mathbb{E}_t m_{t+1}\right)^2 = \mathbb{E}_t\left( b_m\,V(\eta_t, \eta_{t-1}, y_t^{gap})\,\varepsilon_{t+1}^{ygap} \right)^2 = \left( b_m\,V(\eta_t, \eta_{t-1}, y_t^{gap})\,\sigma_y \right)^2.
\]

Alternatively, after plugging in the definition of V(·,·,·), the conditional volatility of the log pricing kernel m_{t+1} can be expressed as16

\[
vol\left(m_t \,\middle|\, \mathcal{F}_{t-1}\right) = |b_m|\,\sigma_y \left( \nu - \tilde{\varrho}\cdot[\eta_{t-1}, \eta_{t-2}, \eta_{t-3}, \varepsilon_{t-1}^{ygap}] \right)^{+} \tag{9}
\]

where x^+ ≡ max{x, 0} and \mathcal{F}_{t-1} is the time t-1 information set.17 This expression establishes that in our simple NKV model, the pricing kernel's conditional volatility is piecewise-affine in η and the IS curve wedge ε^{ygap}.18

The fact that ηt depends indirectly on interest rates, via the output gap in Equation (8) and the IS curve in Equation (5), is the “risk-taking channel” of monetary policy. The “vulnerability channel,” in contrast, is present because lower interest rates directly impact the price of risk and V(Xt), that is, the conditional volatility of output. It follows that when making monetary policy decisions, the policymaker has to consider not only the output-inflation tradeoff, but also an intertemporal risk-return tradeoff introduced by the “vulnerability channel.” While easier monetary policy leads to lower volatility, thus allowing short-term risk-taking, we will show that the lower short-run volatility is also associated with larger medium-term risk.19

4. Model Solution and Estimation

We now discuss the strategy adopted to solve the model and estimate its parameter values.

4.1. Solution. The NKV comprises the following four equations

\[
y_t^{gap} = \mathbb{E}_t y_{t+1}^{gap} - \frac{1}{\sigma}\mathbb{E}_t(i_t - \pi_{t+1}) - \gamma_\eta\,\eta_t - V(X_{t-1})\,\varepsilon_t^{ygap} \tag{10}
\]
\[
\pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\,y_t^{gap} \tag{11}
\]
\[
\eta_t = \lambda_\eta\,\eta_{t-1} + \lambda_{\eta\eta}\,\eta_{t-2} - \theta_y\,y_t^{gap} + \theta_\eta\,\mathbb{E}_t y_{t+1}^{gap} \tag{12}
\]
\[
i_t = \phi_\pi\,\pi_t + \phi_y\,y_t^{gap}. \tag{13}
\]

Clearly, the model is linear except for the presence of the vulnerability function V (·).

To solve the model, consider first an alternative linear specification, in which Equation 10 is replaced by

$y_t^{gap} = \mathbb{E}_t y_{t+1}^{gap} - \frac{1}{\sigma} \mathbb{E}_t (i_t - \pi_{t+1}) - \gamma_\eta \eta_t - \tilde{\varepsilon}_t^{ygap} .$

The solution to this simplified model is of the following form,

$Y_t = A Y_{t-1} + B \tilde{\varepsilon}_t^{ygap} ,$

where $Y_t = [y_t^{gap}, \pi_t, \eta_t, i_t]'$ and $A$, $B$ are constant matrices. We also know that the solution to this simplified linear model has the certainty equivalence property, which means that neither the $A$ nor the $B$ matrix is affected by the volatility of $\tilde{\varepsilon}_t^{ygap}$. This observation allows us to substitute $\tilde{\varepsilon}_t^{ygap} \equiv V(X_{t-1})\varepsilon_t^{ygap}$ and use

$Y_t = A Y_{t-1} + B V(X_{t-1})\, \varepsilon_t^{ygap}$ (14)

as an approximation to the NKV model solution, effectively exploiting the independence of the A and B matrices from the vulnerability function V(·). Such an approach has two main advantages. First, the one-step-ahead conditional distributions remain tractably normal, allowing for quick, analytical evaluation of conditional moments. Importantly, however, neither the k-step ahead conditional distributions (with k > 1) nor the ergodic distributions are restricted to be Gaussian. Second, the evolution of the conditional mean is not affected by the specification of the vulnerability function, allowing us to split the estimation process into two steps, effectively separating the estimation of the “financial conditions” channel and the “financial vulnerabilities” channel (discussed in more detail in Section 4.3).

The main drawback of this first approach is that impulse responses and simulations cannot be directly obtained using standard model solution software. Accordingly, we additionally use pruned, second- and third-order perturbation approximations to the solution of the non-linear system (10) – (13). These can be readily obtained using Dynare (see also Adjemian, Bastani, Juillard, Karame, Maih, Mihoubi, Perendia, Pfeifer, Ratto, and Villemot, 2011), which can also be used to generate impulse responses, run simulations, and which provides a convenient cross-check on solution accuracy.20
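The mechanics of the approximation in Equation (14) can be sketched in a few lines. Everything below is illustrative: the matrices, the vulnerability parameters, and the collapsed two-variable state are made up, not the paper's estimates. The point is only to show how scaling a homoskedastic Gaussian shock by $V(X_{t-1})$ delivers conditionally normal one-step-ahead distributions with state-dependent variance.

```python
import random

random.seed(0)

# Illustrative coefficients (NOT the paper's estimates), used purely to show
# the mechanics of the recursion Y_t = A Y_{t-1} + B V(X_{t-1}) eps_t in
# Equation (14), with the state collapsed to Y_t = (eta_t, y_t_gap).
A = ((0.8, -0.3),
     (-0.2, 0.4))          # stable: eigenvalues roughly 0.92 and 0.28
B = (0.5, 1.0)
NU, RHO = 1.0, (0.3, 0.2)  # hypothetical vulnerability parameters

def vulnerability(state):
    # V(X) = max(nu - rho'X, 0): state-dependent shock volatility,
    # truncated so it never turns negative.
    return max(NU - sum(r * s for r, s in zip(RHO, state)), 0.0)

Y = (0.0, 0.0)
path = [Y]
for _ in range(500):
    v = vulnerability(Y)              # volatility is set by last period's state
    eps = random.gauss(0.0, 1.0)      # homoskedastic structural shock
    Y = tuple(sum(a * s for a, s in zip(row, Y)) + b * v * eps
              for row, b in zip(A, B))
    path.append(Y)

# One-step-ahead distributions are conditionally Gaussian, but scaling the
# shock by V(.) makes k-step-ahead (k > 1) distributions non-Gaussian.
assert len(path) == 501
assert all(vulnerability(s) >= 0.0 for s in path)
```

In the paper this role is played by the full four-variable solution (and, as a cross-check, by Dynare's higher-order perturbation); the sketch simply writes out the "first approach" by hand.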

4.2. Data. We use the log-difference between real GDP and the Congressional Budget Office's estimate of potential output as a measure of the output gap. We use annual core personal consumption expenditures (PCE) inflation and the National Financial Conditions Index (NFCI) compiled by the Federal Reserve Bank of Chicago. The NFCI aggregates 105 financial market, money market, credit supply, and shadow bank indicators to compute a single index using the filtering methodology of Stock and Watson (1998). The NFCI data start in 1973, and our estimation period is 1973 to 2017.

4.3. Parameter Estimation. Parameters of the NKV model can be split into two groups: the ones shared with the three-equation New Keynesian workhorse, and additional coefficients related to the evolution of financial conditions ηt and vulnerability V(X). We now discuss these in turn.

4.3.1. NK coefficients. For the first group of coefficients, common to both the NK and NKV models, we simply adopt the values proposed in Chapter 3 of the Galí (2008) textbook (and reproduced in Table 1 for completeness). We also retain all structural parameter relationships, i.e., we have,

$\omega \equiv \frac{1-\alpha}{1-\alpha+\alpha\varepsilon}, \qquad \lambda \equiv \frac{(1-\theta)(1-\beta\theta)}{\theta}\,\omega, \qquad \kappa \equiv \lambda \left( \sigma + \frac{\phi+\alpha}{1-\alpha} \right) .$
Table 1.

New Keynesian Parameter Values


While this way of proceeding severely limits the degrees of freedom we have in specifying the model, it also ensures that the NKV nests the NK model as a special case.21 This is important as it helps clarify that the differences in model properties that we document subsequently are not driven by different assumptions about the “standard” NK building blocks, but instead are due to the changes in propagation and amplification arising on account of endogenous risk.

4.3.2. Additional coefficients. We follow a procedure similar in spirit to that in King and Rebelo (1999) and find parameters that help our model match key moments of the data.

As shown in Table 3, we focus on four moments: the autocorrelations of first differences of the output gap and of its conditional mean, as well as their correlations with financial conditions.

Table 2.

Additional, Non-NK Parameter Values

Table 3.

Fit to Targeted Moments


Table 2 shows the values of the estimated parameters. We find these values by minimizing the unweighted sum of squared deviations of the ergodic moments from their estimated empirical targets. The signs and magnitudes of all the estimated parameters confirm our narrative and are compatible with the microfoundations we propose. They confirm the dependence of the output gap on financial conditions ($\gamma_\eta \neq 0$) and of financial conditions on the current and expected realizations of the output gap ($\theta_y \neq 0$, $\theta_\eta \neq 0$). In addition, the autoregressive part of the financial conditions process evaluated at the estimated parameter values can be written as

$\eta_t \approx 0.96\, \eta_{t-1} + 1.01\, \Delta\eta_{t-1}$

indicating that the underlying process is both persistent (it will display cyclical dynamics) and sensitive to its past changes, with the latter feeding almost one-for-one into current FCI values.22
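The equivalence between the change form above and an AR(2) in levels is simple arithmetic, but worth making explicit, since the level-form coefficients (1.97 and $-1.01$) reappear in the stability discussion of Section 5.2:

```python
# eta_t = 0.96*eta_{t-1} + 1.01*(eta_{t-1} - eta_{t-2})  (change form)
#       = 1.97*eta_{t-1} - 1.01*eta_{t-2}                (AR(2) in levels)
def eta_change_form(e1, e2):
    return 0.96 * e1 + 1.01 * (e1 - e2)

def eta_ar2_form(e1, e2):
    return 1.97 * e1 - 1.01 * e2

# The two forms agree for arbitrary lagged values (up to float roundoff).
for e1, e2 in [(0.3, -0.1), (1.0, 1.0), (-0.5, 0.25)]:
    assert abs(eta_change_form(e1, e2) - eta_ar2_form(e1, e2)) < 1e-12
```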

The values of the moments implied by the estimated parameters are shown in the second row of Table 3. As a relevant benchmark, we also show in the third line of the table the moments obtained by fitting a vector auto-regression (VAR) of the form:

$\eta_t = a_{\eta\eta}\, \eta_{t-1} + a_{\eta\eta_{-1}}\, \eta_{t-2} + a_{\eta\varepsilon}\, \varepsilon_t^{ygap}$
$y_t^{gap} = a_{y\eta}\, \eta_{t-1} + a_{y\eta_{-1}}\, \eta_{t-2} + a_{y\varepsilon}\, \varepsilon_t^{ygap} .$

This VAR corresponds to the laws of motion for $\eta_t$ and the output gap $y_t^{gap}$ in the NKV model after the model has been solved. The coefficients $a_\eta \equiv [a_{\eta\eta}, a_{\eta\eta_{-1}}, a_{\eta\varepsilon}]$ and $a_y \equiv [a_{y\eta}, a_{y\eta_{-1}}, a_{y\varepsilon}]$ are functions of the underlying structural parameters. The structural relationships in Equations (10)–(13) place complicated, non-linear restrictions on the coefficients of the $a_\eta$ and $a_y$ vectors. The VAR, on the other hand, does not impose any of these restrictions. Hence, the fit of the VAR will always be at least as good as the fit in the second row of Table 3. Comparing the three rows of the table, we see that the non-linear restrictions imposed by the NKV model do meaningfully discipline the model, yet the model-implied moments remain close to their unconstrained equivalents and to the data.

In a similar fashion, we calibrate V (·) so that the NKV model delivers two empirical features displayed in Panel a of Figure 1:23 i) the conditional median and volatility of output gap growth correlate negatively, and ii) the negative bivariate relationship is characterized by a high R2 (0.91). The combination of these facts is important because it ensures the skewed dynamics of output gap growth quantiles discussed in Adrian, Duarte, Liang, and Zabczyk (2020), and also generates a smooth conditional 95th quantile, and a more volatile conditional 5th quantile.24 In other words, a model that delivers (i) and (ii) immediately generates a conditional output gap distribution that replicates its empirical counterpart.

Figure 1.

Output Gap Growth Conditional Median and Volatility

Citation: IMF Working Papers 2020, 236; 10.5089/9781513561066.001.A001

Note: Panel (a) shows estimates of the conditional median and conditional volatility of output gap growth one quarter ahead. Panel (b) shows the conditional median and volatility simulated from the NKV model.

To have the 95th quantile constant, the vulnerability function would need to satisfy

$V(X_t) = \frac{Q_{95}}{\Phi^{-1}(0.95)\, b_{11}\, \sigma_{ygap}} - \frac{A_X(1)\, X_t}{\Phi^{-1}(0.95)\, b_{11}\, \sigma_{ygap}}$ (15)

where we have exploited the fact that the elements of A(1) corresponding to non-state variables will equal zero, and where AX(1) denotes the truncation of A(1) to elements of the state vector. In words, under this specification, the (one step ahead) conditional output gap growth distribution is normal and fully described by conditional first and second moments that vary systematically with the state variables in a fashion which gives rise to the negative correlation documented in Figure 1.

As Equation 15 makes clear, the precise specification required to ensure a constant 95th conditional quantile, and hence also a conditional mean–volatility relationship in line with Panel a of Figure 1, would imply that for some realizations of $X_t$, $V(X_t)$ would need to become negative. To rule out that possibility, which is inconsistent with $V(\cdot)$'s interpretation as a conditional volatility, we modify the formula in Equation 15 and specify vulnerability as25

$V(X_t) \equiv \max\{\nu - \varrho X_t,\, 0\}$ (16)

where

$\nu \equiv \frac{Q_{95}}{\Phi^{-1}(0.95)\, b_{11}\, \sigma_{ygap}} \qquad \text{and} \qquad \varrho \equiv \frac{A_X(1)}{\Phi^{-1}(0.95)\, b_{11}\, \sigma_{ygap}} .$

As shown in Panel b of Figure 1 the resulting specification closely replicates both the negative slope and high R2 of the bivariate regression estimated on conditional moments and reproduced in Panel a.26
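To make the construction in Equations (15)–(16) concrete, the following sketch uses made-up scalar values for $Q_{95}$, $b_{11}$, $\sigma_{ygap}$, and $A_X(1)$ (none are the estimated quantities) and checks the underlying identity: whenever the max in Equation (16) does not bind, the implied one-step-ahead 95th quantile, mean plus $\Phi^{-1}(0.95)\, b_{11}\, \sigma_{ygap}\, V(X_t)$, is exactly the constant $Q_{95}$.

```python
from statistics import NormalDist

z95 = NormalDist().inv_cdf(0.95)   # Phi^{-1}(0.95), roughly 1.645

# Illustrative (NOT estimated) values; scalar state X, so A_X(1) is a scalar.
Q95, b11, sigma, ax1 = 2.0, 1.0, 0.5, 0.8

nu = Q95 / (z95 * b11 * sigma)     # intercept in Equation (16)
rho = ax1 / (z95 * b11 * sigma)    # slope in Equation (16)

def vulnerability(x):
    return max(nu - rho * x, 0.0)  # V(X_t), truncated at zero

def q95_one_step(x):
    # One-step-ahead 95th quantile of a Gaussian with mean ax1*x and
    # standard deviation b11 * sigma * V(x).
    return ax1 * x + z95 * b11 * sigma * vulnerability(x)

# Wherever V(.) is interior (max not binding), the quantile is exactly Q95;
# once truncation binds, the quantile tracks the conditional mean instead.
for x in [-1.0, 0.0, 1.0, 2.0]:
    assert nu - rho * x > 0
    assert abs(q95_one_step(x) - Q95) < 1e-9
```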

We conclude by dealing with a potential criticism: one could reasonably conjecture that our reliance on the vulnerability function solves an issue that does not arise in more “quantitatively oriented” setups. Figure 2 illustrates why we do not believe that to be the case. First, Panel (a) uses a linear NK model to reiterate the point that any linear or linearized model, irrespective of how “quantitatively relevant” it may be, cannot generate a trade-off, because any such model implies a constant conditional volatility. Turning to non-linear specifications spanning the NK model (Panel b), the Bernanke, Gertler, and Gilchrist (1999) model (Panel c), and the Gertler and Karadi (2011) model (Panel d), we see that popular non-linear models generate either a wrongly signed or a much weaker relationship between the conditional moments (the R2 in the data for output growth exceeds 0.5).27

Figure 2.

Output Gap Growth Conditional Median and Volatility in Selected DSGE Models


Note: Panels (a)–(d) depict scatter plots of the conditional median (y-axis) and conditional volatility (x-axis) of output growth one quarter ahead in four different models. The dashed grey line shows the slope of the estimated bivariate regression in the data (it is the same in all four panels), while the solid grey line shows the corresponding bivariate regression estimated on data simulated from the models (on a sample of identical length). The specification of the solid grey line, including the corresponding R2, is provided at the bottom of each figure. Since all model solutions were approximated using second-order perturbation, which does not allow for analytical expressions for moments, conditional moments were estimated by averaging over 100k simulations, relying on finite-sample approximations to the law of large numbers. The magnitude of the corresponding sampling errors can be inferred from Panel (a), where the model is linear and so the true underlying conditional volatilities are constant and lie on a vertical line.

4.4. The Volatility Paradox. An important feature of the data – and the NKV model – is the volatility paradox (Brunnermeier and Sannikov, 2014), which refers to the observation that future risk builds during good times, when contemporaneous risk is low and growth is high. When η is low, indicating loose financial conditions, volatility is low in the short term. But this effect eventually reverts because risk-taking increases during good times, and the economy becomes more vulnerable to shocks as risks continue to build. This is shown in Figure 3, which depicts the elasticity of the conditional mean and conditional volatility of the output gap to η at projection horizons of up to 20 quarters.

Figure 3.

The Volatility Paradox


Note: Elasticity of the conditional output gap median and volatility with respect to changes in η. Panel (a) shows estimates of the elasticity, while Panel (b) shows estimates based on data simulated from the NKV model.

While the elasticity of the conditional volatility to financial conditions is negative in the near term and becomes positive as the projection horizon lengthens, the elasticity of the conditional mean starts positive and then, after around 10 quarters, falls and becomes negative. Even though the extent of the changes in conditional volatility implied by the NKV falls somewhat short of those in the data, we reiterate that the blue line would be exactly flat in any linear (or linearized) model, and so the NKV arguably outperforms those. This was, however, one of the main empirical tensions when specifying the NKV: while we succeeded in identifying specifications in which movements in conditional volatility were more pronounced, these typically implied excessive degrees of unconditional volatility and counterfactually volatile simulations. Investigating whether more quantitatively oriented, and possibly more complex, specifications can resolve these tensions would, in our view, constitute an interesting extension of our work.28

4.5. The Dynamics and Correlates of Financial Vulnerability. The estimation process described previously offered no guarantee that the resulting financial vulnerability function would be economically interpretable or linked to the price of risk, so this section takes both of these issues head on. To obtain a model-implied time series of vulnerability, we evaluated the specification in Equation 16 at the historical values of financial conditions and the output gap, with the result plotted as a solid red line in Figure 4. Since we motivated the NKV extension by highlighting the links between the price of risk and spreads, and by stressing the importance of the risk-taking channel of monetary policy, we compared our estimate of vulnerability to the term spread (Estrella and Mishkin, 1998) and the real federal funds rate, plotted as dashed blue lines in Panels a and b of Figure 4 respectively.

Figure 4.

Vulnerability, Spreads and the Risk-Taking Channel


Note: The term spread is the FRED series T10Y3M, the difference between the 10-Year Treasury Constant Maturity Yield and the 3-Month Treasury Constant Maturity Yield. Data for 1971Q1 to 1981Q4 are (minus) the termspr variable from the data set accompanying Gilchrist and Zakrajšek (2012) (the spread series is a quarterly average, expressed in annualized percentage points and not seasonally adjusted). The real federal funds rate (plotted on an inverse scale) is the effective federal funds rate (FEDFUNDS) minus 12-month core PCE inflation (PCEPILFE).

The charts show that vulnerability closely co-moves with both series, with the corresponding correlation coefficients equal to 0.53 and -0.47 respectively. In other words, according to our model, contemporaneous vulnerability tends to be low when the term spread is compressed or when the real effective interest rate is high. As highlighted in Sections 4.4 and 3.3, there is more to the NKV than the contemporaneous correlation, however, with the model also suggesting that vulnerability tends to slowly build over time, i.e., periods of low contemporaneous vulnerability are also the ones in which policy makers ought to carefully consider future risks.29

4.6. Fan Charts. We conclude this section by assessing the degree of amplification built into the model, implicitly testing whether the specification of vulnerability implies unrealistic fan-chart dynamics. To that end, Figure 5 compares fan charts from a linear version of the model (in which vulnerability plays no role) to their equivalents generated using a second-order accurate approximation to the solution. In both cases the model is initialized in a synthetic “high vulnerability” state, defined as an average of the initial conditions in the top percentile of most vulnerable states from a long simulation.

Figure 5.

Fan Charts in a High Vulnerability State


Note: The purple fan charts are from a version of the NKV in which the non-linearity associated with vulnerability is effectively disabled by relying on a linear approximation to the solution. These are compared to fan charts generated using a second-order approximation to the solution (red lines), in which endogenous amplification is allowed to play a role. Both models were initialized in a ‘high vulnerability’ state, defined as one in which initial conditions were an average of those in the 10,000 states (top 1%) with the highest vulnerability from a 1,000,000-period simulation.

This synthetic state is characterized by low implied values of output gap growth and positive inflation. While the non-linear model is less sanguine about growth and inflation going forward, the shifts in the tails are on the order of 0.5pp for output gap growth and 0.2pp for inflation, with even smaller implications for financial conditions. Thus, if anything, the NKV appears to err on the side of caution. We have verified, however, that the impact of vulnerability is magnified in specifications in which the conditional volatility of output gap growth is more responsive to financial conditions, and so we conjecture that the fan chart implications would be more pronounced in extensions that can resolve the tension between conditional and unconditional volatility discussed previously.

4.7. Out of Sample Forecast Performance. Given the aforementioned usefulness of the price of risk in forecasting future growth and recessions, and the tight links in the NKV between the price of risk and financial conditions (documented in Section 3.3), we conclude this section by investigating the model’s forecasting performance. In that spirit, Figure 6 compares forecasts to actual outcomes, with Panels a, b, c and d focusing on financial conditions at horizons of one quarter to two years respectively, and Panels e and f focusing on the 1 and 2 quarter ahead horizons for output gap growth. In all these figures actual outcomes are plotted as a red solid line, while the NKV forecasts are captured by the blue dashed line, with the implied residual represented by the grey swathe.

Figure 6.

NKV Forecast Performance


Note: The output gap growth forecasts one quarter ahead depend on the realization of the output gap today, which is perfectly observed. To avoid giving the model an unfair advantage, we therefore plot the difference between the two (three) quarter ahead forecast of $y_{t+1}^{gap}$ and the one (two) quarter ahead forecast of $y_t^{gap}$ in Panels (e) and (f) respectively.

Figure 6 shows that, despite its parsimony, the NKV does a good job of forecasting financial conditions, providing independent validation of our specification in Equation 10. Unsurprisingly, the NKV performs somewhat less satisfactorily in the early part of the sample, which is likely to have been characterized by a looser conduct of monetary policy and oil price shocks, neither of which it accounts for. Furthermore, and as confirmed by Table 4, the forecasts tend to grow quite noisy, particularly as the forecast horizon becomes longer. Finally, we note that the model did a very good job of predicting the tightening of financial conditions associated with the global financial crisis one year before it occurred. According to the NKV, that spike in financial market tightness was predicted to be followed by an even larger spike, exceeding anything witnessed since the early 1970s. The fact that this never materialized could either be used to critique the model or as an implicit measure of the efficacy of stabilization policies, which our setup does not account for.

Table 4.

Correlation Between Out of Sample Predictions and Outcomes


Even though the NKV does not actually use any output gap or inflation data in generating the forecasts, relying purely on two lags of financial conditions, the model's output gap predictions correlate positively with outcomes, with Table 4 confirming that they slowly become noisier and less accurate as the horizon increases. The picture for inflation is far less benign, with negative correlations up until 7 quarters out and extremely volatile forecasts even one quarter ahead. We believe the latter may reflect the tension between generating realistic values of conditional and unconditional volatilities discussed previously. More generally, the poor forecast performance was to be expected: as we already noted, financial conditions do not help predict inflation, and past lags of inflation, which the NKV does not rely on, provide a much better guide to future outcomes.

To put the numbers in Table 4 in perspective, we note that the simple three equation NK model driven by i.i.d. shocks, which the NKV builds on, would have all the entries equal to zero.30 While this comparison is somewhat unfair, because the NKV does utilize two state variables in making its predictions, it suggests considerable benefits stemming from the joint modeling of real activity indicators and financial conditions. Investigating the relative value added by different state variables in a more quantitatively oriented version of the NKV would, we believe, make for an interesting extension of our work.

5. Monetary and Macroprudential Interactions

The NKV framework, with its focus on endogenous output risk, is well suited to analyze monetary and cyclical macroprudential policies simultaneously, as it is empirically relevant and yet remains conveniently tractable. Accordingly, having validated the NKV, we now use it to study the monetary and macroprudential implications of accounting for financial vulnerabilities.

We expand the NKV model to study the joint determination of monetary and macroprudential policy with a hypothetical policy instrument that impacts the level of financial conditions η: tighter macroprudential policy is assumed to increase the price of risk and, via the financial accelerator effect, to also impact output growth. More specifically, we assume that a state-contingent macroprudential tool µt is capable of affecting contemporaneous financial conditions, that is, that

$\eta_t = \mu_t + \lambda_\eta \eta_{t-1} + \lambda_{\eta\eta} \eta_{t-2} - \theta_y y_t^{gap} - \theta_\eta \mathbb{E}_t y_{t+1}^{gap} .$

In the remainder of this section we clarify when the use of cyclical macroprudential tools can mitigate downside risks to GDP, and we also show that aggressive monetary policy in the absence of suitable macroprudential “cushioning” can lead to real and financial instability. The following section then discusses how the conduct of monetary policy can be modified to reap most of the stabilization benefits when suitable macroprudential tools are not available.

5.1. The Separation Principle. We first illustrate the possibility that a combination of macroprudential policy and monetary policy achieves full stabilization. To that effect, we posit that macroprudential policy satisfies,

$\mu_t = \nu_\eta \eta_{t-1} + \nu_{\eta\eta} \eta_{t-2} ,$

which immediately implies that the semi-structural specification of the relationship linking real and financial conditions becomes

$\eta_t = (\lambda_\eta + \nu_\eta)\, \eta_{t-1} + (\lambda_{\eta\eta} + \nu_{\eta\eta})\, \eta_{t-2} - \theta_y y_t^{gap} - \theta_\eta \mathbb{E}_t y_{t+1}^{gap} .$

Figure 7 illustrates that if the policy coefficients νη and νηη are chosen such that the process

$\eta_t = (\lambda_\eta + \nu_\eta)\, \eta_{t-1} + (\lambda_{\eta\eta} + \nu_{\eta\eta})\, \eta_{t-2}$
Figure 7.

IRFs for Increasingly “Activist” Monetary Policy Rules under a Stabilizing Macroprudential Policy Affecting η


Note: Each progressively brighter line corresponds to a doubling of the baseline Taylor rule coefficients on inflation (1.5) and the output gap (0.125). Since ten rules are compared (aside from the baseline), the coefficients of the most “aggressive” rule equal $2^{10} \times [1.5, 0.125] = [1536, 128]$ (on inflation and the output gap respectively).

is stable, then increasingly aggressive monetary policy can achieve outcomes arbitrarily close to full stabilization. This, naturally, would be a desirable outcome.
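The stability requirement on the policy-adjusted process can be checked directly from its characteristic roots. In the sketch below, $\lambda_\eta = 1.97$ and $\lambda_{\eta\eta} = -1.01$ are the level-form AR coefficients implied by the estimates in Section 4.3, while the policy coefficients $\nu_\eta$ and $\nu_{\eta\eta}$ are purely illustrative choices that move the combined coefficients to $(0.9, -0.2)$:

```python
import cmath

def ar2_root_moduli(phi1, phi2):
    # Roots of z**2 - phi1*z - phi2 = 0, the characteristic polynomial of
    # eta_t = phi1*eta_{t-1} + phi2*eta_{t-2}; stable iff both moduli < 1.
    disc = cmath.sqrt(phi1 ** 2 + 4 * phi2)
    return sorted(abs((phi1 + s * disc) / 2) for s in (+1, -1))

lam_eta, lam_etaeta = 1.97, -1.01   # level-form AR coefficients (Section 4.3)

# Without macroprudential policy the eta process is mildly explosive:
# complex roots with modulus sqrt(1.01) > 1.
assert max(ar2_root_moduli(lam_eta, lam_etaeta)) > 1

# Illustrative policy coefficients that shift the combined AR coefficients
# to (0.9, -0.2), whose roots are 0.4 and 0.5 -- comfortably stable.
nu_eta, nu_etaeta = 0.9 - lam_eta, -0.2 - lam_etaeta
assert max(ar2_root_moduli(lam_eta + nu_eta, lam_etaeta + nu_etaeta)) < 1
```

Any $(\nu_\eta, \nu_{\eta\eta})$ passing this root check would serve equally well; the separation principle only requires that the adjusted process be stable.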

Importantly, even if macroprudential policy only affected financial conditions with a lag, for example, in the following fashion:

$\eta_t = \mu_{t-1} + \lambda_\eta \eta_{t-1} + \lambda_{\eta\eta} \eta_{t-2} - \theta_y y_t^{gap} - \theta_\eta \mathbb{E}_t y_{t+1}^{gap} ,$

then a specification in which

$\mu_t = \nu_\eta \eta_t + \nu_{\eta\eta} \eta_{t-1}$

would still make it possible for monetary policy to fully stabilize the economy. More generally, any macroprudential rule, for which

$\mu_t + \lambda_\eta \eta_{t-1} + \lambda_{\eta\eta} \eta_{t-2}$

is a stable linear process would allow this to hold. Of course, the stability properties of an AR(k) process depend on its k-th order characteristic polynomial. So, in principle, systematically affecting financial conditions at any lag may create conditions under which a strict separation of monetary and macroprudential policies leads to efficient outcomes.

Moving away from the confines of the model, our results highlight that real-world macroprudential policy would need to ensure that financial conditions remain stable even during periods such as the Great Moderation, when the temptation may be to increase risk exposures and hope for stability to persist. If this prerequisite is not satisfied, then the buildup in vulnerabilities, proxied by our V(·) function, could mean that a small shock is all it takes to set off an intrinsically unstable spiral of events (one that only policies much richer than those accounted for in our model could be capable of stabilizing). We now illustrate that possibility.

5.2. Minsky Redux. Given that our setup nests the three-equation NK model, it is perhaps most natural to first consider whether standard NK policy prescriptions carry over in the absence of stabilizing macroprudential interventions. To that effect we note that, by construction, our model is one in which the “divine coincidence” holds: the only shock is isomorphic to a demand shock and directly affects only the dynamic IS curve. As such, it would seem natural to expect that optimal policy would entail full stabilization of both inflation and the output gap. We also know from the standard NK model that while a Taylor rule does not fully stabilize the economy, it can approximate that outcome arbitrarily well (Galí, 2008, p. 114): as the weights on inflation or the output gap increase, the demand shock would have less and less of an impact (as illustrated in Panel (a) of Figure 8).31 Since the case of a standard Taylor rule forms our benchmark, we ask whether an increasingly “activist” monetary policy rule would also deliver full stabilization in our proposed NKV setup.

Figure 8.

IRFs under Increasingly “Activist” Monetary Policy Rules


Note: Each progressively brighter line corresponds to a doubling of the baseline Taylor rule coefficients on inflation (1.5) and the output gap (0.125). Since ten rules are compared (aside from the baseline), the coefficients of the most “aggressive” rule equal $2^{10} \times [1.5, 0.125] = [1536, 128]$ (on inflation and the output gap respectively). Missing lines in the RHS panels correspond to instability on account of violations of the Blanchard-Kahn conditions, which occurs whenever the coefficients on inflation and the output gap increase by more than 2.1 times (see also Footnote 32 for a discussion).

There are good reasons to expect such a result to hold. First, if monetary policy was able to achieve full stabilization, then both the level and the expectation of the output gap would equal their respective steady state of zero. Accordingly, in such circumstances, the process for financial conditions ηt would approximately reduce to

$\eta_t \approx \lambda_\eta \eta_{t-1} + \lambda_{\eta\eta} \eta_{t-2} .$

This shows that under full output gap stabilization, ηt would only depend on its own lags. It follows that if we initialized the system in its steady state, then financial conditions would stay in that steady state forever. As a consequence, vulnerability would also be constant, because

$V(X_t) \equiv \max\{\nu - \varrho X_t, 0\} \;\Longrightarrow\; \lim_{X_t \to 0} V(X_t) = \nu .$

With constant vulnerability, our model would become linear and homoskedastic, and in that case, we know that an aggressive Taylor rule can deliver full output gap stabilization. As such, it would seem that even though the NKV model is nonlinear (and hence the problem of optimal policy under discretion is no longer tractably linear-quadratic), a sufficiently aggressive monetary policy rule should be able to achieve full stabilization.

While the argument is intuitively compelling, Panel (b) of Figure 8 demonstrates that it fails to apply to the NKV model. Taylor rule coefficients cannot be increased without bound: once they get too large, the model becomes explosive, which accounts for the missing impulse responses in Figure 8 (b).32

The underlying story has a theme familiar from Minsky (1992): too much stability is capable of breeding instability. As we show below, the fact that monetary policy is fixated on inflation and output gap volatility implies that when the corresponding Taylor rule weights are increased, financial conditions will become unstable. Since financial conditions directly affect the real economy, our model highlights the possibility that a period of low volatility, such as the Great Moderation, may result in a build-up of vulnerability and undesirable real outcomes subsequently (as in Bernanke, 2012).

To illustrate what exactly is happening, consider again the process for financial conditions

$\eta_t = \lambda_\eta \eta_{t-1} + \lambda_{\eta\eta} \eta_{t-2} - \theta_y y_t^{gap} - \theta_\eta \mathbb{E}_t y_{t+1}^{gap} .$

This specification comprises backward-looking autoregressive components along with forward-looking endogenous variables, namely the contemporaneous and expected levels of the output gap ($y_t^{gap}$ and $\mathbb{E}_t y_{t+1}^{gap}$ respectively). In equilibrium this specification, combined with all the other market clearing and optimality conditions, gives rise to a “solved” specification for $\eta_t$ of the following form33

$\eta_t = a_{\eta\eta}\, \eta_{t-1} + a_{\eta\eta_{-1}}\, \eta_{t-2} + a_{\eta\varepsilon}\, \varepsilon_t^{ygap} .$

Crucially, and as discussed in Section 4.1, the coefficients $a_{\eta\eta}$ and $a_{\eta\eta_{-1}}$ (elements of the solution matrix A) will differ from the $\lambda_\eta$ and $\lambda_{\eta\eta}$ in the semi-structural form. It is also the case that the equilibrium $a_{\eta\eta}$ and $a_{\eta\eta_{-1}}$ corresponding to our baseline specification imply a stable AR(2) process. In other words, agents' expectations of the output gap and of output gap volatility, along with the lag structure built into our process for financial conditions, imply a stable process for $\eta_t$.

We can now consider what happens when monetary policy becomes increasingly aggressive in targeting inflation and the output gap. As argued above, if both $y_t^{gap}$ and $\mathbb{E}_t y_{t+1}^{gap}$ converge to zero almost surely, then the coefficients of their equilibrium laws of motion, that is,

$y_t^{gap} = a_{y\eta}\, \eta_{t-1} + a_{y\eta_{-1}}\, \eta_{t-2} + a_{y\varepsilon}\, \varepsilon_t^{ygap}$
$\mathbb{E}_t y_{t+1}^{gap} = a_{y^{+}\eta}\, \eta_{t-1} + a_{y^{+}\eta_{-1}}\, \eta_{t-2} + a_{y^{+}\varepsilon}\, \varepsilon_t^{ygap}$

also have to converge to zero (i.e., for each $x \in \{\eta, \eta_{-1}, \varepsilon\}$ we would have $\lim_{y_t^{gap} \overset{a.s.}{\to} 0} a_{yx} = \lim_{y_t^{gap} \overset{a.s.}{\to} 0} a_{y^{+}x} = 0$). It follows that in this particular situation, because the impact of the endogenous components vanishes, the coefficients $a_{\eta\eta}$ and $a_{\eta\eta_{-1}}$ actually converge to their semi-structural counterparts:

$\lim_{y_t^{gap} \overset{a.s.}{\to} 0} a_{\eta\eta} = \lambda_\eta \qquad \text{and} \qquad \lim_{y_t^{gap} \overset{a.s.}{\to} 0} a_{\eta\eta_{-1}} = \lambda_{\eta\eta} .$

As a consequence, if

$\eta_t = \lambda_\eta \eta_{t-1} + \lambda_{\eta\eta} \eta_{t-2}$

happens to be an unstable process, then eliminating output gap volatility, somewhat paradoxically, pushes the equilibrium specification from $\eta_t = a_{\eta\eta}\eta_{t-1} + a_{\eta\eta_{-1}}\eta_{t-2} + a_{\eta\varepsilon}\varepsilon_t^{ygap}$, which was stable, toward $\eta_t = \lambda_\eta \eta_{t-1} + \lambda_{\eta\eta}\eta_{t-2} + \tilde{\gamma}_\varepsilon \varepsilon_t^{ygap}$, which is not!

This is precisely what happens in the NKV model, in which the AR coefficients in the semi-structural specification $\eta_t = 1.97\,\eta_{t-1} - 1.01\,\eta_{t-2}$ are unstable, but the reduced-form process corresponding to a standard Taylor rule ends up with different, stable coefficients. This is also why monetary policy that ends up being too successful in stabilizing the output gap runs the risk of destabilizing financial conditions and, ultimately, the entire economy.34
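The instability of the quoted semi-structural coefficients is easy to verify numerically. The sketch below treats $\eta_t = 1.97\,\eta_{t-1} - 1.01\,\eta_{t-2}$ as a standalone AR(2) process and checks the moduli of the roots of its characteristic polynomial; only the two coefficients come from the text, everything else is a generic stability check:

```python
import cmath

def ar2_root_moduli(a1, a2):
    """Moduli of the roots of z^2 - a1*z - a2 = 0, the characteristic
    polynomial of the AR(2) process eta_t = a1*eta_{t-1} + a2*eta_{t-2}.
    The process is stable iff both moduli are strictly below one."""
    disc = cmath.sqrt(a1 * a1 + 4.0 * a2)
    return sorted(abs((a1 + s * disc) / 2.0) for s in (+1, -1))

# Semi-structural coefficients quoted in the text: a complex-conjugate
# pair of roots with modulus about 1.005, just outside the unit circle.
print(ar2_root_moduli(1.97, -1.01))
```

The roots form a complex-conjugate pair with common modulus $\sqrt{1.01}\approx 1.005$, so the process is mildly explosive, consistent with the text's claim that the semi-structural form is unstable while the reduced form under a standard Taylor rule is not.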

Our model thus points to a possibility, absent from the standard NK setup, that having a central bank focused solely on eliminating inflation and output gap volatility may be suboptimal. By ignoring the endogenous nature of financial conditions, the central bank risks inadvertently making them unstable. While this outcome is not inevitable in our setup (indeed, the macroprudential section highlights when full stabilization may be possible), we believe this eventuality is important enough to highlight and consider seriously. Alternatively, monetary policy can be made to depend explicitly on expected financial conditions, which, as we show and explain in Section 6, can also improve upon the outcome associated with a standard Taylor rule.

6. Efficient Monetary Stabilization sans Macroprudential Tools

We now highlight the potential benefits of using the NKV to study second-best policies, i.e., situations in which the policymaker's mandate specifies more objectives than tools. In particular, we study the problem of a central bank attempting to eliminate underlying inefficiencies when suitable macroprudential tools are not available.

To that end, we compare responses under a standard Taylor (1993) rule (solid line in Figure 9) to responses under an alternative "expanded" Taylor rule, inspired by the "optimal" rule of Adrian and Duarte (2018), in which interest rates additionally depend on the expected price of risk (dashed line in Figure 9):

$$i_t = \phi_\pi \pi_t + \phi_y y_t^{gap} - \phi_\eta E_t \eta_{t+1} \qquad (17)$$

where $\phi_\eta$ is set equal to 0.1.

Figure 9. Alternative Policy Paths with Endogenous Risk

Citation: IMF Working Papers 2020, 236; 10.5089/9781513561066.001.A001

For the impulse responses depicted in Figure 9, we initialize the model by setting initial conditions based on economy-specific $\eta$ volatilities. For example, the solid lines depict responses in a model in which $\eta$ is set to one standard deviation below its steady state, where the standard deviation is computed under a standard Taylor rule; that is, $\eta_0 = \eta_{-1} = -\sigma_\eta$. In this case, loose financial conditions are associated with a positive output gap and higher inflation (top two panels), leading the central bank to tighten rates by just over 25bps (bottom left panel). This eventually results in falls in inflation and the output gap, and a gradual tightening of financial conditions. Crucially, under the standard Taylor rule, financial conditions "overshoot," leading to elevated vulnerability after the ninth quarter. The higher vulnerability is consistent with evidence that the observed amplitude of financial cycles has increased since the 1980s, when monetary policy became more focused on price stability and financial regulations were easing (Drehmann, Borio, and Tsatsaronis, 2012).

Under the extended Taylor rule of Equation (17), the volatility of $\eta$ is much smaller than under the standard Taylor rule because policymakers additionally account for fluctuations in financial conditions. That is, the extended Taylor rule effectively eliminates periods of very tight (and very loose) financial conditions, whereas the standard rule does not. The figure illustrates that when $\eta$ is set to one standard deviation below its steady state under the extended Taylor rule, the responses are much less volatile. That is, when responses based on "economy-specific" $\eta$ volatilities are compared, the extended Taylor rule delivers markedly lower volatility for all the variables of interest.

Figure 10 compares the ergodic distributions of $\eta$, showing that extreme realizations are much less likely under the extended Taylor rule. This also translates into less output gap volatility, as shown in Panel (a) of Figure 11.35 In particular, under the extended Taylor rule, outcomes closer to the mean are more likely, precisely because the vulnerability-conscious approach is more effective at eliminating states of high output gap volatility. Because risk-averse agents prefer less output gap volatility, the evidence in Figure 11 suggests an additional reason why they might prefer the extended rule over the standard one. Of course, if a volatile output gap were associated with additional inefficiencies, as is the case in the standard NK model, that would only strengthen the case for the extended Taylor rule of Equation (17).
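The mechanism behind the narrower ergodic distribution can be illustrated without re-solving the model: two stable AR(2) processes for $\eta$, one more persistent (standing in for the reduced form under the standard rule) and one less persistent (standing in for the extended rule), deliver very different ergodic spreads. All coefficients below are illustrative stand-ins, not the calibrated solution:

```python
import random
import statistics

def simulate_ar2_std(a1, a2, n=50_000, seed=7, burn=500):
    """Ergodic standard deviation of eta_t = a1*eta_{t-1} + a2*eta_{t-2} + e_t,
    e_t ~ N(0,1), estimated by simulation after a burn-in period."""
    rng = random.Random(seed)
    eta_prev, eta_prev2 = 0.0, 0.0
    path = []
    for t in range(n + burn):
        eta = a1 * eta_prev + a2 * eta_prev2 + rng.gauss(0.0, 1.0)
        eta_prev2, eta_prev = eta_prev, eta
        if t >= burn:
            path.append(eta)
    return statistics.pstdev(path)

# Illustrative coefficient pairs (both stable): higher persistence produces
# a much wider ergodic distribution of financial conditions.
std_standard = simulate_ar2_std(1.5, -0.55)   # stand-in for the standard rule
std_extended = simulate_ar2_std(0.8, -0.15)   # stand-in for the extended rule
print(std_standard > std_extended)  # True
```

The exercise only makes the distributional point of Figure 10 concrete; the paper's actual comparison comes from the solved NKV model under the two rules.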

Figure 10. Ergodic Distribution of Financial Conditions

Figure 11. Ergodic Output Gap Distribution

The apparent effectiveness of monetary policy conditioned on $\eta$ raises two closely related questions: (i) does it mitigate, or perhaps even eliminate, the risk of instability due to the Blanchard-Kahn violations discussed in Section 5.2? and (ii) are there circumstances in which it is preferred to the combination of standard monetary and macroprudential policies analyzed in Section 5.1?

The answer to the first question is negative. The inherent instability is due to the specification of the process for financial conditions $\eta$: as in Section 5, once monetary policy engineers an output gap which is "too" stable, financial conditions become explosive, eliminating all stable equilibria.36

The answer to the second question follows from observing that the extended Taylor rule does not fully eliminate inflation and output gap volatility even when the largest stable coefficients are used. In contrast, a combination of aggressive monetary policy augmented by macroprudential policy can approximate full stabilization arbitrarily closely. As such, the latter combination of policies would always constitute the preferred choice. Interestingly, our results chime with the empirical findings of Brandao-Marques, Gelos, Narita, and Nier (2020), which suggest that macroprudential policies are effective in dampening downside risks to growth stemming from the build-up of financial vulnerabilities, and that the trade-off for monetary policy acting alone is considerably worse.

One question that has often been posed in related contexts is whether the NKV would have called for more aggressive monetary policy in the aftermath of the GFC and, if so, how much more aggressive policy should have been. Taking as a guide the policy rule featuring financial conditions, which we have shown successfully stabilizes $\eta_t$, combining the fact that the coefficient on expected financial conditions is -0.1 with the observation that our FCI series has historically taken values between -4 and 1.4 suggests that policy should have been, at most, around 160bp tighter on an annual basis; i.e., the quantitative differences would appear moderate.
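The back-of-the-envelope bound can be reproduced directly. The coefficient and the historical FCI range are taken from the text; the annualization by four quarters is our reading of "on an annual basis":

```python
# Policy-rule coefficient on expected financial conditions (from the text).
phi_eta = 0.1
# Historical range of the FCI series (from the text).
fci_min, fci_max = -4.0, 1.4

# Largest absolute deviation of financial conditions from zero.
max_dev = max(abs(fci_min), abs(fci_max))   # 4.0
# Maximal quarterly rate adjustment implied by the eta term, in pp.
quarterly_pp = phi_eta * max_dev            # 0.4 pp
# Annualized, in basis points.
annual_bp = quarterly_pp * 4 * 100
print(annual_bp)  # 160.0
```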

Crucially, however, such back-of-the-envelope comparisons ignore the fact that systematic changes in the conduct of policy would have affected the trajectories of the endogenous variables appearing in a Taylor rule (i.e., the Lucas, 1976, critique) and could, indeed, have eliminated the GFC altogether. Furthermore, in an actual monetary policy setting, any decision would require policymakers' judgment, given the lack of precision in measuring the output gap in real time. Moreover, any outcome would also reflect the ability of policymakers to communicate objectives clearly and credibly commit to implementing them (which was implicitly assumed in all the exercises considered here).

Our key takeaway is that policy decisions, whether intentionally or not, affect the price of risk and so can have a marked impact on the dynamics of inflation and the output gap. We argue that in such an environment, monetary policy should aim to curb vulnerability and the excessive volatility of the output gap associated with it.

7. Summary and Conclusions

In this paper, we presented an expanded New Keynesian model of aggregate macroeconomic fluctuations that includes financial conditions and that can match the conditional distributions of the output gap and inflation. Our NKV model tightly links the price of risk, defined as the conditional volatility of the stochastic discount factor, to the evolution of financial conditions, giving rise to a “vulnerability channel” of monetary policy. In the model, higher vulnerability is associated with greater amplification of output gap shocks, and the dependence of financial conditions on endogenous variables ensures that changes in policies can systematically affect their dynamics. The latter has profound implications for the optimal conduct of monetary and macroprudential policy.

To empirically validate our model, we match some stylized facts for the conditional distribution of the output gap, presented in Adrian, Boyarchenko, and Giannone (2019) and Adrian and Duarte (2018). Loose financial conditions are associated with a high mean and low volatility of the conditional distribution of the output gap at one- and four-quarter-ahead horizons. The conditional mean and volatility of output gap growth are negatively correlated contemporaneously, giving rise to left-skewed conditional and ergodic distributions. At the same time, loose financial conditions are not associated with higher expected inflation or inflation volatility. Finally, loose financial conditions, which are associated with low conditional volatility of output growth in the near term, are also associated with higher conditional volatility in the medium term, as presented in Adrian, Grinberg, Liang, and Malik (2018). That is, the term structure of lower quantiles of output gap growth (called Growth at Risk) is upward sloping when the initial price of risk is high, but downward sloping when the initial price of risk is compressed. Importantly, the term structures cross one another over the projection horizon, illustrating the future costs of an initially compressed price of risk, and the model also does a good job of forecasting the dynamics of financial conditions out of sample.

In our setup, monetary policy changes not only the future path of output and inflation but also the future path of vulnerability. Policymakers can ease monetary policy to reduce near-term downside risks to growth via the impact on risk-taking. But the near-term reduction comes at the cost of higher risks to growth in the medium term if the effects of monetary policy on vulnerability are ignored. To illustrate the resulting intertemporal tradeoff and to articulate the monetary policy implications of the additional amplification and propagation mechanisms, we compare the NKV model's predictions to those of the three-equation New Keynesian workhorse. In line with Curdia and Woodford (2010), we find that augmenting a monetary policy rule with expected financial conditions can mitigate inefficient fluctuations. In addition, the introduction of a cyclical macroprudential policy implemented as an offset to financial conditions, together with standard monetary policy, can deliver full stabilization of the output gap, inflation, and financial conditions.

However, while our model closely resembles an NK setup in which the absence of tradeoff-inducing shocks implies a "divine coincidence" (Blanchard and Galí, 2007), standard policy prescriptions, that is, attempting to fully stabilize inflation and the output gap, turn out to be problematic. In fact, our model could be considered a stylized and concise illustration of how the Great Moderation (Bernanke, 2012) and the Great Recession are connected: changes in the dynamics of the output gap have a direct impact on the equilibrium law of motion of financial conditions, with "too much" output-gap stability breeding financial-condition instability. Put differently, by not paying attention to the endogenous component of financial conditions, the central bank risks inadvertently making them unstable.

References

  • Adam, K., and S. Merkel (2019): "Stock Price Cycles and Business Cycles," CEPR Discussion Papers 13866, Centre for Economic Policy Research.
  • Adjemian, S., H. Bastani, M. Juillard, F. Karame, J. Maih, F. Mihoubi, G. Perendia, J. Pfeifer, M. Ratto, and S. Villemot (2011): "Dynare: Reference Manual Version 4," Dynare Working Papers 1, CEPREMAP.
  • Adrian, T., and N. Boyarchenko (2015): "Intermediary Leverage Cycles and Financial Stability," Federal Reserve Bank of New York Staff Reports, 567.
  • Adrian, T., N. Boyarchenko, and D. Giannone (2019): "Vulnerable Growth," American Economic Review, 109(4), 1263-89.
  • Adrian, T., and F. Duarte (2018): "Financial Vulnerability and Monetary Policy," CEPR Discussion Papers 12680, Centre for Economic Policy Research.
  • Adrian, T., F. Duarte, N. Liang, and P. Zabczyk (2020): "NKV: A New Keynesian Model with Vulnerability," AEA Papers and Proceedings, 110, 470-76.
  • Adrian, T., F. Grinberg, N. Liang, and S. Malik (2018): "The Term Structure of Growth-at-Risk," IMF Working Papers 18/180, International Monetary Fund.
  • Adrian, T., and H. S. Shin (2010): "Liquidity and Leverage," Journal of Financial Intermediation, 19(3), 418-437.
  • Adrian, T., and H. S. Shin (2014): "Procyclical Leverage and Value-at-Risk," Review of Financial Studies, 27(2), 373-403.
  • Altunbas, Y., L. Gambacorta, and D. Marques-Ibanez (2010): "Bank Risk and Monetary Policy," Journal of Financial Stability, 6(3), 121-129.
  • Andreasen, M. M., J. Fernandez-Villaverde, and J. F. Rubio-Ramirez (2018): "The Pruned State-Space System for Non-Linear DSGE Models: Theory and Empirical Applications," Review of Economic Studies, 85(1), 1-49.
  • Bernanke, B. S. (2012): "The Great Moderation," in The Taylor Rule and the Transformation of Monetary Policy, ed. by E. F. Koenig, R. Leeson, and G. A. Kahn, chap. 6. Hoover Institution, Stanford University.
  • Bernanke, B. S. (2018): "The Real Effects of Disrupted Credit: Evidence from the Global Financial Crisis," Brookings Papers on Economic Activity, 2018(2), 251-342.
  • Bernanke, B. S., and A. S. Blinder (1988): "Credit, Money, and Aggregate Demand," American Economic Review, 78(2), 435-439.
  • Bernanke, B. S., M. Gertler, and S. Gilchrist (1999): "The Financial Accelerator in a Quantitative Business Cycle Framework," in Handbook of Macroeconomics, ed. by J. B. Taylor and M. Woodford, vol. 1, chap. 21, pp. 1341-1393. Elsevier.
  • Blanchard, O., and J. Galí (2007): "Real Wage Rigidities and the New Keynesian Model," Journal of Money, Credit and Banking, 39(s1), 35-65.
  • Blanchet-Scalliet, C., and M. Jeanblanc (2020): "Enlargement of Filtration in Discrete Time," in From Probability to Finance: Lecture Notes of BICMR Summer School on Financial Mathematics, ed. by Y. Jiao, Mathematical Lectures from Peking University, chap. 2, pp. 71-144. Springer Singapore.
  • Bordalo, P., N. Gennaioli, S. Y. Kwon, and A. Shleifer (2020): "Diagnostic Bubbles," Journal of Financial Economics.
  • Bordalo, P., N. Gennaioli, R. La Porta, and A. Shleifer (2019): "Diagnostic Expectations and Stock Returns," Journal of Finance, 74(6), 2839-2874.
  • Bordalo, P., N. Gennaioli, and A. Shleifer (2018): "Diagnostic Expectations and Credit Cycles," Journal of Finance, 73(1), 199-227.
  • Bordalo, P., N. Gennaioli, A. Shleifer, and S. Terry (2019): "Real Credit Cycles," mimeo, Harvard University.
  • Brandao-Marques, L., G. Gelos, M. Narita, and E. Nier (2020): "Leaning Against the Wind: An Empirical Cost-Benefit Analysis," mimeo, International Monetary Fund.
  • Brunnermeier, M. K., and L. H. Pedersen (2009): "Market Liquidity and Funding Liquidity," Review of Financial Studies, 22(6), 2201-2238.
  • Brunnermeier, M. K., and Y. Sannikov (2014): "A Macroeconomic Model with a Financial Sector," American Economic Review, 104(2), 379-421.
  • Brunnermeier, M. K., and Y. Sannikov (2016): "The I Theory of Money," NBER Working Papers 22533, National Bureau of Economic Research.
  • Caballero, R. J., and A. Simsek (2019): "Prudential Monetary Policy," NBER Working Papers 25977, National Bureau of Economic Research.
  • Calvo, G. A. (1983): "Staggered Prices in a Utility-Maximizing Framework," Journal of Monetary Economics, 12(3), 383-398.
  • Campbell, J. Y., and J. Cochrane (1999): "Force of Habit: A Consumption-Based Explanation of Aggregate Stock Market Behavior," Journal of Political Economy, 107(2), 205-251.
  • Chodorow-Reich, G. (2014): "Effects of Unconventional Monetary Policy on Financial Institutions," NBER Working Papers 20230, National Bureau of Economic Research.
  • Christiano, L. J., R. Motto, and M. Rostagno (2014): "Risk Shocks," American Economic Review, 104(1), 27-65.
  • Clarida, R., J. Gali, and M. Gertler (1999): "The Science of Monetary Policy: A New Keynesian Perspective," Journal of Economic Literature, 37(4), 1661-1707.
  • Cogley, T., and J. M. Nason (1995): "Output Dynamics in Real-Business-Cycle Models," American Economic Review, 85(3), 492-511.
  • Curdia, V., and M. Woodford (2010): "Credit Spreads and Monetary Policy," Journal of Money, Credit and Banking, 42(s1), 3-35.
  • Curdia, V., and M. Woodford (2016): "Credit Frictions and Optimal Monetary Policy," Journal of Monetary Economics, 84, 30-65.
  • Dell'Ariccia, G., L. Laeven, and G. A. Suarez (2017): "Bank Leverage and Monetary Policy's Risk-Taking Channel: Evidence from the United States," Journal of Finance, 72(2), 613-654.
  • Drehmann, M., C. Borio, and K. Tsatsaronis (2012): "Characterising the Financial Cycle: Don't Lose Sight of the Medium Term!," BIS Working Papers 380, Bank for International Settlements.
  • Estrella, A., and F. S. Mishkin (1998): "Predicting US Recessions: Financial Variables as Leading Indicators," Review of Economics and Statistics, 80(1), 45-61.
  • Filardo, A., and P. Rungcharoenkitkul (2016): "A Quantitative Case for Leaning Against the Wind," BIS Working Papers 594, Bank for International Settlements.
  • Galí, J. (2008): Monetary Policy, Inflation, and the Business Cycle: An Introduction to the New Keynesian Framework and its Applications. Princeton University Press.
  • Gertler, M., and P. Karadi (2011): "A Model of Unconventional Monetary Policy," Journal of Monetary Economics, 58(1), 17-34.
  • Gertler, M., and N. Kiyotaki (2010): "Financial Intermediation and Credit Policy in Business Cycle Analysis," in Handbook of Monetary Economics, ed. by B. M. Friedman and M. Woodford, vol. 3, chap. 11, pp. 547-599. Elsevier.
  • Gertler, M., and N. Kiyotaki (2015): "Banking, Liquidity, and Bank Runs in an Infinite Horizon Economy," American Economic Review, 105(7), 2011-2043.
  • Gilchrist, S., and E. Zakrajsek (2012): "Credit Spreads and Business Cycle Fluctuations," American Economic Review, 102(4), 1692-1720.
  • Greenwood, R., S. G. Hanson, and L. J. Jin (2019): "Reflexivity in Credit Markets," NBER Working Paper.
  • He, Z., and A. Krishnamurthy (2013): "Intermediary Asset Pricing," American Economic Review, 103(2), 732-70.
  • Jimenez, G., S. Ongena, J.-L. Peydro, and J. Saurina (2012): "Credit Supply and Monetary Policy: Identifying the Bank Balance-Sheet Channel with Loan Applications," American Economic Review, 102(5), 2301-2326.
  • King, R. G., and S. T. Rebelo (1999): "Resuscitating Real Business Cycles," in Handbook of Macroeconomics, ed. by J. B. Taylor and M. Woodford, vol. 1, chap. 14, pp. 927-1007. Elsevier.
  • Lucas, R. J. (1976): "Econometric Policy Evaluation: A Critique," Carnegie-Rochester Conference Series on Public Policy, 1(1), 19-46.
  • Minsky, H. P. (1992): "The Financial Instability Hypothesis," Economics Working Paper Archive 74, Levy Economics Institute.
  • Rajan, R. G. (2005): "Has Financial Development Made the World Riskier?," Proceedings, Economic Policy Symposium, Jackson Hole, 313-369.
  • Rotemberg, J. J. (1982): "Sticky Prices in the United States," Journal of Political Economy, 90(6), 1187-1211.
  • Sims, E. R., and J. C. Wu (2019): "The Four Equation New Keynesian Model," NBER Working Paper 26067, National Bureau of Economic Research.
  • Smets, F., and R. Wouters (2007): "Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach," American Economic Review, 97(3), 586-606.
  • Svensson, L. E. (2017): "Cost-Benefit Analysis of Leaning Against the Wind," Journal of Monetary Economics, 90, 193-213.
  • Taylor, J. B. (1993): "Discretion versus Policy Rules in Practice," Carnegie-Rochester Conference Series on Public Policy, 39, 195-214.
  • Watson, M. W. (1993): "Measures of Fit for Calibrated Models," Journal of Political Economy, 101(6), 1011-1041.
  • Woodford, M. (2010): "Financial Intermediation and Macroeconomic Analysis," Journal of Economic Perspectives, 24(4), 21-44.
  • Woodford, M. (2012): "Inflation Targeting and Financial Stability," NBER Working Papers 17967, National Bureau of Economic Research.
  • Woodford, M. (2019): "Monetary Policy Analysis when Planning Horizons are Finite," NBER Macroeconomics Annual, 33(1), 1-50.

Auxiliary Appendices

Appendix A. The Analytics of the Conditional Mean-Volatility Trade-off

This section shows that a second-order perturbation approximation of a DSGE model solution is capable, at least in principle, of generating a non-trivial relationship between the conditional mean and conditional variance of its variables. To fix ideas, we consider a simple model with two variables, $y^{gap}$ and $\pi$, which we jointly denote $y \equiv (y^{gap}, \pi)$, approximated around some point $y_{ss} = (y^{gap}_{ss}, \pi_{ss})$ and driven by a vector of N.i.d. shocks $\varepsilon_t$. In what follows we analyze the conditional distribution $P(y_{t+1}|\mathcal{F}_t)$, where $\mathcal{F}_t = \sigma(\varepsilon_t)$ is the information set generated by $\varepsilon_t$.

A.1. The linear model. In this case, the first-order approximation to the policy function equals

$$y_{t+1} = y_{ss} + A(y_t - y_{ss}) + B\varepsilon_{t+1}.$$

It is immediately clear that only the mean of the conditional distribution can vary over time. Specifically

$$P(y_{t+1}|\mathcal{F}_t) = N\left(y_{ss} + A(y_t - y_{ss}),\; B\Sigma_\varepsilon B'\right).$$

Accordingly, the conditional variance of $y_{t+1}$ equals $B\Sigma_\varepsilon B'$ and so is independent of the underlying state $y_t$.37

A.2. Second-order approximation. In this case, the policy function is

$$y_{t+1} = y_{ss} + \tfrac{1}{2}g_{\sigma\sigma} + A(y_t - y_{ss}) + B\varepsilon_{t+1} + C(y_t - y_{ss})^2 + D(y_t - y_{ss})\varepsilon_{t+1} + E\varepsilon_{t+1}^2.$$

Clearly, the conditional distribution will no longer be normal because of the final term ($E\varepsilon_{t+1}^2$, which is scaled $\chi^2(1)$). The resulting conditional moments are

$$\mu^{2nd} = y_{ss} + \tfrac{1}{2}g_{\sigma\sigma} + A(y_t - y_{ss}) + C(y_t - y_{ss})^2 + E\Sigma_{\varepsilon,2},$$

and

$$(\sigma^{2nd})^2 = E\left[\left(y_{t+1} - \mu^{2nd}\right)^2\right] = E\left[\left(\left(B + D(y_t - y_{ss})\right)\varepsilon_{t+1} + E\left(\varepsilon_{t+1}^2 - \Sigma_{\varepsilon,2}\right)\right)^2\right] = \left(B + D(y_t - y_{ss})\right)^2\Sigma_{\varepsilon,2} + (3-1)\,E^2\Sigma_{\varepsilon,4}.$$

So here it becomes crucial whether $D$ is zero or not. With $D = 0$, the conditional variance of $y_{t+1}$ is constant and equal to a function of shock moments (up to order 4), that is,

$$D = 0 \;\Rightarrow\; (\sigma^{2nd})^2 = B^2\Sigma_{\varepsilon,2} + 2E^2\Sigma_{\varepsilon,4},$$

and so we would have no chance of witnessing changes in simulated conditional variances.

Via similar arithmetic,

$$skew = E\left[\left(\left(B + D(y_t - y_{ss})\right)\varepsilon_{t+1} + E\left(\varepsilon_{t+1}^2 - \Sigma_{\varepsilon,2}\right)\right)^3\right],$$

so the skew will be that of a mixture of normal and $\chi^2$-distributed variables. One can show that all higher moments will be time/state invariant unless $D \neq 0$, a condition which clearly holds in the NKV model.
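A small Monte Carlo check of this claim is straightforward. The values of $B$, $D$, $E$ and the unit shock variance below are illustrative, not taken from the model; the simulation draws the shock component of $y_{t+1}$ around its conditional mean and measures its dispersion at two different states:

```python
import random
import statistics

def cond_std(y_dev, B=1.0, D=0.5, E=0.3, sigma=1.0, n=100_000, seed=11):
    """Simulated conditional std of (B + D*(y_t - y_ss))*eps + E*(eps^2 - sigma^2),
    the stochastic part of the second-order policy function, for a given
    state deviation y_dev = y_t - y_ss."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        draws.append((B + D * y_dev) * e + E * (e * e - sigma * sigma))
    return statistics.pstdev(draws)

# With D != 0 the conditional volatility moves with the state...
print(cond_std(2.0) > cond_std(0.0))                  # True
# ...and with D = 0 it does not (same seed, so identical draws).
print(cond_std(2.0, D=0.0) == cond_std(0.0, D=0.0))   # True
```

The simulated values line up with the analytical expression $\sqrt{(B + D(y_t - y_{ss}))^2\sigma^2 + 2E^2\sigma^4}$ derived above.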

Appendix B. Derivation of the Risk-Augmented IS Curve

In this Appendix we show how to derive Equation 3 from Equation 4. To that end, we first write Equation 3 as

$$\frac{1}{1+\tilde{i}_t+\tilde{spr}_t} = E_t\left[\beta\,\frac{\xi_{t+1}}{\xi_t}\left(\frac{C_{t+1}}{C_t}\right)^{-\sigma}\frac{1}{\Pi_{t+1}}\right]$$
$$\exp\left(-\log(1+\tilde{i}_t+\tilde{spr}_t)\right) = \exp(\log\beta)\,E_t\left[\exp\left(\log\frac{\xi_{t+1}}{\xi_t}\right)\exp\left(-\sigma\log\frac{C_{t+1}}{C_t}\right)\exp\left(-\log\Pi_{t+1}\right)\right]$$
$$1 = \exp(\log\beta)\,E_t\left[\exp\left(\log(1+\tilde{i}_t+\tilde{spr}_t) + \log\frac{\xi_{t+1}}{\xi_t} - \sigma\log\frac{C_{t+1}}{C_t} - \log\Pi_{t+1}\right)\right]$$
$$1 = E_t\left[\exp\left(\log(1+\tilde{i}_t+\tilde{spr}_t) - \rho + \tilde{g}_{t+1} - \sigma\Delta\tilde{c}_{t+1} - \tilde{\pi}_{t+1}\right)\right] \qquad (18)$$

where

$$\Delta\tilde{c}_{t+1} \equiv \log\frac{C_{t+1}}{C_t}, \qquad \tilde{\pi}_{t+1} \equiv \log\Pi_{t+1}, \qquad \tilde{g}_{t+1} \equiv \log\frac{\xi_{t+1}}{\xi_t}, \qquad \rho \equiv -\log\beta.$$

Linearizing the right-hand side of Equation (18) around steady-state gives

$$1 \approx E_t\left[1 - \rho + i_t + spr_t + g_{t+1} - \sigma\Delta c_{t+1} - \pi_{t+1}\right]$$

where all the variables without tildes denote deviations of their tilde counterparts from steady-state values. Rearranging the last equation gives

$$c_t = E_t c_{t+1} + \frac{1}{\sigma}\rho - \frac{1}{\sigma}\left(i_t - E_t\pi_{t+1}\right) - \frac{1}{\sigma}spr_t - \frac{1}{\sigma}E_t g_{t+1}.$$

Next, we write the Euler equation in terms of the output gap $y_t^{gap} \equiv y_t - y_t^n$, where $y_t^n$ is the natural level of output. Using the market clearing condition $c_t = y_t$ and adding and subtracting $E_t[y_{t+1}^n] - y_t^n$, we get

$$y_t^{gap} = E_t y_{t+1}^{gap} - \frac{1}{\sigma}\left(i_t - E_t\pi_{t+1} - r_t^n\right) - \frac{1}{\sigma}spr_t - \frac{1}{\sigma}E_t g_{t+1}$$

where

$$r_t^n = \rho + \sigma\left(E_t y_{t+1}^n - y_t^n\right)$$

is the natural rate.

Appendix C. Derivation of the Law of Motion for Financial Conditions

We present two alternative ways of deriving the equation linking financial conditions with other endogenous variables. The first is rooted in the diagnostic expectations approach proposed and successfully applied in Bordalo, Gennaioli, and Shleifer (2018). The second inverts and simplifies relationships at the core of a standard financial accelerator model (Bernanke, Gertler, and Gilchrist, 1999). They are discussed in turn below.

C.1. The diagnostic expectations approach. Under diagnostic expectations indexed by $\theta$, Bordalo, Gennaioli, and Shleifer (2018) show that conditional expectations of some random variable $x$ can be computed as

$$E_t^\theta[x_{t+1}] = E_t[x_{t+1}] + \theta\left[E_t[x_{t+1}] - E_{t-1}[x_{t+1}]\right] \qquad (19)$$

where $E_t^\theta[\cdot]$ is the expectations operator under diagnostic expectations, $E_t[\cdot]$ is the rational expectations operator, and $\theta \geq 0$ indexes the severity of judging by representativeness. Households take $\theta$ as given. We assume the fundamentals of the economy, $\varepsilon_t$, follow the AR(1) process

$$\varepsilon_{t+1} = \rho\varepsilon_t + \upsilon_{t+1} \qquad (20)$$

where $0 < \rho < 1$ and $\upsilon_t$ is a sequence of i.i.d. normal random variables. This is the assumption in the standard analysis of, for example, Clarida, Gali, and Gertler (1999), and it is also the benchmark assumption in Bordalo, Gennaioli, and Shleifer (2018). Equation 5 is

$$y_t^{gap} = E_t y_{t+1}^{gap} - \frac{1}{\sigma}E_t\left(i_t - \pi_{t+1}\right) - \frac{1}{\sigma}spr_t - \frac{1}{\sigma}V(X_{t-1})\varepsilon_t^{y^{gap}}$$

or, since the last term captures the distortion in expectations due to diagnostic expectations,

$$y_t^{gap} = E_t^\theta y_{t+1}^{gap} - \frac{1}{\sigma}E_t^\theta\left(i_t - \pi_{t+1}\right) - \frac{1}{\sigma}spr_t.$$

Combining the last two equations, we get

$$\frac{1}{\sigma}V(X_{t-1})\varepsilon_t^{y^{gap}} = \left(E_t^\theta y_{t+1}^{gap} - E_t y_{t+1}^{gap}\right) + \frac{1}{\sigma}\left(E_t^\theta\pi_{t+1} - E_t\pi_{t+1}\right) \qquad (21)$$

Under rational expectations, the solution (see, for example, Clarida, Gali, and Gertler, 1999) is

$$y_t^{gap} = \alpha_{gap}\,\varepsilon_t \qquad (22)$$
$$\pi_t = \alpha_\pi\,\varepsilon_t \qquad (23)$$

for some constants αgap, απ. Using equations 20 and 19 in 21 we get

$$\begin{aligned}
\frac{1}{\sigma}V(X_{t-1})\varepsilon_t^{y^{gap}} &= \left(E_t^\theta y_{t+1}^{gap} - E_t y_{t+1}^{gap}\right) + \frac{1}{\sigma}\left(E_t^\theta\pi_{t+1} - E_t\pi_{t+1}\right) \\
&= \theta\left[E_t[y_{t+1}^{gap}] - E_{t-1}[y_{t+1}^{gap}]\right] + \frac{1}{\sigma}\theta\left[E_t[\pi_{t+1}] - E_{t-1}[\pi_{t+1}]\right] \\
&= \theta\left\{\alpha_{gap}\left(E_t[\varepsilon_{t+1}] - E_{t-1}[\varepsilon_{t+1}]\right) + \frac{1}{\sigma}\alpha_\pi\left(E_t[\varepsilon_{t+1}] - E_{t-1}[\varepsilon_{t+1}]\right)\right\} \\
&= \theta\left[\alpha_{gap} + \frac{1}{\sigma}\alpha_\pi\right]\left(\rho\varepsilon_t - \rho^2\varepsilon_{t-1}\right) \\
&= \rho\left[\alpha_{gap} + \frac{1}{\sigma}\alpha_\pi\right]\theta\,\upsilon_t
\end{aligned}$$

where the fourth line uses $E_t[\varepsilon_{t+1}] = \rho\varepsilon_t$ and $E_{t-1}[\varepsilon_{t+1}] = \rho^2\varepsilon_{t-1}$, and the last uses $\varepsilon_t - \rho\varepsilon_{t-1} = \upsilon_t$,

so we can identify $\frac{1}{\sigma}\varepsilon_t^{y^{gap}}$ with $\rho\left[\alpha_{gap} + \frac{1}{\sigma}\alpha_\pi\right]\upsilon_t$.
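Applied to the AR(1) fundamental, Equation 19 implies $E_t^\theta[\varepsilon_{t+1}] = \rho\varepsilon_t + \theta\rho\upsilon_t$: diagnostic forecasts overreact to the latest news by the factor $\theta$. A minimal numerical sketch, with purely illustrative values of $\rho$, $\theta$ and the shock:

```python
def diagnostic_forecast(eps_t, eps_tm1, rho, theta):
    """E_t^theta[eps_{t+1}] per Equation 19, for the AR(1) process
    eps_{t+1} = rho*eps_t + v_{t+1}."""
    rational_now = rho * eps_t             # E_t[eps_{t+1}]
    rational_before = rho ** 2 * eps_tm1   # E_{t-1}[eps_{t+1}]
    return rational_now + theta * (rational_now - rational_before)

rho, theta = 0.9, 1.0           # illustrative parameters
eps_tm1 = 1.0
shock = 0.5                     # v_t
eps_t = rho * eps_tm1 + shock   # 1.4
f = diagnostic_forecast(eps_t, eps_tm1, rho, theta)
# Rational forecast is 1.26; the diagnostic one adds theta*rho*v_t = 0.45,
# so f is approximately 1.71.
print(f)
```

Setting `theta=0` recovers the rational forecast, mirroring how the distortion term $\frac{1}{\sigma}V(X_{t-1})\varepsilon_t^{y^{gap}}$ vanishes without diagnosticity.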

Bordalo, Gennaioli, and Shleifer (2018) show in equation (10) of Proposition 3 that the average credit spread $S_t$ follows an ARMA(1,1) process

$$S_t = (1-b)\sigma_0 + bS_{t-1} - (1+\theta)b\sigma_1\varepsilon_t + \theta b^2\sigma_1\varepsilon_{t-1}.$$

Letting

$$spr_t \equiv S_t - \sigma_0$$

and equating financial conditions with spreads times some constant scaling factor, we arrive at

$$spr_t = b\,spr_{t-1} - (1+\theta)b\sigma_1\varepsilon_t + \theta b^2\sigma_1\varepsilon_{t-1} \;\;\Rightarrow\;\; \eta_t = b\,\eta_{t-1} - \frac{(1+\theta)b\sigma_1}{\sigma\gamma_\eta}\varepsilon_t + \frac{\theta b^2\sigma_1}{\sigma\gamma_\eta}\varepsilon_{t-1}$$

which can be rearranged as

$$\varepsilon_t = -\frac{\sigma\gamma_\eta}{(1+\theta)b\sigma_1}\eta_t + \frac{\sigma\gamma_\eta}{(1+\theta)\sigma_1}\eta_{t-1} + \frac{\theta b}{1+\theta}\varepsilon_{t-1}$$

or, more parsimoniously,

$$\varepsilon_t = \varrho\,\eta_t + \varkappa\,\eta_{t-1} + \nu\,\varepsilon_{t-1}$$

where

$$\varrho \equiv -\frac{\sigma\gamma_\eta}{(1+\theta)b\sigma_1}, \qquad \varkappa \equiv \frac{\sigma\gamma_\eta}{(1+\theta)\sigma_1}, \qquad \nu \equiv \frac{\theta b}{1+\theta}.$$

Since this equation holds for any $t$, we can recursively substitute to obtain an infinite-AR representation of the financial conditions process,

$$\begin{aligned}
\varrho\,\eta_t &= -\varkappa\,\eta_{t-1} + \varepsilon_t - \nu\,\varepsilon_{t-1} \\
&= -\varkappa\,\eta_{t-1} + \varepsilon_t - \nu\left(\varrho\,\eta_{t-1} + \varkappa\,\eta_{t-2} + \nu\,\varepsilon_{t-2}\right) \\
&= \varepsilon_t - (\varkappa + \nu\varrho)\,\eta_{t-1} - \nu\varkappa\,\eta_{t-2} - \nu^2\left(\varrho\,\eta_{t-2} + \varkappa\,\eta_{t-3} + \nu\,\varepsilon_{t-3}\right) \\
&\;\;\vdots \\
&= \varepsilon_t - (\varkappa + \nu\varrho)\sum_{i=0}^{+\infty}\nu^i\,\eta_{t-1-i}
\end{aligned}$$

where $\lim_{i\to+\infty}\nu^i = 0$ requires

$$|\nu| < 1 \;\Leftrightarrow\; |b| < \left|1 + \frac{1}{\theta}\right| \qquad (24)$$

which is automatically satisfied whenever $\theta \geq 0$.38

In summary, under the convergence criterion in Equation 24, we can further simplify to obtain

$$\eta_t = -\left(\frac{\varkappa}{\varrho} + \nu\right)\sum_{i=0}^{+\infty}\nu^i\,\eta_{t-1-i} + \frac{1}{\varrho}\varepsilon_t = \frac{b}{1+\theta}\sum_{i=0}^{+\infty}\left(\frac{\theta b}{1+\theta}\right)^i\eta_{t-1-i} - \frac{(1+\theta)b\sigma_1}{\sigma\gamma_\eta}\varepsilon_t.$$

Truncating at the second lag we arrive at the approximate law of motion,³⁹

$$\eta_t \approx u_1\eta_{t-1} + u_2\eta_{t-2} - (1+\theta)b\sigma_1\,\varepsilon_t.$$

Since we have already established that

$$y_t^{gap} = \mathbb{E}_t y_{t+1}^{gap} - \frac{1}{\sigma}\mathbb{E}_t(i_t - \pi_{t+1}) - \gamma_\eta\eta_t - \varepsilon_t$$

where $\varepsilon_t \equiv \frac{1}{\sigma}\mathbb{E}_t g_{t+1}^{\xi}$, therefore

$$\varepsilon_t = -y_t^{gap} + \mathbb{E}_t y_{t+1}^{gap} - \frac{1}{\sigma}\mathbb{E}_t(i_t - \pi_{t+1}) - \gamma_\eta\eta_t$$

and the truncated process for financial conditions becomes⁴⁰

$$\eta_t \approx u_1\eta_{t-1} + u_2\eta_{t-2} + (1+\theta)b\sigma_1\left(y_t^{gap} - \mathbb{E}_t y_{t+1}^{gap} + \gamma_\eta\eta_t\right)$$

After collecting terms we thus end up with

$$\eta_t \approx \lambda_\eta\eta_{t-1} + \lambda_{\eta\eta}\eta_{t-2} - \theta_y y_t^{gap} + \theta_\eta\mathbb{E}_t y_{t+1}^{gap} \qquad (25)$$

where the coefficients in Equation 25 are given by,

$$\lambda_\eta \equiv \frac{u_1}{1-(1+\theta)b\sigma_1\gamma_\eta}, \qquad \lambda_{\eta\eta} \equiv \frac{u_2}{1-(1+\theta)b\sigma_1\gamma_\eta}, \qquad \theta_y \equiv \frac{-(1+\theta)b\sigma_1}{1-(1+\theta)b\sigma_1\gamma_\eta}, \qquad \theta_\eta \equiv \frac{-(1+\theta)b\sigma_1}{1-(1+\theta)b\sigma_1\gamma_\eta}$$

with σ1 < 0 ⇒ θy > 0, θη > 0 as in our specification. This completes the derivation.
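As a quick sanity check on the sign claim, the snippet below evaluates the coefficients of Equation 25 at illustrative parameter values of our own choosing (including made-up truncated-AR coefficients $u_1$, $u_2$), using the reconstructed expressions $\theta_y = \theta_\eta = -(1+\theta)b\sigma_1/(1-(1+\theta)b\sigma_1\gamma_\eta)$, and confirms that $\sigma_1 < 0$ delivers positive coefficients:

```python
# Illustrative parameter values (ours, not the paper's estimates)
theta, b, sigma1, gamma_eta = 0.9, 0.8, -1.0, 0.5
u1, u2 = 0.4, 0.15  # truncated-AR coefficients, made up for illustration

denom = 1 - (1 + theta) * b * sigma1 * gamma_eta  # exceeds 1 when sigma1 < 0
lam_eta = u1 / denom
lam_eta_eta = u2 / denom
theta_y = -(1 + theta) * b * sigma1 / denom
theta_eta = -(1 + theta) * b * sigma1 / denom

assert denom > 1
assert theta_y > 0 and theta_eta > 0  # sigma1 < 0 implies theta_y, theta_eta > 0
```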

C.2. The classical financial accelerator approach. We start with two relationships at the heart of the financial accelerator model (Equations 4.17 and 4.24 in Bernanke, Gertler, and Gilchrist (1999), with notation and timing adjusted to coincide with those in our paper):

$$spr_t = -\chi\left(n_t - q_t - k_t\right), \qquad n_t = \frac{K}{N}r_t^k - \left(\frac{K}{N} - 1\right)\left(spr_{t-1} + i_{t-1} - \pi_t\right) + \theta n_{t-1}.$$

The second equation can be rewritten as

$$n_t = \theta n_{t-1} + \left(spr_{t-1} + i_{t-1} - \left(1 - \frac{K}{N}\right)\pi_t\right) + \frac{K}{N}\left(r_t^k - i_{t-1} - spr_{t-1}\right)$$

and further, exploiting the definition of the spread, $i_{t-1} \equiv \mathbb{E}_{t-1}r_t^k - spr_{t-1}$, as

$$n_t = \theta n_{t-1} + \left(spr_{t-1} + i_{t-1} - (1-\varrho)\pi_t\right) + \varrho\,\varepsilon_t^{rk}$$

where

$$\varrho \equiv \frac{K}{N} \qquad \text{and} \qquad \varepsilon_t^{rk} \equiv r_t^k - \mathbb{E}_{t-1}r_t^k.$$

Iterating backwards we obtain

$$n_t = \sum_{j=0}^{+\infty}\theta^j\left(spr_{t-j-1} + i_{t-j-1} - (1-\varrho)\pi_{t-j}\right) + \varrho\sum_{j=0}^{+\infty}\theta^j\varepsilon_{t-j}^{rk}$$

where we exploited $\lim_{j\to+\infty}\theta^j n_{t-j} = 0$. Subtracting $q_t + k_t$ from both sides and multiplying by $-\chi$ we get

$$spr_t = -\chi\left(n_t - q_t - k_t\right) = -\chi\sum_{j=0}^{+\infty}\theta^j\left(spr_{t-j-1} + i_{t-j-1} - (1-\varrho)\pi_{t-j}\right) - \varrho\chi\sum_{j=0}^{+\infty}\theta^j\varepsilon_{t-j}^{rk} + \chi\left(q_t + k_t\right)$$

or equivalently,

$$spr_t = \chi(1-\varrho)\pi_t - \chi\sum_{j=0}^{+\infty}\theta^j\left(spr_{t-j-1} + i_{t-j-1} - \theta(1-\varrho)\pi_{t-j-1}\right) - \varrho\chi\sum_{j=0}^{+\infty}\theta^j\varepsilon_{t-j}^{rk} + \chi\left(q_t + k_t\right).$$

Iterating on the Phillips curve yields

$$\pi_t = \kappa\,\mathbb{E}_t\sum_{j=0}^{+\infty}\beta^j y_{t+j}^{gap}$$

while the Taylor rule can be rearranged as

$$\begin{aligned}
i_t - \theta(1-\varrho)\pi_t &= \phi_{ygap}\,y_t^{gap} + \left(\phi_\pi - \theta(1-\varrho)\right)\pi_t \\
&= \phi_{ygap}\,y_t^{gap} + \left(\phi_\pi - \theta(1-\varrho)\right)\kappa\,\mathbb{E}_t\sum_{j=0}^{+\infty}\beta^j y_{t+j}^{gap} \\
&= \left(\phi_{ygap} + \kappa\left(\phi_\pi - \theta(1-\varrho)\right)\right)y_t^{gap} + \left(\phi_\pi - \theta(1-\varrho)\right)\kappa\,\mathbb{E}_t\sum_{j=1}^{+\infty}\beta^j y_{t+j}^{gap}.
\end{aligned}$$

Plugging this into our equation for the dynamics of spreads we obtain

$$\begin{aligned}
spr_t = {}& -\chi\sum_{j=0}^{+\infty}\theta^j\left(spr_{t-j-1} + \left(\phi_{ygap} + \kappa\left(\phi_\pi - \theta(1-\varrho)\right)\right)y_{t-j-1}^{gap} + \left(\phi_\pi - \theta(1-\varrho)\right)\kappa\,\mathbb{E}_{t-j-1}\sum_{m=1}^{+\infty}\beta^m y_{t-j-1+m}^{gap}\right) \\
& + \chi(1-\varrho)\kappa\,\mathbb{E}_t\sum_{j=0}^{+\infty}\beta^j y_{t+j}^{gap} - \varrho\chi\sum_{j=0}^{+\infty}\theta^j\varepsilon_{t-j}^{rk} + \chi\left(q_t + k_t\right)
\end{aligned}$$

or, after collecting terms and exploiting $\eta_t \equiv spr_t/(\sigma\gamma_\eta)$,

$$\begin{aligned}
\eta_t = {}& -\chi\eta_{t-1} - \chi\theta\eta_{t-2} + \frac{\chi(1-\varrho)\kappa}{\sigma\gamma_\eta}y_t^{gap} + \frac{\chi(1-\varrho)\kappa\beta}{\sigma\gamma_\eta}\mathbb{E}_t y_{t+1}^{gap} \\
& - \chi\sum_{j=2}^{+\infty}\theta^j\eta_{t-j-1} + \frac{\chi(1-\varrho)\kappa}{\sigma\gamma_\eta}\mathbb{E}_t\sum_{j=2}^{+\infty}\beta^j y_{t+j}^{gap} - \frac{\varrho\chi}{\sigma\gamma_\eta}\sum_{j=0}^{+\infty}\theta^j\varepsilon_{t-j}^{rk} + \frac{\chi}{\sigma\gamma_\eta}\left(q_t + k_t\right) \\
& - \frac{\chi}{\sigma\gamma_\eta}\sum_{j=0}^{+\infty}\theta^j\left\{\left(\phi_{ygap} + \kappa\left(\phi_\pi - \theta(1-\varrho)\right)\right)y_{t-j-1}^{gap} + \left(\phi_\pi - \theta(1-\varrho)\right)\kappa\,\mathbb{E}_{t-j-1}\sum_{m=1}^{+\infty}\beta^m y_{t-j-1+m}^{gap}\right\}
\end{aligned}$$

which can be approximated as,

$$\eta_t \approx \lambda_\eta\eta_{t-1} + \lambda_{\eta\eta}\eta_{t-2} - \theta_y y_t^{gap} - \theta_\eta\mathbb{E}_t y_{t+1}^{gap} \qquad (26)$$

where $\lambda_\eta$, $\lambda_{\eta\eta}$, $\theta_y$ and $\theta_\eta$ are equal, respectively, to $-\chi$, $-\chi\theta$, $-\chi(1-\varrho)\kappa/(\sigma\gamma_\eta)$ and $-\chi(1-\varrho)\kappa\beta/(\sigma\gamma_\eta)$, plus coefficients of a polynomial in $[\eta_{t-1},\eta_{t-2},y_t^{gap},\mathbb{E}_t y_{t+1}^{gap}]$ approximating the truncated terms in the second and third lines of the expression above. Again, the point of truncation of these expressions has been chosen to ensure that we account for the endogenous links between $y_t^{gap}$ and $\eta_t$, and that the resulting reduced-form specifications for both processes can, at least in principle, replicate the stylized facts characterizing their dynamics.

Appendix D. Analytical Derivations of Correlation Coefficients

To avoid having to rely on finite sample approximations to ergodic moments, we now derive expressions for analytical moments in terms of policy function coefficients. These expressions are crucial for our estimation process: without them, keeping sampling-induced inaccuracies under control would have required long and time-consuming simulation runs, effectively rendering optimization infeasible.
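The point can be illustrated with a toy AR(2) example (coefficients of our own choosing): the first-order autocorrelation is available exactly in closed form, while short-sample estimates scatter noticeably around it, which is precisely the sampling noise that analytical moments avoid:

```python
import random

F2, F3 = 0.9, -0.2  # illustrative AR(2) coefficients, not estimates
tau1_exact = F2 / (1 - F3)  # exact first-order autocorrelation (Yule-Walker)

rng = random.Random(1)
devs = []
for _ in range(20):  # twenty short simulated paths
    T = 200
    eta = [0.0, 0.0]
    for t in range(2, T):
        eta.append(F2 * eta[-1] + F3 * eta[-2] + rng.gauss(0.0, 1.0))
    m = sum(eta) / T
    num = sum((eta[t] - m) * (eta[t - 1] - m) for t in range(1, T))
    den = sum((x - m) ** 2 for x in eta)
    devs.append(abs(num / den - tau1_exact))

# Finite-sample estimates scatter around the exact value
assert max(devs) > 0.005
```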

Under the specification assumed in Equations (1)–(4), $\eta_{t-1}$ and $\eta_{t-2}$ are the only two state variables in the model. Assuming that a unique equilibrium exists, this implies that the reduced forms for $\eta_t$ and the output gap $y_t^{gap}$ will be given by⁴¹

$$\eta_t = F_2\eta_{t-1} + F_3\eta_{t-2} + F_1\varepsilon_t^{ygap}, \qquad y_t^{gap} = P_2\eta_{t-1} + P_3\eta_{t-2} + P_1\varepsilon_t^{ygap}$$

where the coefficients [F1, F2,F3] and [P1,P2,P3] are complicated, non-linear functions of the underlying structural parameters.

We can now characterize the laws of motion satisfied by $y_t^{gap}$, $\mathbb{E}_t y_{t+1}^{gap}$, $dy_t^{gap}$ and $\mathbb{E}_t dy_{t+1}^{gap}$ as functions of the $F$s and $P$s. This is done in the following sequence of lemmas.

Lemma 1. In the model considered, the level of the output gap is an ARMA(2,2) process given by

$$y_t^{gap} = F_2 y_{t-1}^{gap} + F_3 y_{t-2}^{gap} + P_1\varepsilon_t^{ygap} + \left(F_1P_2 - F_2P_1\right)\varepsilon_{t-1}^{ygap} + \left(F_1P_3 - F_3P_1\right)\varepsilon_{t-2}^{ygap}$$

Proof. We know that

$$y_{t-1}^{gap} - P_2\eta_{t-2} - P_3\eta_{t-3} - P_1\varepsilon_{t-1}^{ygap} = 0, \qquad y_{t-2}^{gap} - P_2\eta_{t-3} - P_3\eta_{t-4} - P_1\varepsilon_{t-2}^{ygap} = 0$$

and so the second equation in the reduced form above can be equivalently rewritten as

$$y_t^{gap} = \kappa_1\left(y_{t-1}^{gap} - P_2\eta_{t-2} - P_3\eta_{t-3} - P_1\varepsilon_{t-1}^{ygap}\right) + \kappa_2\left(y_{t-2}^{gap} - P_2\eta_{t-3} - P_3\eta_{t-4} - P_1\varepsilon_{t-2}^{ygap}\right) + P_2\eta_{t-1} + P_3\eta_{t-2} + P_1\varepsilon_t^{ygap}$$

where κ1 and κ2 are arbitrary constants. This can be rearranged as

$$y_t^{gap} = \kappa_1 y_{t-1}^{gap} + \kappa_2 y_{t-2}^{gap} + P_1\varepsilon_t^{ygap} - \kappa_1 P_1\varepsilon_{t-1}^{ygap} - \kappa_2 P_1\varepsilon_{t-2}^{ygap} + P_2\left(\eta_{t-1} - \kappa_1\eta_{t-2} - \kappa_2\eta_{t-3}\right) + P_3\left(\eta_{t-2} - \kappa_1\eta_{t-3} - \kappa_2\eta_{t-4}\right).$$

By setting

$$\kappa_1 = F_2 \qquad \text{and} \qquad \kappa_2 = F_3$$

and exploiting

$$\forall i \in \{1,2\}: \quad \eta_{t-i} - F_2\eta_{t-i-1} - F_3\eta_{t-i-2} = F_1\varepsilon_{t-i}^{ygap}$$

this simplifies to

$$y_t^{gap} = F_2 y_{t-1}^{gap} + F_3 y_{t-2}^{gap} + P_1\varepsilon_t^{ygap} + \left(P_2F_1 - F_2P_1\right)\varepsilon_{t-1}^{ygap} + \left(P_3F_1 - F_3P_1\right)\varepsilon_{t-2}^{ygap}$$

which completes the proof.□

Remark 2. Note that we have so far assumed that $\varepsilon_t^{ygap} \sim N(0,1)$, but we could equally introduce $\tilde\varepsilon_t^{ygap} \equiv P_1\varepsilon_t^{ygap} \sim N(0, P_1^2)$ and express the output gap as

$$y_t^{gap} = F_2 y_{t-1}^{gap} + F_3 y_{t-2}^{gap} + \tilde\varepsilon_t^{ygap} + \frac{P_2F_1 - F_2P_1}{P_1}\tilde\varepsilon_{t-1}^{ygap} + \frac{P_3F_1 - F_3P_1}{P_1}\tilde\varepsilon_{t-2}^{ygap}$$

that is, as a standard ARMA(2,2) process in which the noise has some non-unitary variance ($P_1^2$).
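The ARMA(2,2) representation in Lemma 1 can also be confirmed numerically: simulating the reduced form for $\eta_t$ and $y_t^{gap}$ with made-up coefficients (ours, purely for illustration), the difference between $y_t - F_2y_{t-1} - F_3y_{t-2}$ and the MA(2) part is zero to machine precision:

```python
import random

# Illustrative reduced-form coefficients (not estimates)
F1, F2, F3 = 0.7, 0.9, -0.2
P1, P2, P3 = 0.5, -0.3, 0.1

rng = random.Random(2)
T = 500
eps = [rng.gauss(0.0, 1.0) for _ in range(T)]
eta = [0.0] * T
y = [0.0] * T
for t in range(2, T):
    eta[t] = F2 * eta[t - 1] + F3 * eta[t - 2] + F1 * eps[t]
    y[t] = P2 * eta[t - 1] + P3 * eta[t - 2] + P1 * eps[t]

# Lemma 1: y_t - F2*y_{t-1} - F3*y_{t-2} equals the MA(2) part exactly
resid = [abs(y[t] - F2 * y[t - 1] - F3 * y[t - 2]
             - (P1 * eps[t]
                + (F1 * P2 - F2 * P1) * eps[t - 1]
                + (F1 * P3 - F3 * P1) * eps[t - 2]))
         for t in range(6, T)]
assert max(resid) < 1e-12
```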

Lemma 3. In the model considered, the change in the output gap is an ARMA(2,3) process given by

$$dy_t^{gap} = F_2\,dy_{t-1}^{gap} + F_3\,dy_{t-2}^{gap} + P_1\varepsilon_t^{ygap} + \left(F_1P_2 - (F_2+1)P_1\right)\varepsilon_{t-1}^{ygap} + \left(F_1(P_3 - P_2) - (F_3 - F_2)P_1\right)\varepsilon_{t-2}^{ygap} - \left(F_1P_3 - F_3P_1\right)\varepsilon_{t-3}^{ygap}$$

Proof. Letting

$$y_t^{gap} = A_1 y_{t-1}^{gap} + A_2 y_{t-2}^{gap} + A_3\varepsilon_t^{ygap} + A_4\varepsilon_{t-1}^{ygap} + A_5\varepsilon_{t-2}^{ygap}$$

we immediately obtain

$$\begin{aligned}
dy_{t+1}^{gap} = y_{t+1}^{gap} - y_t^{gap} &= \left(A_1 y_t^{gap} + A_2 y_{t-1}^{gap} + A_3\varepsilon_{t+1}^{ygap} + A_4\varepsilon_t^{ygap} + A_5\varepsilon_{t-1}^{ygap}\right) - \left(A_1 y_{t-1}^{gap} + A_2 y_{t-2}^{gap} + A_3\varepsilon_t^{ygap} + A_4\varepsilon_{t-1}^{ygap} + A_5\varepsilon_{t-2}^{ygap}\right) \\
&= A_1\,dy_t^{gap} + A_2\,dy_{t-1}^{gap} + A_3\varepsilon_{t+1}^{ygap} + (A_4 - A_3)\varepsilon_t^{ygap} + (A_5 - A_4)\varepsilon_{t-1}^{ygap} - A_5\varepsilon_{t-2}^{ygap}
\end{aligned}$$

Plugging in $A_1 = F_2$, $A_2 = F_3$, $A_3 = P_1$, $A_4 = P_2F_1 - F_2P_1$, $A_5 = P_3F_1 - F_3P_1$ from the previous Lemma and rearranging terms then immediately establishes the result. □

Lemma 4. In the model considered above, the conditional mean of the output gap is an ARMA(2,1) process satisfying

$$\mathbb{E}_t y_{t+1}^{gap} = F_2\,\mathbb{E}_{t-1}y_t^{gap} + F_3\,\mathbb{E}_{t-2}y_{t-1}^{gap} + P_2F_1\varepsilon_t^{ygap} + P_3F_1\varepsilon_{t-1}^{ygap}$$

Proof. We know that $y_t^{gap} = P_2\eta_{t-1} + P_3\eta_{t-2} + P_1\varepsilon_t^{ygap}$ and so

$$\mathbb{E}_t y_{t+1}^{gap} = \mathbb{E}_t\left(P_2\eta_t + P_3\eta_{t-1} + P_1\varepsilon_{t+1}^{ygap}\right) = P_2\left(F_2\eta_{t-1} + F_3\eta_{t-2} + F_1\varepsilon_t^{ygap}\right) + P_3\eta_{t-1} = \left(P_2F_2 + P_3\right)\eta_{t-1} + P_2F_3\eta_{t-2} + P_2F_1\varepsilon_t^{ygap}$$

which is an AR(2) in $\eta_t$. Accordingly, applying the first Lemma and rearranging terms, we know that $\mathbb{E}_t y_{t+1}^{gap}$ will be an ARMA(2,2) process with the following coefficients

$$\mathbb{E}_t y_{t+1}^{gap} = F_2\,\mathbb{E}_{t-1}y_t^{gap} + F_3\,\mathbb{E}_{t-2}y_{t-1}^{gap} + P_2F_1\varepsilon_t^{ygap} + \left(\left(P_2F_2 + P_3\right)F_1 - F_2P_2F_1\right)\varepsilon_{t-1}^{ygap} + \left(P_2F_3F_1 - F_3P_2F_1\right)\varepsilon_{t-2}^{ygap}$$

which after simplifying yields the ARMA(2,1) process above. □
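The same pathwise check used for Lemma 1 works here: constructing $\mathbb{E}_t y_{t+1}^{gap}$ from the state variables and verifying its ARMA(2,1) identity on a simulated path (coefficients are again illustrative choices of ours):

```python
import random

# Illustrative reduced-form coefficients (not estimates)
F1, F2, F3 = 0.7, 0.9, -0.2
P1, P2, P3 = 0.5, -0.3, 0.1

rng = random.Random(3)
T = 500
eps = [rng.gauss(0.0, 1.0) for _ in range(T)]
eta = [0.0] * T
for t in range(2, T):
    eta[t] = F2 * eta[t - 1] + F3 * eta[t - 2] + F1 * eps[t]

# x_t = E_t y_{t+1}^gap = (P2*F2 + P3)*eta_{t-1} + P2*F3*eta_{t-2} + P2*F1*eps_t
x = [0.0] * T
for t in range(2, T):
    x[t] = (P2 * F2 + P3) * eta[t - 1] + P2 * F3 * eta[t - 2] + P2 * F1 * eps[t]

# Lemma 4: x_t - F2*x_{t-1} - F3*x_{t-2} = P2*F1*eps_t + P3*F1*eps_{t-1}
resid = [abs(x[t] - F2 * x[t - 1] - F3 * x[t - 2]
             - (P2 * F1 * eps[t] + P3 * F1 * eps[t - 1]))
         for t in range(6, T)]
assert max(resid) < 1e-12
```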

Lemma 5. In the model considered above, the conditional mean of the change in the output gap is an ARMA(2,2) process satisfying

$$\mathbb{E}_t y_{t+1}^{gap} - y_t^{gap} = F_2\,\mathbb{E}_{t-1}dy_t^{gap} + F_3\,\mathbb{E}_{t-2}dy_{t-1}^{gap} + \left(P_2F_1 - P_1\right)\varepsilon_t^{ygap} + \left(F_1(P_3 - P_2) + F_2P_1\right)\varepsilon_{t-1}^{ygap} + \left(F_3P_1 - F_1P_3\right)\varepsilon_{t-2}^{ygap}$$

Proof. We can combine the two previous results, namely

$$y_t^{gap} = A_1 y_{t-1}^{gap} + A_2 y_{t-2}^{gap} + A_3\varepsilon_t^{ygap} + A_4\varepsilon_{t-1}^{ygap} + A_5\varepsilon_{t-2}^{ygap}, \qquad \mathbb{E}_t y_{t+1}^{gap} = B_1\,\mathbb{E}_{t-1}y_t^{gap} + B_2\,\mathbb{E}_{t-2}y_{t-1}^{gap} + B_3\varepsilon_t^{ygap} + B_4\varepsilon_{t-1}^{ygap} + B_5\varepsilon_{t-2}^{ygap}$$

to find, after noting that A1 = B1 = F2 and A2 = B2 = F3, that

$$\mathbb{E}_t y_{t+1}^{gap} - y_t^{gap} = F_2\left(\mathbb{E}_{t-1}y_t^{gap} - y_{t-1}^{gap}\right) + F_3\left(\mathbb{E}_{t-2}y_{t-1}^{gap} - y_{t-2}^{gap}\right) + \left(B_3 - A_3\right)\varepsilon_t^{ygap} + \left(B_4 - A_4\right)\varepsilon_{t-1}^{ygap} + \left(B_5 - A_5\right)\varepsilon_{t-2}^{ygap}$$

which, after plugging in for the remaining Ai and Bi from the previous Lemmas, completes the proof. □

Having characterized the laws of motion for $y_t^{gap}$, $\mathbb{E}_t y_{t+1}^{gap}$, $dy_t^{gap}$ and $\mathbb{E}_t dy_{t+1}^{gap}$, it will also be helpful to establish how these depend on $\eta$, as that will allow us to quickly compute their respective correlations with $\eta_t$, as well as autocorrelations.

Lemma 6. If

$$\eta_t = F_2\eta_{t-1} + F_3\eta_{t-2} + F_1\varepsilon_t^{ygap}, \qquad y_t^{gap} = P_2\eta_{t-1} + P_3\eta_{t-2} + P_1\varepsilon_t^{ygap}$$

then

$$\begin{aligned}
\mathbb{E}_t y_{t+1}^{gap} &= \left(P_2F_2 + P_3\right)\eta_{t-1} + P_2F_3\eta_{t-2} + P_2F_1\varepsilon_t^{ygap} \\
dy_t^{gap} &= P_2\eta_{t-1} + \left(P_3 - P_2\right)\eta_{t-2} - P_3\eta_{t-3} + P_1\varepsilon_t^{ygap} - P_1\varepsilon_{t-1}^{ygap} \\
\mathbb{E}_t dy_{t+1}^{gap} &= \left(P_2F_2 + (P_3 - P_2)\right)\eta_{t-1} + \left(P_2F_3 - P_3\right)\eta_{t-2} + \left(P_2F_1 - P_1\right)\varepsilon_t^{ygap}
\end{aligned}$$

Proof. Straight from the respective definitions, we have

$$\mathbb{E}_t y_{t+1}^{gap} = \mathbb{E}_t\left(P_2\eta_t + P_3\eta_{t-1} + P_1\varepsilon_{t+1}^{ygap}\right) = P_2\left(F_2\eta_{t-1} + F_3\eta_{t-2} + F_1\varepsilon_t^{ygap}\right) + P_3\eta_{t-1} = \left(P_2F_2 + P_3\right)\eta_{t-1} + P_2F_3\eta_{t-2} + P_2F_1\varepsilon_t^{ygap}$$

and

$$dy_t^{gap} = y_t^{gap} - y_{t-1}^{gap} = P_2\eta_{t-1} + P_3\eta_{t-2} + P_1\varepsilon_t^{ygap} - \left(P_2\eta_{t-2} + P_3\eta_{t-3} + P_1\varepsilon_{t-1}^{ygap}\right) = P_2\eta_{t-1} + \left(P_3 - P_2\right)\eta_{t-2} - P_3\eta_{t-3} + P_1\varepsilon_t^{ygap} - P_1\varepsilon_{t-1}^{ygap}.$$

Using the result above we can then write

$$\begin{aligned}
\mathbb{E}_t dy_{t+1}^{gap} &= \mathbb{E}_t\left(P_2\eta_t + \left(P_3 - P_2\right)\eta_{t-1} - P_3\eta_{t-2} + P_1\varepsilon_{t+1}^{ygap} - P_1\varepsilon_t^{ygap}\right) \\
&= P_2\left(F_2\eta_{t-1} + F_3\eta_{t-2} + F_1\varepsilon_t^{ygap}\right) + \left(P_3 - P_2\right)\eta_{t-1} - P_3\eta_{t-2} - P_1\varepsilon_t^{ygap} \\
&= \left(P_2F_2 + P_3 - P_2\right)\eta_{t-1} + \left(P_2F_3 - P_3\right)\eta_{t-2} + \left(P_2F_1 - P_1\right)\varepsilon_t^{ygap}. \qquad\square
\end{aligned}$$

Remark 7. It then immediately follows that

$$\begin{aligned}
\mathrm{cov}\left(\eta_t, y_t^{gap}\right) &= \mathbb{E}\left[\eta_t\left(P_2\eta_{t-1} + P_3\eta_{t-2} + P_1\varepsilon_t^{ygap}\right)\right] = P_2\gamma(1) + P_3\gamma(2) + P_1F_1 \\
\mathrm{cov}\left(\eta_t, \mathbb{E}_t y_{t+1}^{gap}\right) &= \mathbb{E}\left[\eta_t\left(\left(P_2F_2 + P_3\right)\eta_{t-1} + P_2F_3\eta_{t-2} + P_2F_1\varepsilon_t^{ygap}\right)\right] = \left(P_2F_2 + P_3\right)\gamma(1) + P_2F_3\gamma(2) + P_2F_1^2 \\
\mathrm{cov}\left(\eta_t, dy_t^{gap}\right) &= \mathbb{E}\left[\eta_t\left(P_2\eta_{t-1} + \left(P_3 - P_2\right)\eta_{t-2} - P_3\eta_{t-3} + P_1\varepsilon_t^{ygap} - P_1\varepsilon_{t-1}^{ygap}\right)\right] = P_2\gamma(1) + \left(P_3 - P_2\right)\gamma(2) - P_3\gamma(3) + P_1F_1 - P_1F_2F_1 \\
\mathrm{cov}\left(\eta_t, \mathbb{E}_t dy_{t+1}^{gap}\right) &= \mathbb{E}\left[\eta_t\left(\left(P_2F_2 + P_3 - P_2\right)\eta_{t-1} + \left(P_2F_3 - P_3\right)\eta_{t-2} + \left(P_2F_1 - P_1\right)\varepsilon_t^{ygap}\right)\right] = \left(P_2F_2 + P_3 - P_2\right)\gamma(1) + \left(P_2F_3 - P_3\right)\gamma(2) + \left(P_2F_1 - P_1\right)F_1
\end{aligned}$$

where γ(i) is the i-th order autocovariance of $\eta_t$.
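These expressions can be cross-checked against a long simulation. The sketch below computes cov(η_t, y_t^gap) from the formula above, with the autocovariances γ(i) obtained via the Yule-Walker relations, and compares it with its sample counterpart (all coefficient values are illustrative choices of ours):

```python
import random

F1, F2, F3 = 0.7, 0.9, -0.2   # illustrative, not estimates
P1, P2, P3 = 0.5, -0.3, 0.1

# Exact autocovariances of the AR(2) eta process via Yule-Walker
tau1 = F2 / (1 - F3)
tau2 = F2 * tau1 + F3
g0 = F1**2 / (1 - F2 * tau1 - F3 * tau2)  # variance of eta
g1, g2 = tau1 * g0, tau2 * g0

cov_y_formula = P2 * g1 + P3 * g2 + P1 * F1  # cov(eta_t, y_t^gap)

rng = random.Random(4)
T = 400_000
e1 = e2 = 0.0  # eta_{t-1}, eta_{t-2}
acc, n = 0.0, 0
for t in range(T):
    eps = rng.gauss(0.0, 1.0)
    eta = F2 * e1 + F3 * e2 + F1 * eps
    y = P2 * e1 + P3 * e2 + P1 * eps
    if t > 1000:  # discard burn-in; both processes have mean zero
        acc += eta * y
        n += 1
    e2, e1 = e1, eta
cov_y_sim = acc / n

assert abs(cov_y_sim - cov_y_formula) < 0.05
```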

Remark 8. Of course, since

$$\eta_t = F_2\eta_{t-1} + F_3\eta_{t-2} + F_1\varepsilon_t^{ygap}$$

the autocovariances γ(1), γ(2) and γ(3) are straightforward to compute. Furthermore, we can also solve for the first three autocorrelation coefficients τ(i), i ∈ {1, 2, 3}, directly from

$$\tau(i) \equiv \mathrm{corr}\left(\eta_t, \eta_{t-i}\right) = \frac{\mathrm{cov}\left(\eta_t, \eta_{t-i}\right)}{\sqrt{\mathrm{var}\left(\eta_t\right)\mathrm{var}\left(\eta_{t-i}\right)}} = \frac{\gamma(i)}{\gamma(0)}$$

with

$$\tau(1) = \frac{F_2}{1-F_3}, \qquad \tau(2) = F_3 - \frac{F_2^2}{F_3 - 1}, \qquad \tau(3) = \frac{F_2^3 + 2F_2F_3 - F_2F_3^2}{1-F_3}.$$
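The closed forms follow from the Yule-Walker recursion $\tau(i) = F_2\tau(i-1) + F_3\tau(i-2)$ with $\tau(0)=1$; the snippet below verifies the algebra for an arbitrary illustrative choice of $F_2$ and $F_3$:

```python
# Illustrative AR(2) coefficients (ours, not estimates)
F2, F3 = 0.9, -0.2

# Yule-Walker recursion for autocorrelations
tau1 = F2 / (1 - F3)
tau2 = F2 * tau1 + F3
tau3 = F2 * tau2 + F3 * tau1

# Closed forms from the remark above
tau2_closed = F3 - F2**2 / (F3 - 1)
tau3_closed = (F2**3 + 2 * F2 * F3 - F2 * F3**2) / (1 - F3)

assert abs(tau2 - tau2_closed) < 1e-12
assert abs(tau3 - tau3_closed) < 1e-12
```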

Appendix E. Auxiliary Model Validation Exercises

E.1. Simulated Model Paths: Output Gap Growth. While Table 3 confirms that the NKV model generates moments close to those in the data, it does not provide any information on how simulated variable paths compare to their empirical counterparts. To illustrate this, we follow the approach in Adrian, Boyarchenko, and Giannone (2019) and estimate conditional quantiles of the output gap growth distribution using quantile regressions (QR), with output gap growth on the left-hand side and lags of inflation, output gap growth and financial conditions on the right-hand side.

The left panel of Figure 12 plots the time series of output gap growth together with the median, 5th and 95th conditional output gap growth quantiles estimated using QR. The right panel shows the model counterparts, obtained following the exact same procedure but using one representative simulated path instead of the data. Of course, looking at one path is not a substitute for a more formal econometric test, but it does provide an initial indication that, just as in the data, the NKV model generates a conditional 95th quantile that is almost constant and a 5th conditional quantile that is very responsive to financial conditions.⁴²

Figure 12.

Output Gap Growth Conditional Distribution and Realizations

Citation: IMF Working Papers 2020, 236; 10.5089/9781513561066.001.A001

Note: The 5th quantile, median, and 95th quantile of the conditional output gap growth distribution for one-quarter ahead. The conditional moments are estimated using quantile regressions featuring Δyt+1gap on the left hand side, and its lag, inflation, and financial conditions on the right hand side. Panel (a) shows the data while Panel (b) shows data simulated from the NKV model.

Even though the average levels of all simulated variables match their empirical counterparts, the NKV model appears to have some difficulty matching the relative volatilities of the conditional quantiles: the 95th quantile appears excessively smooth, while both the conditional mean/median and the 5th conditional quantile are somewhat too volatile. We conjecture that these discrepancies reflect the unconstrained nature of the QR estimates: effectively, each conditional quantile is estimated separately, yet all of them are meant to be matched by a single structural model. Importantly, however, and in contrast to any linear Gaussian model, in which the $k$-th and $(1-k)$-th conditional quantiles move symmetrically around the conditional mean, the NKV does replicate the empirical skew: the top quantile is relatively stable and the bottom one tends to be more volatile.
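The mechanics of a stable upper quantile coexisting with a volatile lower one are easy to reproduce in a stylized setting. In the toy data-generating process below (our construction, not the estimated NKV model), the conditional mean moves one-for-one against conditional volatility, so the 95th conditional quantile is constant by construction while the 5th swings with the state; binned empirical quantiles recover exactly this skew:

```python
import random

rng = random.Random(5)
z95 = 1.645  # normal 95th-percentile critical value

# Stylized DGP: the mean offsets volatility, so mu + z95*vol is constant
data = []
for _ in range(60_000):
    eta = rng.uniform(0.0, 1.0)      # stand-in "financial conditions" state
    vol = 0.5 + eta                  # conditional volatility rises with eta
    mu = 1.0 - z95 * vol             # conditional mean falls with volatility
    data.append((eta, mu + vol * rng.gauss(0.0, 1.0)))

def bin_quantile(lo, hi, q):
    xs = sorted(y for e, y in data if lo <= e < hi)
    return xs[int(q * len(xs))]

bins = [(i / 5, (i + 1) / 5) for i in range(5)]
q05 = [bin_quantile(lo, hi, 0.05) for lo, hi in bins]
q95 = [bin_quantile(lo, hi, 0.95) for lo, hi in bins]

# The lower quantile swings across bins; the upper quantile is nearly flat
assert max(q05) - min(q05) > 5 * (max(q95) - min(q95))
```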

E.2. Simulated Model Paths: Inflation. To further validate the model we now verify whether the empirical dynamics of inflation and its conditional moments are matched by the NKV. In particular, it seems interesting to investigate whether the stability of the upper quantiles of output gap growth, and its tight links with financial conditions, extend to other variables. Since inflation is tied to the output gap via the Phillips curve relationship (Equation 11), it appears a natural candidate. Panel (a) of Figure 13 shows that this is not the case in the data. Reassuringly, Panel (b) of the same Figure confirms that our non-linear amplification mechanism doesn't counterfactually imply conditional quantile stability in simulated inflation either.

Figure 13.

Inflation Conditional Distribution and Realizations


Note: The 5th quantile, median, and 95th quantile of the conditional inflation distribution. The series are estimated using quantile regressions with one-quarter-ahead inflation on the left hand side, and current output gap, inflation, and financial conditions on the right hand side. Panel (a) shows the data while Panel (b) shows data simulated from the NKV model. Because inflation is zero in the model's deterministic steady state we have demeaned the data to make the two panels more directly comparable.

We have also verified that while financial conditions are “highly significant” in forecasting the shape of the conditional output gap distribution, they do not forecast the tails of the inflation distribution in a statistically significant manner (with the FCI coefficient lacking significance at the 50 percent level). In fact, conditional heteroskedasticity of inflation is well described by the level of past inflation itself, with the co-movement pattern in Figure 14 very different from the one in Figure 1. The NKV model captures these stylized facts qualitatively, which can be inferred from Figure 14, showing that it replicates the positive slope of the relationship between inflation’s conditional median and volatility.

Figure 14.

Inflation Conditional Median and Volatility


Note: Panel (a) shows estimates of the conditional median and conditional volatility while Panel (b) shows the conditional median and volatility simulated from the NKV model.

Notably, neither the width of the NKV-implied conditional distribution in Figure 13b nor the slope and R² of the bivariate regression in Figure 14b are as close to their data counterparts as was the case for output gap growth. We attribute this fact to the oil price shocks of the 1970s and the arguably looser policy stance adopted in their aftermath, which our parsimonious model was not designed to account for. Accordingly, the fact that the NKV fails to generate correspondingly large or volatile inflation is largely to be expected, as is the fact that the lack of highly volatile outcomes results in a lower slope of the conditional median-volatility regression.⁴³

E.3. Term structures of growth-at-risk. Adrian, Grinberg, Liang, and Malik (2018) estimate the term structure of growth-at-risk, as measured by the evolution of the 5th percentile of the conditional GDP growth distribution, using local projections. The shapes of the estimated growth-at-risk term structures, sorted by the initial level of financial conditions, are consistent with endogenous risk-taking. When initial financial conditions are easy (bottom decile of FCI), downside risks over the first couple of years are lower than when initial financial conditions are average (middle four deciles of FCI), but downside risks increase sharply relative to the average afterwards. Conversely, when initial financial conditions are tight, likely reflecting the realization of a bad state, downside risks are very high in the near term and then diminish over time.

Figure 15 compares the elasticities of the 5th percentile of the output gap growth distribution in the data to those implied by the NKV model. The 5th percentile of conditional output gap growth shows less downside risk in the near term when financial conditions are initially loose, and higher downside risks when initial conditions are tight. Importantly, the simulations replicate the crossing of the growth-at-risk term structures, though at the same time, and in line with the results in Section 4.4, they fall somewhat short of the empirically estimated magnitudes.⁴⁴

Figure 15.

Term Structures of Growth-at-Risk Cross


Note: The figure shows term structures of output-gap-at-risk, the 5th quantile of the ∆ygap distribution. The three lines condition on easy, average, and tight financial conditions (Top 10, Middle 40, Bottom 10, respectively). Panel (a) shows the empirical term structures, while Panel (b) shows the simulated term structures from the NKV model.
1

For helpful comments, the authors thank Ben Bernanke, Javier Bianchi, Olivier Blanchard, Luis Brandao-Marques, Marco Del Negro, Christopher Erceg, Gaston Gelos, Gunes Kamber, Sylvain Leduc, Jesper Lindé, Roland Meeks, Matthias Paustian, Lars Svensson, Andrea Tambalotti, Hou Wang and conference participants at the European Central Bank, Federal Reserve Bank of New York, International Monetary Fund, National Bank of Poland and ASSA 2020. Jie Yu provided outstanding research assistance. The views expressed in this paper are those of the authors and do not necessarily represent those of the International Monetary Fund or the Federal Reserve System.

1

In line with Curdia and Woodford (2010) we find that adjusting a Taylor rule for financial conditions can lead to materially better outcomes. However, and even despite an unchanged target criterion for policy (as in Curdia and Woodford, 2016), more aggressive Taylor rules in our setup may lead to financial instability.

2

More specifically, and as discussed in more detail subsequently, none of these setups robustly replicates the negative conditional mean – conditional volatility relationship, which characterizes output gap growth, and the models struggle to account for the volatility paradox and GaR term structures.

3

Watson (1993) finds that the spectral power of the RBC model is low at business cycle frequencies, with Cogley and Nason (1995) showing that the dynamic properties of the outputs are determined by the exogenous inputs with the model contributing almost nothing. The analysis in Chapter 3 of Galí (2008) confirms that the same is true of the NK model, where the shapes of the IRFs mirror those of the exogenous shocks.

4

Of course, there may be other microfoundations that also deliver the NKV. Irrespective of the preferred interpretation, changes in policy can and do materially affect the dynamics of financial conditions in our model. Indeed, the instability associated with increasingly activist monetary policy, which we document, only arises because we account for endogenous links between real and financial variables.

5

Relatedly, Woodford (2012) characterizes optimal monetary policy in a setting with financial crises, and finds that inflation-targeting rules should be modified to explicitly consider the possibility of such crises occurring. Gertler and Kiyotaki (2015) add a banking sector featuring liquidity mismatches, and focus on the implications of bank runs.

6

Easier policy can increase net worth and relax capital constraints of banks, which may affect the supply of credit or asset prices in a procyclical way (Bernanke and Blinder, 1988; Gertler and Kiyotaki, 2010; He and Krishnamurthy, 2013). Low interest rates can lead to compressed risk premia because investors “reach for yield” on account of fixed nominal rate targets tied to their liabilities (Rajan, 2005). To achieve those targets, they may increase leverage and funding risks (Brunnermeier and Pedersen, 2009; Adrian and Shin, 2010, 2014). These risks can also manifest themselves as a deterioration in asset quality (Altunbas, Gambacorta, and Marques-Ibanez, 2010; Jimenez, Ongena, Peydro, and Saurina, 2012; Dell’Ariccia, Laeven, and Suarez, 2017).

7

Specifically, the observation that periods of low volatility and endogenous risk-taking contribute to a buildup of imbalances and future negative growth is the “volatility paradox” (Brunnermeier and Sannikov, 2014) discussed earlier, and our model’s ability to account for it forms one of the key litmus tests considered.

8

According to the argument, macroprudential policies are best suited to address financial vulnerabilities in part because the effects of monetary policy are broad and it cannot directly address high leverage and funding risks of financial intermediaries.

9

Caballero and Simsek (2019) provide another rationale for using monetary policy to lean against the wind. In their model, monetary policy affects the discount rate (not the risk premia on risky assets) of heterogenous investors (optimists and pessimists), but can act like a leverage limit (especially valuable when the policy rate is near the zero lower bound). Thus it reduces asset prices in booms, which will soften the asset price bust when the economy moves into a recession.

10

This decomposition only requires integrability and a Markovian structure. More precisely, the decomposition $\mathbb{E}_t[g_{t+1}] - \mathbb{E}_{t-1}[g_{t+1}] = V_{t-1}\epsilon_t^{ygap}$, where $V_{t-1}$ is a predictable integrable process and $\epsilon_t^{ygap}$ is a martingale difference sequence, holds for any integrable discrete time process $\mathbb{E}_t[g_{t+1}]$ (see, for example Blanchet-Scalliet and Jeanblanc, 2020). With the additional assumption of a Markovian structure, we can then write $V_{t-1} = V(X_{t-1})$.

11

Of particular import to our subsequent derivations is the fact that diagnostic expectations continue to satisfy the law of iterated expectations.

12

See equation (10) of Proposition 3 in Bordalo, Gennaioli, and Shleifer (2018).

13

Note that while Bordalo, Gennaioli, and Shleifer (2018) proceed under the simplifying assumption of linear utility, the assumption of risk aversion would only affect the values of σ0 and σ1 which we take as given.

14

This is a consequence of the fact that the volatility of productivity shocks, which would be expected to move the natural rate of output, is set to zero in the baseline version of our model.

15

We are exploiting the observation that the introduction of endogenous heteroskedasticity does not affect the coefficients of the policy function, which we discuss in more detail in Section 4.

16

Note that, to arrive at this specification, we have substituted out the equilibrium law of motion for ytgap.

17

Naturally, we could extend the setup by introducing $n$ other shocks with volatilities $\sigma_i^2$ (e.g., productivity and monetary policy shocks). Under the assumption that $\epsilon_t^{ygap}$ is the only heteroskedastic shock, the volatility formula generalizes to $\mathrm{var}(m_t|\mathcal{F}_{t-1}) = b_m^2V^2(X_{t-1}) + \sum_{i=1}^n b_{m_i}^2\sigma_i^2$, where the $b_{m_i}$ characterize how the log-SDF loads on the homoskedastic shocks.

18

We occasionally refer to ηt as the price of risk or as endogenous output gap volatility, which is only meant to reflect the fact that η effectively pins down the price of risk via Equation (9).

19

Theories of leverage cycles predict precisely that: low volatility boosts risk-taking and hence activity in the short term, but leads to the buildup of medium-term risks. This intuition is formalized in Adrian and Boyarchenko (2015), where leverage cycles are associated with the endogenous buildup of systemic risk.

20

Crucially, neither of the two solution methods restricts conditional second moments to be constant, nor do they suffer from “certainty equivalence”.

Relatedly, Appendix A shows that a second order perturbation approximation is the lowest-order approximation that can produce a negative relationship between the conditional mean and conditional volatility that we document. We also verified that going beyond second order approximations occasionally led to numerical instabilities, necessitating the use of pruning (Andreasen, Fernandez-Villaverde, and Rubio-Ramirez, 2018), but typically had a negligible impact on the negative relation between the conditional mean and volatility.

21

In particular, by setting the volatility of ϵtygap to zero, switching-off the “financial accelerator” (γη = 0) and enabling monetary and productivity shocks, we immediately recover the textbook NK model.

22

The approximate symbol is due to the fact that the discussion here ignores the endogenous components driven by the current and expected values of the output gap and so only provides a first-pass approximation to the equilibrium properties of financial conditions.

23

These were first documented in Adrian, Boyarchenko, and Giannone (2019) for output and in Adrian and Duarte (2018) for the output gap, with further details on the underlying econometric procedures contained therein.

24

The higher sensitivity of the lower quantile to underlying shocks makes it a good measure of risk, accounting for its central role in the IMF’s GaR methodology discussed in Adrian, Grinberg, Liang, and Malik (2018).

25

Practically, negative volatility would be equivalent to the shock having opposite effects on the output gap under some constellations of the states, which is something we want to explicitly rule out.

Since perturbation methods are incompatible with non-differentiabilities like the one introduced by the max operator, when using Dynare we instead use $V(X_t) \equiv \sqrt{(\nu - \varrho'X_t)^2}$. Because for our chosen parameter values $\nu - \varrho'X_t$ is seldom negative, this change doesn't materially impact the properties of the model, which is why we use both specifications interchangeably.

26

In practice, we set the 97th conditional quantile equal to 1.246, as implied by the coefficients of the bivariate regression in Figure 1, with deviations from our target slope and intercept arising due to the adjustment which ensures that the conditional volatility of output gap growth remains non-negative. For the benchmark model specification this resulted in $\nu = -0.009$ and $\varrho' = [-0.019, 0.037, 0.007]$, with the coefficients corresponding to: lagged and twice-lagged values of $\eta$, and the lagged value of the output gap, respectively.

As in Campbell and Cochrane (1999) we also experimented with alternative functional forms for V (·) but found that, within plausible parameter ranges, there were few notable changes from adopting less parsimonious specifications which continued to generate stable (if not exactly constant) upper conditional quantiles. Accordingly, the results reported subsequently focus on the adjusted specification of V(·) in Equation 16.

27

While these specifications have been selected because they feature: i) no financial frictions, ii) frictions on the borrower side, and iii) frictions on the lender side, we experimented with a much broader range of non-linear models and found similar issues: typically, the slope of the trade-off was shock dependent and even if it ended up correctly signed and of approximately the right slope, its R2 invariably was so small that conditional quantiles in model simulations moved symmetrically around the conditional mean / median, failing to display any skew.

28

Brunnermeier and Sannikov (2016) and Chodorow-Reich (2014) point out that expansionary policy may recapitalize banks with potential implications for the broader validity of the volatility paradox. We are sympathetic to this argument and agree that it should be kept in mind when considering NKV-implied policy advice. However, since our model is estimated on a sample in which related effects were unlikely to have played a major role, we consider its ability to replicate the paradox to be a desirable feature.

29

Interestingly, the correlation between vulnerability and Moody’s Baa-Aaa spread, which is the difference between seasoned BAA and AAA corporate bond yields, only equals around -0.1 in our sample.

30

In particular, in the absence of any persistence, the NK forecast would be for all variables to immediately return to the steady state and remain there.

31

Every time the Taylor rule coefficients double, the output gap and inflation respond less to the same initial shock. In the limit, they wouldn't respond at all – which corresponds to optimal policy and full stabilization. Notably, despite the increasing weights, the magnitude of the associated interest rate cuts doesn't diverge to minus infinity. This occurs because the impact of the shock on current and expected future inflation, and consequently also the contemporaneous output gap, is decreasing in the Taylor rule coefficients, so the (absolute) size of the monetary policy interventions necessary to ensure stability is not proportional to the coefficients' magnitude.

32

When the coefficients on inflation and the output gap in the Taylor rule increase simultaneously, then the highest values for which the model remains stable equal 2.1 × [1.5, 0.125] = [3.15, 0.2625]. When only the inflation coefficient is increased, the highest stable combination is [2.35 × 1.5, 0.125] = [3.525, 0.125], while when only the output gap coefficient is increased, the highest stable combination is [1.5, 1.2 × 0.125] = [1.5, 0.15]. The fact that the maximum multiplier decreases when only the output gap coefficient is scaled suggests, in line with the narrative, that (excessive) output gap stability is at the heart of the Blanchard-Kahn violations. In essence, what matters for output gap stability is both the absolute magnitude of the corresponding Taylor rule coefficient and its size relative to that on inflation: when both are high, the output gap doesn't end up "too" stable, but as the weight on π falls, the relative importance of ygap increases, its volatility falls and stable equilibria disappear.

33

We’re abstracting from heteroskedastic volatility here as it is not central to the argument.

34

Of course, in the model both financial conditions and the output gap would end up simultaneously unstable, which manifests itself as a violation of Blanchard-Kahn conditions.

35

However, the “fatness” of the Δy^{gap} left tails is hardly affected – as made clear by Panel (b) of Figure 11.

36

The maximum values of the Taylor rule coefficients for which the model remains stable are somewhat different under the extended Taylor rule, however. If only the inflation and output gap coefficients are scaled, then the multiplier is 1.96, with the corresponding coefficients on inflation, the output gap and expected financial conditions equal to [1.96 × 1.5, 1.96 × 0.125, -0.1] = [2.94, 0.245, -0.1], while if all three coefficients are simultaneously scaled, then the maximum multiplier equals 4.2, with the corresponding parameters equal to 4.2 × [1.5, 0.125, -0.1] = [6.3, 0.525, -0.42].

37

This also suggests why, more broadly, the conditional mean is likely to be more volatile than the conditional volatility: changes in the conditional mean are a first-order phenomenon, whereas changes in the conditional volatility are not.

38

To ensure that the process for spreads is stable, we require b ∈ (−1, 1).

39

Truncating the infinite AR representation at the second lag allows the resulting simplified process to generate the overshooting of FCI observed in the data. This would not be the case if we truncated at the first lag, with the resulting process counterfactually converging monotonically back to steady state.
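The mechanism can be illustrated with a minimal simulation. The coefficients below are purely illustrative assumptions, not the paper’s estimates: a two-lag truncation whose characteristic roots are complex overshoots the steady state after a unit shock, whereas a one-lag truncation converges monotonically.

```python
# Illustrative sketch (assumed coefficients): impulse responses of truncated
# AR processes. The two-lag version overshoots; the one-lag version does not.

def impulse_response(coeffs, horizon):
    """Response of x_t = sum_j coeffs[j] * x_{t-1-j} to a unit shock at t = 0."""
    path = [1.0]  # unit shock on impact
    for t in range(1, horizon):
        path.append(sum(c * path[t - 1 - j]
                        for j, c in enumerate(coeffs) if t - 1 - j >= 0))
    return path

ar1 = impulse_response([0.8], 30)        # one-lag truncation: monotone decay
ar2 = impulse_response([1.2, -0.4], 30)  # two-lag truncation: complex roots

print(min(ar1) > 0)  # AR(1) never crosses the steady state
print(min(ar2) < 0)  # AR(2) overshoots below the steady state
```

With coefficients [1.2, -0.4] the roots of z² − 1.2z + 0.4 are complex with modulus √0.4 ≈ 0.63, so the response oscillates while decaying – the qualitative overshooting pattern the footnote describes.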

40

We have eliminated the term $\frac{1}{\sigma}\mathbb{E}_t(i_t - \pi_{t+1})$ since in our setup the real rate is approximately $o(\varepsilon_t)$.

41
In the body of the article these processes were denoted as:
$$\eta_t = a_{\eta\eta}\,\eta_{t-1} + a_{\eta\eta,-1}\,\eta_{t-2} + a_{\eta\varepsilon}\,\varepsilon_t^{ygap},$$
$$y_t^{gap} = a_{y\eta}\,\eta_{t-1} + a_{y\eta,-1}\,\eta_{t-2} + a_{y\varepsilon}\,\varepsilon_t^{ygap}.$$

In what follows we replace the a’s with F’s and P’s in order to cut down on notation and for consistency with our Mathematica derivations, which are available upon request.
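As a minimal sketch, the two processes above can be simulated as follows. The coefficient values are placeholders chosen only so the system is stable; the paper’s actual values come from its estimation.

```python
# Illustrative simulation (placeholder coefficients) of the two-lag processes
# for eta_t and y_t^gap driven by a common shock eps_t^{ygap}.
import random

a_ee, a_ee1, a_eeps = 0.9, -0.2, 1.0  # eta equation (assumed; roots 0.5, 0.4)
a_ye, a_ye1, a_yeps = 0.5, -0.1, 1.0  # output-gap equation (assumed)

random.seed(0)
T = 200
eta, ygap = [0.0, 0.0], [0.0, 0.0]
for t in range(2, T):
    eps = random.gauss(0.0, 1.0)  # shock eps_t^{ygap}
    eta.append(a_ee * eta[t - 1] + a_ee1 * eta[t - 2] + a_eeps * eps)
    ygap.append(a_ye * eta[t - 1] + a_ye1 * eta[t - 2] + a_yeps * eps)
```

Note that, as in the footnote, the output gap loads on lags of η and on the contemporaneous shock, but not on its own lags.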

42

Note that no effort was made to ensure identical underlying shock sequences. While we could have filtered out shocks for which the simulated conditional mean series matched its empirical counterpart, we eschewed doing so as that would, in our view, give an overly positive picture of actual model fit.

43

While we omit these results to save space, we have verified that our underlying estimates are robust to sample splits (in line with Adrian, Boyarchenko, and Giannone (2019)). Eliminating the 1970s from our sample therefore simply removes the high conditional median points in Panel (a) of Figure 14, and in so doing brings the slope of the empirical trade-off and its R² closer into line with those implied by the simulations.

44

As in the Volatility Paradox case, amplifying these responses without generating excessive unconditional volatility proved challenging within the constraints imposed by the NKV model.
