Macroprudential Stress Tests: A Reduced-Form Approach to Quantifying Systemic Risk Losses

Abstract

We present a novel approach that incorporates individual entity stress testing and losses from systemic risk effects (SE losses) into macroprudential stress testing. SE losses are measured using a reduced-form model to value financial entity assets, conditional on macroeconomic stress and the distress of other entities in the system. This valuation is made possible by a multivariate density which characterizes the asset values of the financial entities making up the system. In this paper, this density is estimated using CIMDO, a statistical approach that infers densities consistent with entities’ probabilities of default, which in this case are estimated using market-based data. Hence, SE losses capture the effects of interconnectedness structures that are consistent with markets’ perceptions of risk. We then show how SE losses can be decomposed into the likelihood of distress and the magnitude of losses, thereby quantifying the contribution of specific entities to systemic contagion. To illustrate the approach, we quantify SE losses due to Lehman Brothers’ default.

I. Introduction

The global financial crisis demonstrated that relatively small initial losses in the financial system can be magnified to systemic dimensions. In response, authorities have, over the past few years, prioritized the development of tools that attempt to quantify losses due to contagion effects and due to the feedback mechanisms between the financial sector and the macroeconomy (what we will call losses from systemic risk effects, or SE losses). Systemic effects have the potential to magnify moderate exogenous shocks into substantial negative financial outcomes with large welfare effects. However, data constraints, incomplete understanding of the intricacies of amplification mechanisms, open questions about how best to model those mechanisms, and uncertainty about how they interact in complex financial systems impose significant impediments on both researchers and policymakers trying to develop stress-testing frameworks that can adequately quantify SE losses.

We propose an easily implementable and robust method that incorporates individual entity stress tests and losses from SE into macroprudential stress tests. The proposed “encompassing” method aims to support the development of macroprudential stress tests by combining the positive features of stress tests run on individual entities with an empirical approach to systemic risk measurement using publicly available information. In the proposed framework, losses from SE are quantified using a reduced-form model that values financial entities’ assets, conditional on adverse macroeconomic scenarios and the distress of other entities in the system.2 These losses can then be added to those estimated from microprudential stress tests in order to assess the total losses that could occur in a systemic event.

The quantification of losses from SE relies on the estimation of a multivariate density that typifies the asset values of the financial entities in a system. While it is possible to use different parametric approaches to estimate this density (e.g., t-distributions, mixed distributions, etc.), we recommend the use of the Consistent Information Multivariate Density Optimizing (CIMDO) approach (see Section V.B and Appendix III for a description).3 Importantly, the CIMDO approach infers interconnectedness structures that are consistent with empirical probabilities of distress (PoDs). Since PoDs can be estimated with market-based or supervisory information, CIMDO can be implemented in a variety of data environments, covering a broad set of countries. When PoD estimations are done with market data, losses from SE take account of interconnectedness structures that are consistent with markets’ perceptions of risk. Capturing these structures properly quantifies the nonlinear increases in losses observed in crises.

Systemic risk amplification mechanisms are complex and change across time. These mechanisms are fueled by intricate macrofinancial loops and interconnectedness across financial entities and markets, which pave the road for loss contagion. Therefore, when modeling systemic risk, it is essential to identify such structures and how they might change at different stages of the financial cycle. Interconnectedness structures are defined by both direct interconnectedness (usually through contractual exposures across entities) and by indirect interconnectedness (usually caused by agents’ behavioral responses due to exposures to common risk factors and market price channels), which might not be apparent in normal times but can become significant in periods of financial distress, when agents’ reactions change, possibly giving rise to the nonlinear increases in magnitude and the speed of loss propagation observed during financial crises.

The applied literature has taken two main approaches to quantifying systemic risk. These are (i) the development of simulated models (network or agent-based models (ABMs)) that attempt to explicitly model the agents’ behavioral responses that implicitly underpin the interconnectedness structures among financial institutions (FIs), and (ii) the estimation of empirical reduced-form models that attempt to infer from market data the interconnectedness structures that result from agents’ actions, without explicitly modeling their behavior.

Simulated models have made important contributions to the analysis of contagion and have highlighted specific systemic risk channels due to agents’ behavior. These frameworks, however, require highly granular data that do not exist in many countries. Moreover, the explicit modeling of agents’ behavior makes these models complex; therefore, they frequently assess limited sets of amplification mechanisms and often become highly intricate when such sets are expanded. This may have limited their role in policy analysis (Basel Committee on Banking Supervision 2015).4

Alternatively, empirical reduced-form models infer the interconnectedness structures (direct and indirect) across entities that result from agents’ actions, which are reflected in market prices. Systemic risk metrics estimated from these models are “reduced-form,” meaning that although they capture the effects of agents’ behavior, they do not provide information about the specific behaviors that define the channels of contagion that can lead to the materialization of systemic risk. Moreover, metrics derived from empirical models might be subject to issues related to market data, especially when markets are illiquid. Nevertheless, these models have proven useful for quantifying a variety of systemic risk measures.

However, one issue with many of these metrics is that they are usually not comparable to metrics obtained from standard stress tests; thus, empirical models to quantify systemic risk have not been embedded into macroprudential stress-testing frameworks.

The proposed framework characterizes financial systems as portfolios of interconnected entities. This allows us to typify the asset values of the individual entities in the system, and the association across those asset values (what we call the interconnectedness structure), in a multivariate density.5 The asset value multivariate density and an asset valuation model allow us to quantify expected losses suffered by specific entities conditional on other entities in the system falling into distress. We define these conditional losses as SE losses.6 SE losses can be added to the losses of individual entities estimated by microprudential stress tests. As a result, after a microprudential stress test identifies, under a specific macrofinancial scenario, the set of entities that would fall into distress due to a “first round” of shocks, the proposed approach would permit an identification of the set of entities that would fall into distress after the losses due to SE are also incurred (“second round” effects).

Our approach to estimating SE losses maintains the benefits of empirical models while offering several advantages for implementation. As with other empirical approaches, our model can be estimated with publicly available data (without the need for highly detailed or granular supervisory information). When estimated with market-based information, it embeds market perceptions of direct and indirect asset value interconnectedness structures. Moreover, such structures are inferred without the need to explicitly model agents’ behavioral responses. However, in contrast with other empirical models, the multivariate distribution underpinning the model allows us to quantify the probabilities of the distress events happening and the intensity of losses under these events. The multivariate framework also permits an easy integration of nonbank financial intermediaries into the analysis of systemic risk, therefore capturing the interactions between banks, insurance companies, pension funds, investment funds, and hedge funds when quantifying systemic risk amplification losses.7

Lastly, we briefly discuss how the proposed method can be extended as proposed in Hiebert and others (2018) to calibrate countercyclical capital buffers.

The structure of the paper is as follows. Section II explores the theoretical foundations behind SE in greater detail and discusses their implementation in stress-testing frameworks. Section III formalizes the encompassing method that we propose and explains how it can be implemented. In Section IV, we analyze expected SE losses, and decompose them by their factors (likelihood of systemic effect and magnitude of SE losses). Section V presents the asset valuation and CIMDO methods that we use to quantify SE losses. The results of an application to the U.S. financial system at the time when Lehman Brothers defaulted are presented in Section VI. A brief discussion on the implications of the proposed method for the calibration of countercyclical capital buffers is presented in Section VII. Section VIII concludes the paper.

II. Systemic Risk

Systemic effects are diverse and complex, and can vary in structure and magnitude at different points in time. Theoretically, SE are due to various causes. They can be due to generalized shocks that affect several entities or markets and can enter into negative feedback loops between the macroeconomic outlook and financial sector losses (Bernanke and others 1999, Kiyotaki and Moore 1997, and Adrian and Shin 2014). Systemic effects can also be caused by contagion due to direct and indirect interconnectedness across financial entities and markets. Direct interconnectedness due to contractual obligations among financial entities (Allen and Gale 2000, Freixas and others 2000, and Eisenberg and Noe 2001) can cause “falling dominos” that amplify initial losses. Indirect interconnectedness can be caused by exposures to common risk factors, by asset fire sales, especially when agents’ financial positions are bounded by collateral constraints (Bhattacharya and Gale 1987, Lorenzoni 2008) and asset sell-offs, due to information asymmetries across agents (Jacklin and Bhattacharya 1988, and Khandani and Lo 2011). Indirect interconnectedness might not be apparent during calm periods but can take great relevance in periods of high volatility. Hence, interconnectedness structures are complex and likely unstable in periods of financial distress, possibly giving rise to the nonlinear increases in magnitude and speed of loss propagation observed during financial crises.

The applied literature has taken two main approaches to modelling SE. These include the development of simulated models of the banking system and the estimation of reduced-form indices of systemic risk in empirical models applied to market and publicly available data.

Simulated models have incorporated network features to integrate SE. Simulated models (Alessandri and others 2009 and Aikman and others 2009) embed different SE, including direct contagion through interbank loan exposures (Eisenberg and Noe 2001), common exposures and fire-sale externalities (Cifuentes, Ferrucci, and Shin 2005), and liquidity runs (Tressel 2010). These models have been useful for understanding specific amplification mechanisms, as they allow modelers to trace the impact of a shock through the various channels. However, due to the complexity of modeling SE, these models usually focus on specific mechanisms and frequently omit simultaneous effects, which likely play a role in real crises. Data at the level of granularity required to implement many of these models do not exist in many countries or might be difficult to obtain for arm’s-length entities (such as the IMF). These models also rely on key structural assumptions that are difficult to model or calibrate, for instance the elasticities of asset prices to fire sales and banks’ behavioral responses to shocks.8 These limitations may explain why the implementation of these models is complex and why they have not been able to generate realistic estimates of contagion losses of the magnitude witnessed during the global financial crisis (Elsinger and others 2013). Nonetheless, important improvements have been incorporated in recent models (Cont and Schaanning 2016). Appendix IV presents a summary of the models in Eisenberg and Noe (2001) and Cifuentes and others (2005), as these models have been at the core of many simulated network models used in central banks’ macro stress tests.9

The development of simulated models has recently shifted its attention to agent-based models (ABMs). These models try to explain behavioral responses among agents in the system and build on the contributions of behavioral economics to better explain the microeconomic behavior of agents in financial markets.10 They include a heterogeneous set of agents, as well as a topology that describes their methods of interaction within an environment. They therefore attempt to go further than network models by departing from mechanical behavior and incorporating the heterogeneity of agents and their behavior. ABMs’ computational complexity and data requirements increase very rapidly as more features are added to the models. However, these obstacles may matter less in the future given the increasing availability of detailed data and the progress in computing power.

Empirical models attempt to infer from market-based information the interconnectedness structures that can pave the road for contagion. Because empirical models build on co-movements observed in market prices, they are, in principle, better at incorporating the effects of direct and indirect contagion channels; however, they rely on the quality of market-based data. These models usually require much less granular data for their implementation than simulated models. Combined with the availability of high-frequency market data, this makes them adequate for high-frequency monitoring of systemic risk. However, because these models are reduced form, they are not useful for singling out specific amplification mechanisms. Recent academic contributions to the measurement of systemic risk include Diebold and Yilmaz (2009), Segoviano and Goodhart (2009), Adrian and Brunnermeier (2016), and Acharya and others (2017).11 However, such metrics are usually not comparable to metrics estimated by stress tests, nor easily transformable into systemic risk amplification losses.

III. The Encompassing Method

The proposed encompassing method aims to develop an operational macroprudential stress test. It combines the positive features of both established microprudential stress tests and reduced-form systemic risk models.

  • The method makes use of microprudential stress tests that are already implemented (either as bottom-up or top-down) by authorities (central banks, supervisors, regulators, etc.) or IMF staff,12 with supervisory or publicly available data that focuses on fundamentals of individual entities. Thus, microprudential stress tests that vary in complexity and sophistication, depending on the financial system for which they have been developed, can be easily incorporated into our framework.

  • For the estimation of SE losses, the method relies on a reduced-form approach that incorporates market perceptions of financial systems’ interconnectedness structures. Hence, SE loss estimates embed realistic market reactions and become computationally simple and relatively light on data requirements. When high frequency market-based data are available, the proposed framework is a cost-efficient approach to implementing macroprudential stress tests.

  • Moreover, even in cases where authorities have decided to follow the path of developing alternative simulated models, estimates of SE losses produced by the proposed framework can be helpful to policymakers by improving calibrations of such models. The cost efficiency of the proposed encompassing method allows easy parallel running of frameworks that provide policymakers with enhanced insights in addition to those produced by the alternative models.

Microprudential stress tests assess the ability of an individual FI to overcome a distressed macroeconomic situation. Using a detailed description of the balance sheet of the FI under scrutiny, the microprudential stress test (MicroST) analyzes how the components of the balance sheet react under an adverse macroeconomic scenario (denoted adv), providing a valuation of the FI’s assets under the stress scenario. The stress test thus provides an estimation of the conditional expected valuation of the individual firm, under an adverse macroeconomic scenario. The value of the firm’s assets is formalized as:

$V_A^{MicroST} = E(V_A \mid adv) \qquad (1)$

The adverse scenario can be defined by specific values of a vector of adverse macroeconomic outcomes (deterministic scenario), in which case adv = {A} is a singleton. Alternatively, the adverse scenario can be defined as a distribution of adverse values for the vector of macroeconomic outcomes (stochastic scenario). In the practice of stress testing, the first option is typically used. However, both definitions of adverse scenarios can be considered under the proposed framework.

It is worth noting that microprudential stress tests do not specify whether additional financial stress is (or is not) occurring in the system, beyond the financial stress that is consistent with the assumed stressed macroeconomic scenario.13

The proposed framework identifies and describes SE losses by conditioning the valuation of an institution on additional financial events. The conditioning set can be based on the information provided by a microprudential stress test or by any other type of information that would prompt the regulatory authority to explore the amplification loss induced by the realization of a given event. We formalize these ideas with the following three definitions (see also the Venn diagram in Figure 1, introduced now in preparation for Section IV):

  • The microprudential stress-test loss of bank A is the difference between the value of bank A in normal times and its value under an adverse macroeconomic scenario (hatched rectangle in Figure 1):
    $Loss_{micro}(A) = E(V_A) - E(V_A \mid adv) \qquad (2)$
  • The SE loss of bank A, assuming the realization of any given financial contagion event S (for instance, the default of a bank that failed the microprudential stress test), is the difference between the value of bank A under an adverse macroeconomic scenario and its value assuming the realization of a financial distress event S as well as of the adverse macroeconomic scenario (dark-circled area in Figure 1):
    $Loss_{SE}(A \mid S) = E(V_A \mid adv) - E(V_A \mid adv \cap S) \qquad (3)$
  • The total loss under a systemic event (TS) is then the loss assuming the realization of a financial event S in the stressed macro scenario, and it is the sum of the micro stress test loss and the SE loss:
    $Loss_{TS}(A \mid S) = Loss_{micro}(A) + Loss_{SE}(A \mid S) = E(V_A) - E(V_A \mid adv \cap S) \qquad (4)$
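To make the three definitions concrete, the sketch below computes them by Monte Carlo for a stylized two-bank system. It is only an illustration: the joint distribution, the 10 percent adverse-scenario cutoff, and the distress threshold for bank B are hypothetical stand-ins for the estimated multivariate density and scenario definitions, not part of the paper’s empirical framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical joint simulation of a macro factor and two banks' asset values.
m = rng.normal(size=n)                        # macro factor (lower = worse)
v_a = 100 + 8 * m + 6 * rng.normal(size=n)    # asset value of bank A
v_b = 100 + 8 * m + 6 * rng.normal(size=n)    # asset value of bank B

adv = m < np.quantile(m, 0.10)                # adverse macro scenario: worst 10% of outcomes
s = v_b < 90                                  # contagion event S: bank B in distress

# Equations (2)-(4): unconditional and conditional expected asset values of A.
e_va = v_a.mean()
e_va_adv = v_a[adv].mean()
e_va_adv_s = v_a[adv & s].mean()

loss_micro = e_va - e_va_adv                  # microprudential stress-test loss, eq. (2)
loss_se = e_va_adv - e_va_adv_s               # loss from systemic effects, eq. (3)
loss_ts = loss_micro + loss_se                # total loss under the systemic event, eq. (4)

print(f"Loss_micro = {loss_micro:.2f}, Loss_SE = {loss_se:.2f}, Loss_TS = {loss_ts:.2f}")
```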

There are several reasons why losses due to systemic effects are usually not captured by microprudential stress tests; hence, we expect $Loss_{TS}(A \mid S) > Loss_{micro}(A)$:

  • Losses in MicroST do not usually capture nonlinearities adequately. This affects the link between macro scenarios and risk parameters (assets’ probabilities of default, loss-given-default parameters, etc.) and the treatment of portfolio diversification effects;

  • The contagion mechanisms across entities and markets as discussed in Section II;

  • The macro-financial nexus: when the adverse scenario is defined as a distribution, the information that specific financial entities are in default narrows the set of adverse scenarios that are consistent with these events, which is why the calculation of conditional losses is made on a narrower (and most likely more stringent) set of adverse scenarios ($adv \cap S$).

The implementation of a macroprudential stress test when the scenario is deterministic would follow Figure 2:

  • Given the assumptions regarding the scenario, risk parameters (probabilities of default, loss given default, and exposures at default for different assets and entities) are individually estimated for each of the FIs analyzed (see the left-hand side block of Figure 2).14 We note that under the proposed framework, the integration of banks and nonbanks is straightforward.

  • These parameters are used as inputs to estimate losses and profitability for each entity under the microprudential stress test and scenario. Note that these parameters are also used to estimate SE losses, thus identifying the set of entities that can experience capital shortfalls after second-round effects.

Figure 1. Characterization of SE Losses in a Venn Diagram

Source: Authors. Notes: The states of nature of the adverse macroeconomic scenario are represented by the hatched rectangle. The financial stress event S is represented by the dark-circled area. The total loss under a systemic event for a bank is the difference between its value in normal times and its value assuming both the adverse macroeconomic scenario and the realization of the contagion event S.
Figure 2. Encompassing Method for Macroprudential Stress Tests

Source: Authors. Note: Under specific macrofinancial scenarios, the encompassing method allows analysts to quantify losses from contagion across entities (banks and nonbanks); identify whether specific entities would be able to “survive” (that is, whether their capital would remain above the hurdle rate) the additional losses brought by SE given the default of specific entities; and calculate the contribution to contagion losses from each “connecting” entity in the system, permitting the decomposition of contributions into the likelihood of the event and the intensity (amount) of induced contagion losses.

IV. Losses due to Systemic Effects: Analysis

In this section, we explore further the quantification of SE losses when the financial event S is the distress of a specific bank. Equations (3) and (4) become:15

$Loss_{SE}(A_i \mid A_j) = E(V_{A_i} \mid adv) - E(V_{A_i} \mid A_j \cap adv) \qquad (5)$
$Loss_{TS}(A_i \mid A_j) = Loss_{micro}(A_i) + Loss_{SE}(A_i \mid A_j) = E(V_{A_i}) - E(V_{A_i} \mid A_j \cap adv) \qquad (6)$

We define a vulnerability index that represents the impact on a specific entity, scaled by its total assets, of the default of another entity. In this example, we focus on the losses experienced by the financial institution $A_i$:

$V(A_i \mid A_j) = \dfrac{Loss_{TS}(A_i \mid A_j)}{TA(A_i)} \qquad (7)$

where TA(Ai) refers to the total assets of Ai. The higher this index, the stronger the impact of Aj’s default on the losses experienced by Ai. Computing this index for any financial institution provides an estimate of the vulnerability to a given contagion event for all the FIs that form a financial system.

The SE loss thus represents the overall impact of SE on a specific FI. This measure considers all the potential loops and feedback effects from a set of defaulting banks on the FI whose loss is assessed. Note that since the entire system of FIs is considered, a high SE loss of A assuming the realization of the default of B does not necessarily mean that there is a strong direct connection between A and B. The path of contagion may, for instance, include another FI strongly connected to A and B and explain the high conditional loss of A given that B defaulted. These effects are explored further in the following section.
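As a rough illustration of equation (7), the following sketch builds a vulnerability matrix for a three-entity system. The correlation matrix, volatilities, distress thresholds, and total assets are all hypothetical, and the adverse-scenario conditioning is omitted for brevity; in the paper’s framework the joint density would come from CIMDO and the conditioning set from the macro scenario.

```python
import numpy as np

rng = np.random.default_rng(1)
names = ["A1", "A2", "A3"]
total_assets = np.array([500.0, 300.0, 200.0])           # hypothetical TA(Ai)

# Hypothetical correlated asset values (a stand-in for the estimated multivariate density).
corr = np.array([[1.0, 0.6, 0.4],
                 [0.6, 1.0, 0.5],
                 [0.4, 0.5, 1.0]])
z = rng.multivariate_normal(np.zeros(3), corr, size=500_000)
values = total_assets * (1 + 0.1 * z)                     # asset values with 10% volatility
in_distress = values < 0.92 * total_assets                # illustrative distress thresholds

# Equation (7): V(Ai|Aj) = Loss_TS(Ai|Aj) / TA(Ai), adverse-scenario conditioning omitted.
vuln = np.full((3, 3), np.nan)
for i in range(3):
    for j in range(3):
        if i != j:
            loss_ts = values[:, i].mean() - values[in_distress[:, j], i].mean()
            vuln[i, j] = loss_ts / total_assets[i]

print(names)
print(np.round(vuln, 4))    # row i, column j: vulnerability of Ai to Aj's distress
```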

A. Decomposing SE Losses

SE losses can be decomposed to assess the impact of shocks through various connecting links in the financial network. In addition, the decomposition helps identify the most likely contagion events and provides an assessment of the intensity of those events (the event should represent a situation or scenario in which the state—defaulting or surviving—of each FI is clearly defined). We propose such a decomposition in an example with four firms before generalizing the formula to a set of N firms.

In our example, we consider four FIs: A, B, C, and D. As before, the rectangle in Figure 3 represents the states of nature corresponding to the adverse macroeconomic scenario, and inside each circle are the states of nature in which the bank labelling the circle is defaulting. To decompose the SE loss of A assuming the default of B, we partition the area in which B defaults as follows. In each subset of the partition, each financial institution (except A, since we are assessing its loss) is either defaulting or surviving (B is always defaulting). The partition of the set of possible events when B defaults is:

$\{B\} = \{B \cap C \cap D,\; B \cap \bar{C} \cap D,\; B \cap C \cap \bar{D},\; B \cap \bar{C} \cap \bar{D}\} \qquad (8)$
Figure 3. Decomposition of Conditional Losses: Four Financial Institutions

Source: Authors.

Using the law of total expectation, the SE loss of A assuming B defaults is decomposed as follows:

$$\begin{aligned} Loss_{SE}(A \mid B) ={} & P(B \cap C \cap D \mid B)\, Loss_{SE}(A \mid B \cap C \cap D) + P(B \cap \bar{C} \cap D \mid B)\, Loss_{SE}(A \mid B \cap \bar{C} \cap D) \\ & + P(B \cap C \cap \bar{D} \mid B)\, Loss_{SE}(A \mid B \cap C \cap \bar{D}) + P(B \cap \bar{C} \cap \bar{D} \mid B)\, Loss_{SE}(A \mid B \cap \bar{C} \cap \bar{D}) \qquad (9) \end{aligned}$$

This decomposition provides important information on the probability and intensity of losses of different defaulting sets. Under the assumption that B defaults, this decomposition highlights the factors explaining the SE loss suffered by A:

  • The probability of a specific event can be high (for instance, the probability that C and D default), and

  • The expected loss suffered by A under that event (B, C, D default together) can be high.
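A minimal numerical check of equation (9) is sketched below for four simulated FIs. The correlated returns and distress thresholds are hypothetical placeholders for the estimated multivariate density, and the adverse-scenario conditioning is again omitted; the point is only that the probability-weighted losses over the partition of {B defaults} reproduce the SE loss of A given B.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 1_000_000

# Hypothetical correlated asset returns for A, B, C, D (stand-in for the estimated density).
corr = 0.5 * np.ones((4, 4)) + 0.5 * np.eye(4)
x = rng.multivariate_normal(np.zeros(4), corr, size=n)
dflt = x < -1.5                                 # default indicators (illustrative threshold)
v_a = 100 * (1 + 0.1 * x[:, 0])                 # asset value of A

b = dflt[:, 1]
loss_se_ab = v_a.mean() - v_a[b].mean()         # Loss_SE(A|B), adv conditioning omitted

# Equation (9): partition the event {B defaults} by the default/survival of C and D.
weighted_sum = 0.0
for c_def, d_def in product([True, False], repeat=2):
    event = b & (dflt[:, 2] == c_def) & (dflt[:, 3] == d_def)
    prob = event.sum() / b.sum()                # P(event | B)
    loss = v_a.mean() - v_a[event].mean()       # Loss_SE(A | event)
    weighted_sum += prob * loss
    print(f"C defaults={c_def}, D defaults={d_def}: P={prob:.3f}, loss={loss:.2f}")

print(f"Weighted sum = {weighted_sum:.2f}  vs  Loss_SE(A|B) = {loss_se_ab:.2f}")
```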

Note that the proposed framework can be expanded to estimate the magnitude of amplification of systemic risk conditional on the severity of financial imbalances, usually quantified by the level of leverage, the mispricing of risk, and liquidity and maturity mismatches; see Hiebert and others (2018). With this extension, SE losses would not only be a function of a given macrofinancial stress scenario (which describes shocks to macrofinancial variables); the magnitude of amplification of losses would also be affected by the underlying financial imbalances. That is, for a given shock, SE losses would be larger (smaller) if the severity of imbalances is larger (smaller); see Figure 4.16

Figure 4. Macrofinancial Imbalances and Systemic Risk

Source: Authors.

B. Generalizing the SE Loss Formula

We provide here a generalization of the SE loss formula. For a financial network made of a set of N firms denoted by $\{A_1, \ldots, A_N\}$, we write $D_j^i(k_1,\ldots,k_l)$ for the event in which $A_j$ and $\{A_{k_1},\ldots, A_{k_l}\}$ default while all the other FIs do not default (except $A_i$, for which no assumption is made). The SE loss is:

$$\begin{aligned} Loss_{SE}(A_i \mid A_j) &= E(V_{A_i}) - E(V_{A_i} \mid A_j) \\ &= E(V_{A_i}) - \sum_{\{k_1,\ldots,k_l\} \in \mathcal{P}(\{1,\ldots,N\} \setminus \{i,j\})} P\!\left(D_j^i(k_1,\ldots,k_l) \mid A_j\right) E\!\left(V_{A_i} \mid D_j^i(k_1,\ldots,k_l)\right) \\ &= \sum_{\{k_1,\ldots,k_l\} \in \mathcal{P}(\{1,\ldots,N\} \setminus \{i,j\})} P\!\left(D_j^i(k_1,\ldots,k_l) \mid A_j\right) \left[E(V_{A_i}) - E\!\left(V_{A_i} \mid D_j^i(k_1,\ldots,k_l)\right)\right] \\ &= \sum_{\{k_1,\ldots,k_l\} \in \mathcal{P}(\{1,\ldots,N\} \setminus \{i,j\})} P\!\left(D_j^i(k_1,\ldots,k_l) \mid A_j\right) Loss_{SE}\!\left(A_i \mid D_j^i(k_1,\ldots,k_l)\right) \qquad (10) \end{aligned}$$

The SE loss of Ai, if Aj defaults, can thus be decomposed as the sum of the products between:

  • The probability of default of any set of FIs, given $A_j$’s default: $P\!\left(D_j^i(k_1,\ldots,k_l) \mid A_j\right)$

  • The SE loss induced by the default of this set of FIs: $Loss_{SE}\!\left(A_i \mid D_j^i(k_1,\ldots,k_l)\right)$

The SE loss is the weighted average across all possible combinations of banks defaulting. Appendix II extends this formula for any conditioning event S (made of the default of k FIs and the non-default of N-k-1 FIs).

The contribution to SE losses of the different connecting entities can be assessed as follows. Given the linearity of the SE loss formula, it is straightforward to construct a measure of the contribution of the different events to the SE loss of Ai given Aj:

$Co\!\left(D_j^i(k_1,\ldots,k_l) \mid A_j\right) = \dfrac{P\!\left(D_j^i(k_1,\ldots,k_l) \mid A_j\right) Loss_{SE}\!\left(A_i \mid D_j^i(k_1,\ldots,k_l)\right)}{Loss_{SE}(A_i \mid A_j)} \qquad (11)$
  • The index $Co\!\left(D_j^i(k_1,\ldots,k_l) \mid A_j\right)$ is the contribution of the event $D_j^i(k_1,\ldots,k_l)$ to the conditional loss experienced by $A_i$ given the default of $A_j$.17 As a contribution, Co is between 0 and 1. For a policymaker, this measure is particularly informative since it identifies the “connecting” entities that contribute most to the SE losses from $A_j$ to $A_i$.

  • As already explained, this contribution can be large either because the connecting banks are likely to default or because the intensity of the losses experienced by $A_i$ is large given a default of the connecting FIs. That is, Co represents the intensity of losses for the set of defaults $D_j^i(k_1,\ldots,k_l)$, weighted by the probability of occurrence of such an event.

    The intensity of losses can be assessed using the ratio:
    $In\!\left(D_j^i(k_1,\ldots,k_l) \mid A_j\right) = \dfrac{Loss_{SE}\!\left(A_i \mid D_j^i(k_1,\ldots,k_l)\right)}{Loss_{SE}(A_i \mid A_j)} \qquad (12)$
  • This is the ratio of the SE loss assuming the realization of the event $D_j^i(k_1,\ldots,k_l)$ to the SE loss given $A_j$’s default. This ratio indicates the relative weight of the conditional losses induced by a given set of defaults over the SE loss induced by the default of bank $A_j$. It is larger the higher the losses due to the default event $D_j^i(k_1,\ldots,k_l)$. It lies between zero and infinity.
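The sketch below illustrates equations (10) through (12) by enumerating all defaulting sets $D_j^i(k_1,\ldots,k_l)$ for a small simulated system and computing the probability, intensity, and contribution of each set. As before, the simulated density and thresholds are hypothetical and the adverse-scenario conditioning is dropped; a useful sanity check is that the contributions sum to one.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, N = 1_000_000, 4
i, j = 0, 1                                     # assess the loss of Ai given Aj's default

corr = 0.5 * np.ones((N, N)) + 0.5 * np.eye(N)  # hypothetical dependence structure
x = rng.multivariate_normal(np.zeros(N), corr, size=n)
dflt = x < -1.5                                 # illustrative default indicators
v_i = 100 * (1 + 0.1 * x[:, i])                 # asset value of Ai
cond_j = dflt[:, j]

loss_se_ij = v_i.mean() - v_i[cond_j].mean()    # Loss_SE(Ai|Aj), adv conditioning omitted
others = [k for k in range(N) if k not in (i, j)]

contrib_sum = 0.0
for size in range(len(others) + 1):
    for ks in combinations(others, size):
        # Event D_j^i(k1..kl): Aj and the FIs in ks default, the remaining others survive.
        event = cond_j.copy()
        for k in others:
            event &= dflt[:, k] if k in ks else ~dflt[:, k]
        pr = event.sum() / cond_j.sum()                      # weight in equation (10)
        loss = v_i.mean() - v_i[event].mean()                # Loss_SE(Ai | D)
        intensity = loss / loss_se_ij                        # equation (12)
        contribution = pr * intensity                        # equation (11)
        contrib_sum += contribution
        print(f"defaulting set {ks}: Pr={pr:.3f}, In={intensity:.2f}, Co={contribution:.2f}")

print(f"Sum of contributions = {contrib_sum:.3f} (should be close to 1)")
```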

V. Quantifying SE Losses

A key distinction of our approach is the representation of the financial system as a portfolio of financial entities. This feature allows us to define SE losses as the losses suffered by specific entities conditional on the distress of other entities in the system. This quantification requires the estimation of entities’ conditional and unconditional expected asset values, which entails the use of a model for asset valuation (presented in Section V.A) and the estimation of multivariate densities (presented in Section V.B).

A. Asset Valuation Model

The asset valuation model is based on the structural approach to corporate default. The basic premise of the structural approach of Merton (1974) is that a firm’s underlying asset value evolves stochastically over time, following a log-normal process, and default is triggered by a drop in the firm’s asset value below a prespecified barrier, henceforth called the default threshold, which is modeled as a function of the firm’s leverage structure.18 For our framework, Merton’s univariate approach is extended to a multivariate case. For presentation purposes, we develop the model for a portfolio of two FIs; in Appendix I, we generalize the model to portfolios of N FIs. We assume that a multivariate distribution of the asset returns of the FIs A and B, p(x,y), has already been estimated and that we know the default thresholds $X_x^d$ and $X_y^d$ for the asset returns x and y. The expected valuations are presented here without conditioning specifically on the adverse macroeconomic scenario because the formulas are valid whether this conditioning is included or not; when conditioning on the adverse macroeconomic scenario, the only required change to the formulas is to replace p(x,y) by p(x,y|adv).

Unconditional valuation

The (unconditional) expected value of the assets of firm A is given by:

$E_0(V_{A,t}) = \int\!\!\int V_{A,t}(x,y)\, p(x,y)\, dx\, dy \qquad (13)$

$V_{A,t}(x, y)$ is unknown but can be calculated as the sum of the value of debt $D_t$ and the value of equity (for which we know the initial value $Eq_0$ as well as the growth rate $-x$):

$V_{A,t}(x,y) = Eq_t(x,y) + D_t(x,y) \qquad (14)$

The value of bank A’s debt (discounted appropriately given a book value $D_T$ at time T) depends on whether bank A defaults or not:

  • in the zone where bank A defaults ($x > X_x^d$): $D_t(x,y) = (RR)\, D_T\, e^{-r(T-t)}$

  • in the zone where bank A does not default ($x < X_x^d$): $D_t(x,y) = D_T\, e^{-r(T-t)}$

where r is the discount rate and RR is the recovery rate (that is, one minus the loss given default).

Splitting between debt and equity, the expected value of the firm is thus:

$E_0(V_{A,t}) = \int\!\!\int Eq_0\, e^{-x}\, p(x,y)\, dx\, dy + P(A)\,(RR)\, D_T\, e^{-r(T-t)} + P(\bar{A})\, D_T\, e^{-r(T-t)} \qquad (15)$

where P(A), the (marginal) probability of default of A, can be estimated by integrating the density p(x, y) over the zone of default of A, and $P(\bar{A}) = 1 - P(A)$.
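To fix ideas, the sketch below evaluates equation (15) by brute-force grid integration, using a bivariate normal as a placeholder for the estimated density p(x, y). All balance-sheet inputs (the initial equity, book value of debt, discount rate, recovery rate, and default threshold) are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical inputs (illustrative values only).
Eq0, D_T = 20.0, 80.0            # initial equity and book value of debt at T
r, T_t, RR = 0.02, 1.0, 0.40     # discount rate, horizon T - t, recovery rate
X_xd = 1.8                       # default threshold for firm A (x above it = distress)

# Stand-in for the estimated multivariate density p(x, y).
p = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 1.0]])

grid = np.linspace(-6, 6, 601)
dx = grid[1] - grid[0]
X, Y = np.meshgrid(grid, grid, indexing="ij")
pdf = p.pdf(np.dstack([X, Y]))

# Equation (15): equity term plus discounted debt weighted by default/survival probabilities.
P_A = (pdf * (X > X_xd)).sum() * dx * dx               # marginal probability of default of A
equity_term = (Eq0 * np.exp(-X) * pdf).sum() * dx * dx
debt_term = (P_A * RR + (1.0 - P_A)) * D_T * np.exp(-r * T_t)

print(f"P(A) = {P_A:.4f},  E[V_A] = {equity_term + debt_term:.2f}")
```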

Conditional valuation

Given the multivariate distribution p(x, y), we can also calculate the conditional valuation. In this case, this is the expected value of the assets of A given the default of B (denoted by the indicator function $\mathbb{1}_B$):

$E_0(V_{A,t} \mid B) = \dfrac{1}{P(B)}\, E(V_{A,t}\, \mathbb{1}_B) = \dfrac{1}{P(B)} \int\!\!\int V_{A,t}(x,y)\, p(x,y)\, \mathbb{1}(y > X_y^d)\, dx\, dy \qquad (16)$

where $X_y^d$ is the threshold value (for the asset return of B) above which B is considered in default.

Splitting again the value of the firm’s assets between the value of debt and equity, today’s total asset value of A conditional on the default of B is:

$E_0(V_{A,t} \mid B) = \dfrac{1}{P(B)} \int\!\!\int Eq_t(x,y)\, p(x,y)\, \mathbb{1}(y > X_y^d)\, dx\, dy + \dfrac{1}{P(B)} \int\!\!\int D_t(x,y)\, p(x,y)\, \mathbb{1}(y > X_y^d)\, dx\, dy \qquad (17)$

Thus, we have:

$E_0(V_{A,t} \mid B) = \dfrac{1}{P(B)} \int\!\!\int Eq_0\, e^{-x}\, p(x,y)\, \mathbb{1}(y > X_y^d)\, dx\, dy + \dfrac{P(A \cap B)}{P(B)}\,(RR)\, D_T\, e^{-r(T-t)} + \dfrac{P(\bar{A} \cap B)}{P(B)}\, D_T\, e^{-r(T-t)} \qquad (18)$

As discussed in Section III, the difference between the conditional and the unconditional valuation allows us to quantify the systemic risk amplification loss. In this case, the SE losses for A under event S are represented by:

$Loss_{SE}(A \mid S) = E(V_A \mid adv) - E(V_A \mid adv \cap S) \qquad (19)$

Appendix I provides the general formulas when the event S is any combination of k FIs defaulting and N-k-1 FIs not defaulting.
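The conditional counterpart, equation (18), can be evaluated with the same grid machinery by restricting the integration to B’s distress region, as in the sketch below. Again, the bivariate normal density, thresholds, and balance-sheet numbers are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical inputs (same conventions as the unconditional sketch above).
Eq0, D_T = 20.0, 80.0
r, T_t, RR = 0.02, 1.0, 0.40
X_xd, X_yd = 1.8, 1.8                 # default thresholds for A (in x) and B (in y)

p = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 1.0]])

grid = np.linspace(-6, 6, 601)
dx = grid[1] - grid[0]
X, Y = np.meshgrid(grid, grid, indexing="ij")
pdf = p.pdf(np.dstack([X, Y]))
b_def = Y > X_yd                      # indicator 1(y > X_y^d): B in default

P_B = (pdf * b_def).sum() * dx * dx
P_AB = (pdf * b_def * (X > X_xd)).sum() * dx * dx      # joint default of A and B

# Equation (18): expected assets of A conditional on B's default.
equity_term = (Eq0 * np.exp(-X) * pdf * b_def).sum() * dx * dx / P_B
debt_term = (P_AB / P_B) * RR * D_T * np.exp(-r * T_t) \
          + ((P_B - P_AB) / P_B) * D_T * np.exp(-r * T_t)

print(f"E[V_A | B defaults] = {equity_term + debt_term:.2f}")
```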

B. The CIMDO Method

The main difficulty in modelling the asset valuation of FIs is the lack of data, especially for tail events. And it is precisely in distress situations—that is, when FIs’ implied asset values fall simultaneously to significantly low levels (represented in the “tails” of the multivariate density)—that we are most interested in assessing expected losses adequately. Usually, the probabilities of distress (PoD) of individual institutions are the only available data. Information on the dependence structure defining joint distress in the system is not observable.

Therefore, a key challenge in extending the structural valuation model to a system of firms is the choice of the multivariate density characterizing the system’s implied asset values. Although Merton’s univariate approach, which relies on a Gaussian distribution, can readily be extended to consider the multivariate distribution of asset valuations in a portfolio, the normality assumption and the fixed dependence structure defining interconnectedness across entities in a portfolio, on which standard models rely, do not appear to be adequate modeling choices. Because financial assets’ returns exhibit heavier tails than would be predicted by the normal distribution, and because interconnectedness structures change across time, different parametric statistical models have been proposed to model the multivariate density. However, when data for calibration are constrained, as is the case in modeling systemic risk, such models are difficult to calibrate and are subject to significant model error.19

Therefore, rather than imposing convenient distributional assumptions, we make use of the CIMDO method, as an alternative route to recover multivariate distributions. The CIMDO method (Segoviano 2006, Segoviano and Goodhart 2009, and Segoviano and Espinoza 2017), based on the Kullback (1959) cross-entropy approach, reverses the process of modeling data information. Instead of assuming parametric probabilities to characterize the information contained in the data, the entropy approach uses the data information to infer values for the unknown probability density. The CIMDO method incorporates the limited observed information on the entities’ equity returns and individual PoDs to infer the (unobserved) dependence structure of the system (embedded in the multivariate density) that best captures comovement when financial entities simultaneously fall in distress.

In statistical terms, CIMDO recovers a posterior multivariate distribution—the CIMDO density—using an optimization procedure. This implies that a prior density function (calibrated as a multivariate t-distribution using FIs’ equity returns data) is updated with empirical information via a set of constraints. In this implementation, the empirical estimates of the PoDs of individual FIs act as the constraints, and the derived CIMDO density is the posterior density that is the closest to the prior distribution and consistent with these constraints. Figure 5 shows the process of obtaining a CIMDO density p from a prior density q, under the objective that p remains as close to q as possible, subject to the constraints that p is a distribution that sums to 1 (additivity constraint) and that p is consistent with the observed PoD data (marginal constraints). Appendix III presents a technical summary of the CIMDO method. Further technical details are presented in Segoviano and Espinoza (2017).
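The sketch below illustrates the mechanics of the CIMDO update on a discrete grid, under simplifying assumptions: a bivariate Student-t prior with hypothetical parameters, hypothetical PoDs and thresholds, and the known exponential-tilting form of the cross-entropy solution (the posterior equals the prior reweighted by region-specific factors, with the Lagrange multipliers chosen so that the additivity and PoD constraints hold). It is a didactic stand-in, not the authors’ production implementation.

```python
import numpy as np
from scipy.stats import multivariate_t
from scipy.optimize import root

# Observed (hypothetical) probabilities of distress and distress thresholds.
PoD_x, PoD_y = 0.08, 0.05
X_xd, X_yd = 1.5, 1.7

# Prior density q: a bivariate Student-t, as if calibrated on equity returns (illustrative).
q = multivariate_t(loc=[0.0, 0.0], shape=[[1.0, 0.4], [0.4, 1.0]], df=5)

grid = np.linspace(-8, 8, 801)
dx = grid[1] - grid[0]
X, Y = np.meshgrid(grid, grid, indexing="ij")
q_cells = q.pdf(np.dstack([X, Y])) * dx * dx      # prior probability mass per grid cell
q_cells /= q_cells.sum()                          # absorb truncation error

chi_x = (X >= X_xd).astype(float)                 # distress-region indicator for x
chi_y = (Y >= X_yd).astype(float)                 # distress-region indicator for y

def constraint_gaps(params):
    # Cross-entropy solution: p = q * exp(-mu - l1*chi_x - l2*chi_y), constants absorbed in mu.
    mu, l1, l2 = params
    p = q_cells * np.exp(-mu - l1 * chi_x - l2 * chi_y)
    return [p.sum() - 1.0,                        # additivity constraint
            (p * chi_x).sum() - PoD_x,            # marginal (PoD) constraint for x
            (p * chi_y).sum() - PoD_y]            # marginal (PoD) constraint for y

sol = root(constraint_gaps, x0=[0.0, 0.0, 0.0])
mu, l1, l2 = sol.x
p_cells = q_cells * np.exp(-mu - l1 * chi_x - l2 * chi_y)   # the CIMDO posterior masses

print("posterior joint probability of distress:", (p_cells * chi_x * chi_y).sum())
```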

CIMDO provides important benefits relative to parametric approaches in terms of implementation feasibility and estimation robustness. CIMDO allows analysts to infer the (unobserved) dependence structure of the system (embedded in the multivariate density) that is consistent with the (observed) individual PoDs at specific times. Thus, CIMDO seems to reduce the risk of density misspecification because it recovers densities that are consistent with empirical PoD observations.20 It also infers interconnectedness structures that are updated as empirical PoDs change. Therefore, the method enables analysts to incorporate in a timely manner updates in systems’ interconnectedness structures that can reflect nonlinear increases in periods of high volatility.21

Figure 5. Consistent Information Multivariate Density (CIMDO)

Source: Authors. Note: In a system made of firms x and y, the probability of distress of firm x is the probability that the asset value for x falls in a “tail zone” of distress (the interval $[X_x^d, \infty)$), independently of what happens to firm y. Thus, the constraint that the multivariate distribution of asset returns is consistent with an observed $PoD_x$ is a constraint on the tail of the marginal distribution. If information on the PoD of firm y is also available, a similar constraint is added to the optimization problem of finding p, the posterior distribution closest to q that is also consistent with the observed data $PoD_x$ and $PoD_y$.

Moreover, since CIMDO can be estimated using readily available market information, it incorporates market views of risk spillovers due to direct or indirect contagion across financial entities. However, PoDs can also be estimated from supervisory information. In such cases, SE losses would embed the impact of indirect interconnectedness via exposures to common risk factors.22 As is the case with other empirical methods, CIMDO does not need highly detailed and granular supervisory information, which is not available in many countries, nor to arm’s-length institutions.

VI. Application to the U.S. Banking System

In this section, we apply the proposed framework to estimate SE losses in the U.S. financial system at the time of Lehman’s default (we assume the losses from a microprudential stress test are known). We present a case study on a simplified system composed of four banks during 2008: Citibank (C), Lehman Brothers (LB), Wells Fargo (WF), and Morgan Stanley (MS). We compute the expected losses for C, WF, and MS assuming an LB default. This calculation is what is needed to move from a microprudential stress test that would show a high probability of LB defaulting to a macroprudential stress test estimating the losses for the banking system were an LB default to occur. Before presenting our results, we discuss crucial aspects related to important inputs for estimation.

A. Key Features Related to Inputs

Important aspects related to the inputs used to estimate SE losses should be taken into consideration. We discuss below important features of the probabilities of distress of the different entities and sectors under analysis, and of the aggregation of sectors when estimations of SE losses involve investment funds.

Probabilities of Distress

The quantification of SE losses is based on the notion of firms’ distress. Probabilities of distress can be estimated using different models and types of data (market-based and supervisory information). Hence, the framework can be easily adapted to cater to a very high degree of institutional granularity and data availability in different jurisdictions. The meaning of distress depends on the type of entity and data employed. Events of distress usually include default; however, distress events can be broader than default and comprise, among others, debt restructuring, government intervention, recapitalization, and credit agencies’ downgrades. Nevertheless, an observed common feature of these events is that financial entities’ asset values decrease significantly on the realization of distress.

Probabilities of distress for banks and insurance companies. These can be estimated using market-based information and supervisory information.

  • Market-based information: The most common models are the following:

    Merton-type. In this case, distress is equivalent to default in the sense of Merton’s (1974) model, which focuses on the capability of a bank to service its debt obligations, i.e., credit risk.

    Credit Default Swap (CDS) spreads and bond spreads. PoDs can be estimated using credit default swap spreads. In these cases, distress is defined by the event that triggers the payment of a CDS. The no-arbitrage theorem can also be used to deduce PoDs from bond spreads, since the yield of a bond that is subject to credit risk is a function of the probability of default (a minimal sketch of this mapping appears after this list). Espinoza and Segoviano (2011) provide a method for computing the market price of risk and converting risk-neutral PoDs estimated from market-based indicators into subjective probabilities.

  • Supervisory information: PoDs can be constructed from supervisory information (which in several countries is publicly available) when market-based data are not available or not adequate. For example, in countries where subsidiaries of foreign banks operate, it might not be possible to obtain market-based indicators for the subsidiary (such indicators might exist for the consolidated bank, but this is not adequate). In these cases, Segoviano and Padilla (2006) show how to simulate banks’ portfolio loss distributions and generate estimates of banks’ PoDs, which indicate the probability that losses experienced by a bank portfolio would violate a supervisory-defined capital buffer.23
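For the CDS route mentioned above, a common back-of-the-envelope mapping treats the spread as the product of a constant hazard rate and the loss given default. The helper below is a hypothetical sketch of that approximation only; the spreads and recovery rate are illustrative, and real applications (including the risk adjustment in Espinoza and Segoviano, 2011) work with the full term structure and the market price of risk.

```python
import numpy as np

def pod_from_cds(spread_bps: float, recovery_rate: float = 0.40,
                 horizon_years: float = 1.0) -> float:
    """Risk-neutral PoD from a CDS spread under a constant-hazard-rate approximation.

    spread ~ hazard * (1 - recovery)  =>  hazard = spread / (1 - recovery),
    and PoD(T) = 1 - exp(-hazard * T).
    """
    hazard = (spread_bps / 1e4) / (1.0 - recovery_rate)
    return 1.0 - np.exp(-hazard * horizon_years)

# Hypothetical spreads in basis points (not actual September 2008 quotes).
for name, spread in [("Bank 1", 250.0), ("Bank 2", 700.0)]:
    print(name, round(pod_from_cds(spread), 4))
```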

Probabilities of distress for investment funds. Cortes and others (2017) define the probability of distress for investment funds as the probability of events that would require funds to liquidate assets to meet redemption demands. Thus, when funds experience strong outflows, they are likely to sell their assets to meet such demands, transmitting shocks to other financial entities in a system via the direct exposure and asset liquidation channels. The authors propose a Value at Risk approach to estimate PoDs for investment funds.

Systemic risk amplification might be captured differently in SE losses depending on input data. When based on market-based indicators, SE loss quantification embeds markets’ perceptions of financial systems’ interconnectedness, which include indirect interconnectedness, usually caused by exposures to common risk factors and market price channels. When based on publicly available supervisory data that are non-market based, SE loss quantification might still capture indirect interconnectedness due to exposures to common risk factors; however, market-price channels might be omitted.

In the application to the U.S. banking system, we used probabilities of default obtained from CDS spreads on the day before Lehman Brothers defaulted.

B. Results

To illustrate the implementation of the approach, we estimate SE losses in the U.S. banking system due to the default of Lehman Brothers.

  • Table 1 presents our estimates of the SE losses expected to impact Citi (C), Wells Fargo (WFC), and Morgan Stanley (MS) because of the default of Lehman Brothers (LB) in September 2008.24 SE losses for C, WFC, and MS given a Lehman default were around 6.7, 5.6, and 10.0 percent of each bank’s total assets, respectively.

  • Table 1 also presents the capital injections made by the US government to major US banks. To define the amount of capital injections for large banks, US authorities estimated potential losses based on proprietary frameworks, detailed supervisory information, and expert judgment. Anecdotal evidence (Geithner 2014) indicates that injection amounts for some of the large banks (e.g., Citi) went through a thorough analysis and discussion, while for other banks, injection amounts were determined as a proportion of the banks’ balance sheets (e.g., Morgan Stanley).

  • Results of the approach were consistent under different checks (see discussion below). While SE losses were of a similar order of magnitude to capital injections, SE loss estimates were larger than actual capital injections. This might be because SE loss estimates represented markets’ expectations of losses right after Lehman’s default (September 2008), weeks before recapitalization took effect (November 2008). A possible interpretation is that the U.S. government recapitalization program was effective in containing default cascades and systemic risk losses (Geithner 2014); hence, the capital injections that were actually needed were smaller than the market’s initial expectations of losses in the aftermath of Lehman’s default.

Table 1. TARP Capital Injections and SE Losses given the Default of Lehman Brothers

Note: * In millions of USD; ** in percent. Source: U.S. Department of the Treasury.

Decomposition of SE losses

As explained in Section IV.A, the framework allows the decomposition of SE losses into the probability (Pr), the intensity (In), and the contribution (Co) of conditional losses due to specific defaulting sets. Table 2 shows the decomposition of SE losses expected on Citi conditional on Lehman Brothers defaulting.

The greatest contribution to SE losses on C given the LB default comes from the conditional loss of the defaulting set that includes a joint default of LB and MS but not WFC (48 percent). The probability of this defaulting set is approximately 28 percent and its loss intensity is 1.76. This implies that the connection between LB and MS was a significant factor in explaining SE losses on C given the LB default; hence, it was relevant to follow developments at MS as LB defaulted. While the defaulting set including LB and WFC but not MS shows a larger loss intensity (2.02), the probability of occurrence of this event is around 1 percent; therefore, the contribution of this defaulting set is the lowest (3 percent) among all the defaulting sets analyzed. These results highlight the usefulness of the SE decomposition. The quantification of Co helps identify events involving the default of other banks highly interconnected to LB that can contribute significantly to the SE losses given the default of LB.

Table 2. Decomposition of SE Losses on Citibank given the Default of Lehman Brothers

Note: (Pr) in probability; (Co) in percent of conditional loss; (In) in index units (from 0 to infinity). Non-defaulting entities are underlined and bold. Source: Authors’ calculations.

Results are consistent under alternative checks.

  • Consistency of SE loss decomposition. From Table 2, it is easy to check that Co is equal to the probability of the event, multiplied by the intensity of losses, and that the sum of Co is equal to 100 percent.

  • Increasing conditional losses as defaulting sets expand. Table 3 presents losses experienced by Citi conditioning on different subsets of entities defaulting. As the number of entities defaulting in the different subsets increases, conditional losses increase.

    Table 3. Conditional Losses under Different Defaulting Sets, in Millions of USD

    Note: Non-defaulting entities are underlined and bold. Source: Authors’ calculations.

  • Expected Asset Values. The estimation of the unconditional expected value of the firms under analysis (equation (15)) is presented in Table 4a. We see that these values are consistent with the book value of assets reported for these entities in Table 4.

Table 4. Asset Values

VII. Implications for the Calibration of Capital Buffers

The Basel III framework includes multiple layers of capital buffers to ensure the resilience of financial systems.25 Banks are required to meet the minimum total capital ratio of 8 percent of risk-weighted assets (RWA) at all times. Additionally, banks are required to maintain the following capital buffers:

  • A capital conservation buffer (CCoB). This buffer requires banks to hold an additional 2.5 percent of RWA on top of the minimum capital requirement outside periods of stress. The buffer, however, can be drawn down in stress periods.26

  • A countercyclical capital buffer (CCyB). This buffer aims to enhance the resilience of the financial system to systemic risks emanating from the financial cycle, while also reducing the procyclicality of bank lending. The CCyB can vary between zero and 2.5 percent of RWA and should build up extra capital in boom times to absorb potential losses in economic downturns. The CCyB is based on the prevalent state of the macro-financial environment. Ideally, authorities should increase the CCyB during a lending boom and reduce capital requirements during a contraction.

  • A systemically important bank (SIB) capital surcharge. The SIB capital surcharge was introduced to protect the system from the structural dimension of systemic risk, therefore requiring an additional buffer commensurate to a bank’s contribution to systemic risk.

The Basel Committee on Banking Supervision (BCBS) has proposed indicator-based approaches along with judgment to calibrate capital buffers.

  • CCyB. The BCBS provides a reference guide based on the aggregate private sector credit-to-GDP gap (Basel Committee on Banking Supervision, 2010).27

  • SIB capital surcharges. The BCBS has published a methodology for assessing and identifying global systemically important banks (G-SIBs) (Basel Committee on Banking Supervision, 2013) and proposed a similar framework for domestic systemically important banks (Basel Committee on Banking Supervision, 2012). The identification of SIBs uses indicators that capture four dimensions of systemic importance: size, interconnectedness, level of substitutability, and complexity. For G-SIBs, there is a fifth indicator: global scope of activities. Banks are ranked by their systemic importance based on the indicators and supervisory judgment and placed in five buckets with a gradual scale of surcharges ranging from 1 to 3.5 percent.

As indicated above, the CCyB and SIB are intended to protect banks against systemic risk; hence, macroprudential stress tests represent a useful tool that can be used to calibrate these buffers.

  • In the United Kingdom, authorities intend to set capital requirements for the system wide CCyB and CCoB, as well as for the bank-specific Prudential Regulatory Authority (PRA) buffer based in part on stress test results (Bank of England 2015). The specific sizes of the CCoB and the CCyB are set by the Financial Policy Committee and the size of the PRA buffer is set by the PRA, both of which are within the BoE.

  • In the United States, one idea is to introduce a bank-specific stress capital buffer (SCB) that can replace the 2.5 percent CCoB of the Basel III framework. The SCB would be set at least as high as the CCoB and would be equivalent to the maximum decline of a bank’s tier 1 capital ratio under a severe adverse scenario (Tarullo, 2016).

The difficulty, however, is in estimating SE losses. Hence, estimates from the proposed framework can be a useful tool to calibrate capital buffers. An example is the calibration of the CCyB. As described in Section IV, Hiebert and others (2018) augment the framework to quantify the magnitude of SE amplification as a function of the severity of financial imbalances. The authors map levels of financial imbalances to different stages of financial cycles. This mapping allows the estimation of SE losses conditional on the large financial imbalances observed at cycle peaks. This quantification would prove useful to ensure that the CCyB is set in a manner that allows banks to withstand such losses; see Figure 6.

Figure 6. Calibration of CCyB

Source: Authors.

VIII. Conclusion

Authorities have, over the past few years, prioritized the development of tests that attempt to quantify losses from SE. However, the modeling of losses from SE remains challenging. Amplification mechanisms are diverse and complex and can vary in structure and magnitude at different points in time. Data are usually both scarce and deficient, and models constrained by data availability are often subject to model error. Given the complexity of modeling and implementing stress tests that capture SE, we propose a framework aimed at integrating diverse types of data and approaches to maximize the information content of heterogeneous data sources and minimize potential model error.

The encompassing method is a pragmatic way to develop robust and implementable macroprudential stress tests. These stress tests should permit the quantification of SE losses, even when highly granular data to model amplification mechanisms is not available. Specifically, the proposed framework offers important benefits:

  • It combines the use of microprudential stress tests that are already implemented with the proposed reduced-form approach to estimate SE losses; therefore, it is possible to leverage models and expertise that already exist in many countries.

  • SE loss estimation can be performed with publicly available data. This is of high relevance given the data limitations faced by the IMF and some authorities. Moreover, when based on market-based indicators, SE loss quantification embeds markets’ perceptions of financial systems’ interconnectedness, which include indirect interconnectedness, usually caused by exposures to common risk factors and market price channels. These estimates incorporate nonlinear changes in interconnectedness structures consistent with market perceptions across different stages of macrofinancial cycles.

  • The framework is reduced-form. It does not require one to explicitly model agents’ behavioral reactions, which are difficult to incorporate comprehensively and complex to calibrate properly.

  • It is a stochastic framework. This permits an estimation of the firms’ asset values conditional on different states of nature, including specific valuations of other firms in the financial system. The stochastic nature of the model allows us to quantify the probabilities of these events happening and the intensity of losses under these events.

  • The multivariate dimension facilitates the integration of nonbank financial intermediaries into the analysis of systemic risk; thus, interactions between banks and nonbanks can be considered when quantifying systemic risk amplification losses.

  • It is cost-efficient and robust. The model is simple and relatively light on data requirements.

To conclude, we mention two extensions of the framework that can be of significant value to policymakers.

First, in Section VI, we note that Hiebert and others (2018) propose to augment the framework to estimate the magnitude of amplification of systemic risk conditional on the severity of financial imbalances. That is, for a given shock, SE losses would be larger (smaller) if the severity of imbalances is larger (smaller); we briefly discuss in Section VII how this extension can be useful to calibrate the CCyB.

Second, Espinoza and others (2018) combine a general equilibrium model with the reduced form approach presented in this paper to develop a systemic risk framework, which incorporates systemic risk endogeneity and amplification mechanisms through macroeconomic and systemic risk interactions. The authors use measurements of SE losses to calibrate the parameters of the theoretical model that incorporates various systemic risk amplification channels, including interbank lending, common asset exposures and a “Minsky effect.” Such calibrations appear to be very useful to incorporate the non-linear effects (e.g., decrease in prices, increase in probabilities of distress) and changes in behavioral assumptions, especially in times of distress, that can lead to systemic risk materializing. Importantly, the proposed framework is easily implementable with data that is publicly available in numerous jurisdictions.

References

  • Acharya, V. V., L. H. Pedersen, T. Philippon, and M. P. Richardson. 2017. "Measuring Systemic Risk." Review of Financial Studies 30 (1): 2–47.

  • Adrian, T., and M. K. Brunnermeier. 2016. "CoVaR." The American Economic Review 106 (7): 1705–41.

  • Adrian, T., and H. S. Shin. 2014. "Procyclical Leverage and Value-at-Risk." Review of Financial Studies 27 (2): 373–403.

  • Aikman, David, Piergiorgio Alessandri, Bruno Eklund, Prasanna Gai, Sujit Kapadia, Elizabeth Martin, Nada Mora, Gabriel Sterne, and Matthew Willison. 2009. "Funding Liquidity Risk in a Quantitative Model of Systemic Stability." Working Paper 555, Central Bank of Chile, Santiago, Chile.

  • Alessandri, Piergiorgio, P. Gai, S. Kapadia, N. Mora, and C. Puhr. 2009. "A Framework for Quantifying Systemic Stability." International Journal of Central Banking 5 (3): 47–81.

  • Allen, F., and D. Gale. 2000. "Financial Contagion." Journal of Political Economy 108 (1): 1–33.

  • Anderson, R., C. Baba, J. Danielsson, U. Das, H. Kang, and M. Segoviano. 2017 (forthcoming). "Macroprudential Stress Test and Policies: In Search of an Implementable and Robust Framework." IMF Working Paper, International Monetary Fund, Washington, DC.

  • Bernanke, B. 2011. "Implementing a Macroprudential Approach to Supervision and Regulation." Speech at the 47th Annual Conference on Bank Structure and Competition, Chicago, Illinois, May 5. https://www.federalreserve.gov/newsevents/speech/bernanke20110505a.htm.

  • Bernanke, B., M. Gertler, and S. Gilchrist. 1999. "The Financial Accelerator in a Quantitative Business Cycle Framework." In Handbook of Macroeconomics, vol. 1C, edited by John B. Taylor and Michael Woodford, 1341–93. Amsterdam: Elsevier.

  • Bhattacharya, S., and D. Gale. 1987. "Preference Shocks, Liquidity, and Central Bank Policy." In New Approaches to Monetary Economics: Proceedings of the Second International Symposium in Economic Theory and Econometrics, edited by W. Barnett and K. Singleton, chapter 4. International Symposia in Economic Theory and Econometrics series, no. 2. Cambridge, U.K.: Cambridge University Press.

  • Bisias, D., M. Flood, A. W. Lo, and S. Valavanis. 2012. "A Survey of Systemic Risk Analytics." Annual Review of Financial Economics 4 (1): 255–96.

  • Brunnermeier, M. K., and L. H. Pedersen. 2009. "Market Liquidity and Funding Liquidity." Review of Financial Studies 22 (6): 2201–38.

  • Chan-Lau, J. A., M. Espinosa, K. Giesecke, and J. A. Solé. 2009. "Assessing the Systemic Implications of Financial Linkages." In Global Financial Stability Report: Responding to the Financial Crisis and Measuring Systemic Risks, April 2009, chapter 2. Washington, DC: International Monetary Fund.

  • Cifuentes, R., G. Ferrucci, and H. S. Shin. 2005. "Liquidity Risk and Contagion." Journal of the European Economic Association 3 (2–3): 556–66.

  • Cont, R., and E. Schaaning. 2016. "Fire Sales, Indirect Contagion and Systemic Stress-Testing." Unpublished, Imperial College London, London, U.K.

  • Cortes, Fabio, Peter Lindner, Sheheryar Malik, and Miguel Angel Segoviano. 2018. "A Comprehensive Multi-Sector Framework for Surveillance of Systemic Risk and Interconnectedness (SyRIN)." IMF Working Paper 18/14, International Monetary Fund, Washington, DC.

  • Crosbie, P., and J. Bohn. 2003. "Modeling Default Risk." Moody's KMV, New York, New York.

  • Demekas, D. 2015. "Designing Effective Macroprudential Stress Tests: Progress So Far and the Way Forward." IMF Working Paper 15/146, International Monetary Fund, Washington, DC.

  • Dent, Kieran, Ben Westwood, and Miguel Segoviano. 2016. "Stress Testing of Banks: An Introduction." Bank of England Quarterly Bulletin 56 (3).

  • Diebold, F. X., T. A. Gunther, and A. S. Tay. 1998. "Evaluating Density Forecasts with Applications to Financial Risk Management." International Economic Review 39 (4): 863–83.

  • Diebold, F. X., and K. Yilmaz. 2009. "Measuring Financial Asset Return and Volatility Spillovers, with Application to Global Equity Markets." The Economic Journal 119 (534): 158–71.

  • Eisenberg, Larry, and Thomas H. Noe. 2001. "Systemic Risk in Financial Systems." Management Science 47 (2): 236–49.

  • Elsinger, Helmut, Alfred Lehar, and Martin Summer. 2013. "Network Models and Systemic Risk Assessment." In Handbook on Systemic Risk, vol. 1, edited by Jean-Pierre Fouque and Joseph A. Langsam, 287–305. Cambridge, U.K.: Cambridge University Press.

  • Espinoza, Raphael, and Miguel A. Segoviano. 2011. "Probabilities of Default and the Market Price of Risk in a Distressed Economy." IMF Working Paper WP/11/75, International Monetary Fund, Washington, DC.

  • Espinoza, Raphael, Miguel A. Segoviano, and Ji Yan. 2018. "Systemic Risk Endogeneity: Economic Theory and Reduced-Form Measurement, Together for Better." IMF Working Paper, forthcoming.

  • Freixas, X., B. M. Parigi, and J. C. Rochet. 2000. "Systemic Risk, Interbank Relations, and Liquidity Provision by the Central Bank." Journal of Money, Credit and Banking 32 (3): 611–38.

  • Gauthier, C., A. Lehar, and M. Souissi. 2012. "Macroprudential Capital Requirements and Systemic Risk." Journal of Financial Intermediation 21 (4): 594–618.

  • Geithner, T. 2014. Reflections on Financial Crises. New York: Broadway Books.

  • Hiebert, Paul, Yves Schueler, Miguel Segoviano, and Yunhui Zhao. 2018. "Systemic Risk Amplification Magnitude: Conditioning on Financial Imbalances." Forthcoming Discussion Paper, Systemic Risk Centre, London School of Economics.

  • International Monetary Fund (IMF). 2014. "Review of the Financial Sector Assessment Program: Further Adaptation to the Post Crisis Era." International Monetary Fund, Washington, DC.

  • International Monetary Fund (IMF). 2016. "The IMF Framework for Banks' Balance Sheet Stress Test." Guidance Note, FS Division, Monetary and Capital Markets Department, International Monetary Fund, Washington, DC.

  • Jacklin, C. J., and S. Bhattacharya. 1988. "Distinguishing Panics and Information-Based Bank Runs: Welfare and Policy Implications." The Journal of Political Economy 96 (3): 568–92.

  • Kaszowska, J., and J. L. Santos. 2014. "The Role of Risk Perception in the Systemic Risk Generation and Amplification: Agent-Based Approach." ACRN Journal of Finance and Risk Perspectives 3 (4): 146–70.

  • Khandani, A. E., and A. W. Lo. 2011. "What Happened to the Quants in August 2007? Evidence from Factors and Transactions Data." Journal of Financial Markets 14 (1): 1–46.

  • Kiyotaki, N., and J. Moore. 1997. "Credit Chains." Journal of Political Economy 105 (2): 211–48.

  • Koyluoglu, H. U., T. Wilson, and M. Yague. 2003. "The Eternal Challenge of Understanding Imperfections." Mercer Oliver Wyman.

  • Krishnamurthy, A. 2010. "Amplification Mechanisms in Liquidity Crises." American Economic Journal: Macroeconomics 2 (3): 1–30.

  • Kullback, S. 1959. Information Theory and Statistics. New York: J. Wiley.

  • Lorenzoni, G. 2008. "Inefficient Credit Booms." Review of Economic Studies 75 (3): 809–33.

  • Merton, R. C. 1974. "On the Pricing of Corporate Debt: The Risk Structure of Interest Rates." The Journal of Finance 29 (2): 449–70.

  • Segoviano, M. A. 2006. "Consistent Information Multivariate Density Optimizing Methodology." London School of Economics, London, U.K.

  • Segoviano, M. A., and R. Espinoza. 2017. "Consistent Measures of Systemic Risk." LSE Systemic Risk Centre Discussion Paper 74.

  • Segoviano, M. A., and C. A. E. Goodhart. 2009. "Banking Stability Measures." IMF Working Paper WP/09/4, International Monetary Fund, Washington, DC.

  • Segoviano, M. A., and P. Padilla. 2006. "Portfolio Credit Risk and Macroeconomic Shocks: Applications to Stress Testing Under Data-Restricted Environments." IMF Working Paper WP/06/283, International Monetary Fund, Washington, DC.

  • Tressel, T. 2010. "Financial Contagion through Bank Deleveraging: Stylized Facts and Simulations Applied to the Financial Crisis." IMF Working Paper WP/10/236, International Monetary Fund, Washington, DC.

Appendix I. Generalization of Conditional Expected Valuation Formulas

We provide in this appendix the generalization of the value computed in Section VI.

Allowing for any conditioning event.

Let the CIMDO posterior density of the assets of banks $\{A_1,\ldots,A_N\}$ be $p(x_1,\ldots,x_N)$. This is the density of annualized equity returns (but it is not normally distributed). Given this distribution, we want to calculate the expected value of the assets of bank $A_i$ given the default of bank $A_j$ and of the banks $\{A_{k_1},\ldots,A_{k_l}\}$, with the other banks not defaulting. This is equivalent to assessing the expected value of bank $A_i$ assuming the realization of the following event:

$$D_j^i(k_1,\ldots,k_l)=A_j \cap \bigcap_{k\in\{k_1,\ldots,k_l\}} A_k \cap \bigcap_{k\in\{1,\ldots,N\}\setminus\{i,j,k_1,\ldots,k_l\}} \overline{A_k}$$

It is important to note that we do not make any assumption on the value of the bank Ai since we want to assess its value and the probability of its default.

The expected value of the bank $A_i$ assuming the realization of the event $D_j^i(k_1,\ldots,k_l)$ is then defined as follows:

$$E_0\!\left(V_{A_i,t}\mid D_j^i(k_1,\ldots,k_l)\right)=\frac{1}{P\!\left(D_j^i(k_1,\ldots,k_l)\right)}\,E\!\left(V_{A_i,t}\,\mathbf{1}_{D_j^i(k_1,\ldots,k_l)}\right)=\frac{1}{P\!\left(D_j^i(k_1,\ldots,k_l)\right)}\int V_{A_i,t}(x_1,\ldots,x_N)\,p(x_1,\ldots,x_N)\prod_{s\in\{j,k_1,\ldots,k_l\}}\mathbb{I}(x_s>X_s^d)\prod_{s\in\{1,\ldots,N\}\setminus\{i,j,k_1,\ldots,k_l\}}\mathbb{I}(x_s<X_s^d)\,dx_1\cdots dx_N$$

We do not know $V_{A_i,t}(x_1,\ldots,x_N)$ directly, but we can compute it as the sum of the value of debt $D_t$ and the value of equity, for which we know the initial value $Eq_0$ and the growth rate $-x$ (note that, using the CIMDO notation in Segoviano (2006), a high $x$ corresponds to a low return, since the default region is at high $x$):

$$V_{A_i,t}(x_1,\ldots,x_N)=Eq_t(x_1,\ldots,x_N)+D_t(x_1,\ldots,x_N)$$

Thus, today's total asset value of the bank $A_i$ conditional on the event $D_j^i(k_1,\ldots,k_l)$ is:

$$E_0\!\left(V_{A_i,t}\mid D_j^i(k_1,\ldots,k_l)\right)=\frac{1}{P\!\left(D_j^i(k_1,\ldots,k_l)\right)}\int Eq_t(x_1,\ldots,x_N)\,p(x_1,\ldots,x_N)\prod_{s\in\{j,k_1,\ldots,k_l\}}\mathbb{I}(x_s>X_s^d)\prod_{s\in\{1,\ldots,N\}\setminus\{i,j,k_1,\ldots,k_l\}}\mathbb{I}(x_s<X_s^d)\,dx_1\cdots dx_N \;+\; \frac{1}{P\!\left(D_j^i(k_1,\ldots,k_l)\right)}\int D_t(x_1,\ldots,x_N)\,p(x_1,\ldots,x_N)\prod_{s\in\{j,k_1,\ldots,k_l\}}\mathbb{I}(x_s>X_s^d)\prod_{s\in\{1,\ldots,N\}\setminus\{i,j,k_1,\ldots,k_l\}}\mathbb{I}(x_s<X_s^d)\,dx_1\cdots dx_N$$

The value of the bank $A_i$'s debt (discounted appropriately given a book value $D_T$ at time $T$) depends on whether the bank $A_i$ defaults or not:

In the zone where the bank $A_i$ defaults $(x_i > X_i^d)$: $D_t(x_1,\ldots,x_N) = (RR)\, D_T\, e^{-r(T-t)}$

In the zone where the bank $A_i$ does not default $(x_i < X_i^d)$: $D_t(x_1,\ldots,x_N) = D_T\, e^{-r(T-t)}$

where $RR$ is the recovery rate (that is, one minus the loss given default). Thus, we have:

$$E_0\!\left(V_{A_i,t}\mid D_j^i(k_1,\ldots,k_l)\right)=\frac{1}{P\!\left(D_j^i(k_1,\ldots,k_l)\right)}\int Eq_0\,e^{-x_i}\,p(x_1,\ldots,x_N)\prod_{s\in\{j,k_1,\ldots,k_l\}}\mathbb{I}(x_s>X_s^d)\prod_{s\in\{1,\ldots,N\}\setminus\{i,j,k_1,\ldots,k_l\}}\mathbb{I}(x_s<X_s^d)\,dx_1\cdots dx_N \;+\; \frac{P\!\left(A_i\cap D_j^i(k_1,\ldots,k_l)\right)}{P\!\left(D_j^i(k_1,\ldots,k_l)\right)}\,(RR)\,D_T\,e^{-r(T-t)} \;+\; \frac{P\!\left(\overline{A_i}\cap D_j^i(k_1,\ldots,k_l)\right)}{P\!\left(D_j^i(k_1,\ldots,k_l)\right)}\,D_T\,e^{-r(T-t)}$$
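To make the conditional valuation formula concrete, the sketch below approximates it by Monte Carlo, replacing the integral against $p(x_1,\ldots,x_N)$ with an average over simulated draws. This is a minimal sketch, not the paper's estimation code: the Student-t draws standing in for the CIMDO posterior, the four-bank dimension, the function name conditional_asset_value, and all parameter values are hypothetical.

```python
import numpy as np

def conditional_asset_value(x, Xd, i, defaulting, Eq0, D_T, r, T_t, RR):
    """Monte Carlo estimate of E[V_{A_i,t} | D] from draws of the posterior density.

    x          : (n_sims, n_banks) draws of the (negative) return variables x_1..x_N
    Xd         : (n_banks,) default thresholds; bank s is in default when x_s > Xd[s]
    i          : index of the bank being valued (its own default is NOT conditioned on)
    defaulting : indices of banks assumed to default (e.g., {j, k1, ..., kl})
    Eq0, D_T   : initial equity value and book value of debt of bank i
    r, T_t     : risk-free rate and time to maturity (T - t)
    RR         : recovery rate (1 - loss given default)
    """
    n_banks = x.shape[1]
    surviving = [s for s in range(n_banks) if s != i and s not in defaulting]

    # Indicator of the conditioning event D: listed banks default, the rest survive
    in_event = np.all(x[:, defaulting] > Xd[defaulting], axis=1) & \
               np.all(x[:, surviving] < Xd[surviving], axis=1)
    if not in_event.any():
        raise ValueError("No simulated draws fall in the conditioning event.")

    xi = x[in_event, i]
    equity = Eq0 * np.exp(-xi)                       # equity value: Eq0 * e^{-x_i}
    debt = np.where(xi > Xd[i],                      # debt value depends on i's own default
                    RR * D_T * np.exp(-r * T_t),     # default zone: recovery on discounted debt
                    D_T * np.exp(-r * T_t))          # no-default zone: discounted debt
    return (equity + debt).mean()

# Illustrative use with a hypothetical 4-bank system; a Student-t distribution is used
# as a stand-in for the CIMDO posterior, which would come from the actual estimation.
rng = np.random.default_rng(0)
draws = rng.standard_t(df=5, size=(200_000, 4))
thresholds = np.array([2.0, 2.0, 2.0, 2.0])
value = conditional_asset_value(draws, thresholds, i=0, defaulting=[1],
                                Eq0=10.0, D_T=90.0, r=0.02, T_t=1.0, RR=0.4)
```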

Appendix II. Generalization of the SE

Definition of the vulnerability index assuming any conditioning event.

Assuming the realization of a given event S, we define the vulnerability index that represents the impact of the assumed default on the bank $A_i$ as follows:

$$V(A_i\mid S)=\frac{Loss^{TS}(A_i\mid S)}{TA(A_i)}$$

The vulnerability index of bank $A_i$ assuming the default of bank $A_j$ is then defined as the total loss under a systemic event, $Loss^{TS}(A_i\mid S)$, of the bank $A_i$ divided by its total assets $TA(A_i)$. The higher this index, the more affected the bank $A_i$ is when the bank $A_j$ defaults. By computing this index for any financial institution, we can provide a vulnerability ranking of all the FIs that form our network, assuming the realization of a given default.

Decomposition of the SE loss.

Let us formalize the decomposition of the SE loss of bank $A_i$ assuming the realization of the event $S$. We consider a financial network made of a set of $N$ banks denoted $\{A_1,\ldots,A_N\}$ and partition the event $S$ into subsets in which each bank (except bank $A_i$) is either defaulting or surviving:

$$D_S^i(k_1,\ldots,k_l)=S\cap\bigcap_{k\in\{k_1,\ldots,k_l\}}A_k\cap\bigcap_{k\in\{1,\ldots,N\}\setminus\{i,S,k_1,\ldots,k_l\}}\overline{A_k}$$

If one of these subsets is empty, we exclude it from the partition. All these events are disjoint, and their union over all possible $\{k_1,\ldots,k_l\}\in\mathcal{P}(\{1,\ldots,N\}\setminus\{i,S\})$ is equal to $S$.

Using the law of total expectation, we have:

$$E(V_{A_i}\mid S)=\sum_{\{k_1,\ldots,k_l\}\in\mathcal{P}(\{1,\ldots,N\}\setminus\{i,S\})}P\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)E\!\left(V_{A_i}\mid D_S^i(k_1,\ldots,k_l)\right)$$

where $P(D_S^i(k_1,\ldots,k_l)\mid S)$ is the probability of the event $D_S^i(k_1,\ldots,k_l)$ given that the event $S$ occurs. The value of bank $A_i$, assuming the realization of the event $S$, is then the weighted average of the value of bank $A_i$ over the subsets that form the partition of $S$, each weighted by its probability conditional on $S$. Let us now show that this decomposition also holds for SE losses. Using the fact that $\{D_S^i(k_1,\ldots,k_l)\mid \{k_1,\ldots,k_l\}\in\mathcal{P}(\{1,\ldots,N\}\setminus\{i,S\})\}$ is a partition of $S$, the law of total probability gives:

$$\sum_{\{k_1,\ldots,k_l\}\in\mathcal{P}(\{1,\ldots,N\}\setminus\{i,S\})}P\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)=1$$

Thus, we have:

$$\begin{aligned}Loss^{SE}(A_i\mid S)&=E(V_{A_i})-E(V_{A_i}\mid S)\\&=E(V_{A_i})-\sum_{\{k_1,\ldots,k_l\}\in\mathcal{P}(\{1,\ldots,N\}\setminus\{i,S\})}P\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)E\!\left(V_{A_i}\mid D_S^i(k_1,\ldots,k_l)\right)\\&=\sum_{\{k_1,\ldots,k_l\}\in\mathcal{P}(\{1,\ldots,N\}\setminus\{i,S\})}P\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)\left[E(V_{A_i})-E\!\left(V_{A_i}\mid D_S^i(k_1,\ldots,k_l)\right)\right]\\&=\sum_{\{k_1,\ldots,k_l\}\in\mathcal{P}(\{1,\ldots,N\}\setminus\{i,S\})}P\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)Loss^{SE}\!\left(A_i\mid D_S^i(k_1,\ldots,k_l)\right)\end{aligned}$$

Indexes of the decomposition of the SE loss

An aggregate measure of the contribution of a defaulting set (that includes the bank that initially defaults) to the SE loss of the considered bank, assuming the realization of the default event S, is:

$$Co\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)=\frac{P\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)Loss^{SE}\!\left(A_i\mid D_S^i(k_1,\ldots,k_l)\right)}{Loss^{SE}(A_i\mid S)}$$

The index $Co\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)$ represents the relative contribution of the defaulting set $D_S^i(k_1,\ldots,k_l)$ to the conditional loss of bank $A_i$ assuming the realization of the event $S$. For a policymaker, this measure is particularly informative since it makes it possible to identify the connected entities that induce a large share of the SE losses of the tested bank, assuming the realization of a given event. As highlighted in our four-bank example, this contribution can be large either because:

  • the considered defaulting set is likely to materialize assuming the realization of a given event. This property can be assessed using the probability of occurrence of this set assuming the realization of a given event:
    $$P\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)$$
  • the considered defaulting set inflicts large losses on the tested bank assuming the realization of a given event. This property can be assessed using the ratio of the SE loss of the tested bank assuming the realization of this defaulting set to the SE loss of the tested bank assuming the realization of the considered event:
    $$In\!\left(D_S^i(k_1,\ldots,k_l)\mid S\right)=\frac{Loss^{SE}\!\left(A_i\mid D_S^i(k_1,\ldots,k_l)\right)}{Loss^{SE}(A_i\mid S)}$$

    This ratio lies between zero and infinity. The larger this ratio, the more severe the losses inflicted on the tested bank by the realization of this defaulting set (assuming the realization of the event S). A numerical sketch of how these quantities can be computed follows.
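The sketch below enumerates the defaulting sets that partition $S$ and computes $P(D\mid S)$, $Loss^{SE}(A_i\mid D)$, the contribution $Co$, and the intensity $In$ from simulated draws. As before, this is only an illustrative sketch: the Student-t draws stand in for the CIMDO posterior, the asset-value proxy (equity $Eq_0 e^{-x_i}$ plus debt with a 0.4 recovery rate in bank i's own default zone), and all parameter values are hypothetical choices.

```python
from itertools import combinations
import numpy as np

def se_loss_decomposition(values, x, Xd, i, S_banks):
    """Decompose Loss^SE(A_i | S) across the defaulting sets D_S^i(k_1,...,k_l).

    values  : (n_sims,) simulated asset values of bank i (one per draw)
    x       : (n_sims, n_banks) draws; bank s is in default when x[:, s] > Xd[s]
    i       : index of the bank being assessed
    S_banks : banks whose joint default defines the conditioning event S
    """
    n_banks = x.shape[1]
    others = [s for s in range(n_banks) if s != i and s not in S_banks]
    unconditional = values.mean()                          # E[V_{A_i}]
    in_S = np.all(x[:, S_banks] > Xd[S_banks], axis=1)

    rows = []
    for l in range(len(others) + 1):
        for extra in combinations(others, l):              # enumerate the partition of S
            defaulting = list(S_banks) + list(extra)
            surviving = [s for s in others if s not in extra]
            in_D = np.all(x[:, defaulting] > Xd[defaulting], axis=1) & \
                   np.all(x[:, surviving] < Xd[surviving], axis=1)
            if not in_D.any():
                continue                                    # empty subsets are excluded
            p_D_given_S = in_D.sum() / in_S.sum()           # P(D | S), since D is a subset of S
            loss_D = unconditional - values[in_D].mean()    # Loss^SE(A_i | D)
            rows.append((tuple(extra), p_D_given_S, loss_D))

    loss_S = sum(p * loss for _, p, loss in rows)           # Loss^SE(A_i | S) via the decomposition
    report = [{"extra_defaults": d, "P(D|S)": p,
               "Co": p * loss / loss_S, "In": loss / loss_S} for d, p, loss in rows]
    return report, loss_S

# Illustrative use with hypothetical draws and a simple asset-value proxy for bank 0
rng = np.random.default_rng(0)
draws = rng.standard_t(df=5, size=(200_000, 4))
thresholds = np.array([2.0, 2.0, 2.0, 2.0])
vals = 10.0 * np.exp(-draws[:, 0]) + \
       np.where(draws[:, 0] > thresholds[0], 0.4, 1.0) * 90.0 * np.exp(-0.02)
decomposition, total_loss = se_loss_decomposition(vals, draws, thresholds, i=0, S_banks=[1])
```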

Appendix III. Consistent Information Multivariate Density Optimization

The detailed formulation of CIMDO was first presented in Segoviano (2006). The objective of CIMDO, which is based on the Kullback (1959) minimum cross-entropy approach, is to construct a multivariate density p that characterizes the asset values for a system of firms, taking into account a prior distribution q and information on PoDs, thereby inferring a dependence structure in the tail that is consistent with the marginal information in the tail.

For illustration purposes, we focus on a portfolio containing two firms, whose asset returns are represented by the random variables x and y. The objective of CIMDO is to search for a posterior distribution p closest to the prior q, according to the criterion

$$C[p,q]=\iint p(x,y)\ln\!\left[\frac{p(x,y)}{q(x,y)}\right]dx\,dy$$

and consistent with the observed information on marginal PoDs, which are represented by the moment-consistency constraints

$$\iint p(x,y)\,\chi_{[x_d^x,\infty)}\,dx\,dy=PoD_t^x,\qquad \iint p(x,y)\,\chi_{[x_d^y,\infty)}\,dy\,dx=PoD_t^y$$

$PoD_t^x$ and $PoD_t^y$ are the empirically observed probabilities of default (PoDs) for each firm in the portfolio, and $\chi_{[x_d^x,\infty)}$, $\chi_{[x_d^y,\infty)}$ are the indicator functions defined by the default threshold of each borrower in the portfolio. In order for p(x, y) to represent a valid density, the condition $p(x,y)\geq 0$ and the probability additivity constraint, $\iint p(x,y)\,dx\,dy=1$, also need to be satisfied. The CIMDO density is recovered by minimizing the functional

$$L[p,q]=\iint p(x,y)\ln p(x,y)\,dx\,dy-\iint p(x,y)\ln q(x,y)\,dx\,dy+\lambda_1\!\left[\iint p(x,y)\,\chi_{[x_d^x,\infty)}\,dx\,dy-PoD_t^x\right]+\lambda_2\!\left[\iint p(x,y)\,\chi_{[x_d^y,\infty)}\,dy\,dx-PoD_t^y\right]+\mu\!\left[\iint p(x,y)\,dx\,dy-1\right]$$

where $\lambda_1$ and $\lambda_2$ represent the Lagrange multipliers of the moment-consistency constraints and $\mu$ represents the Lagrange multiplier of the probability additivity constraint. The optimization is carried out using the calculus of variations, and the optimal solution is the following posterior multivariate density:

$$\hat{p}(x,y)=q(x,y)\exp\!\left(-\left[1+\mu+\lambda_1\,\chi_{[X_d^x,\infty)}+\lambda_2\,\chi_{[X_d^y,\infty)}\right]\right)$$
Appendix Figure 1. CIMDO Distribution, by Default Zone

Appendix Figure 1 shows how the posterior distribution differs from the prior distribution in the four zones defined by whether x and y are in the zone of default. For instance, in the zone where x defaults but y does not (bottom right quadrant), the posterior density is the prior density adjusted by a coefficient that is a function of $\lambda_x$ (the multiplier $\lambda_1$ above), the Lagrange multiplier for the restriction that the posterior is consistent with the observed $PoD^x$, and of $\mu$, the Lagrange multiplier for the restriction that the posterior distribution integrates to 1.

A Taylor approximation of the Lagrange multipliers gives some intuition about how the observed data on PoDs and the calibration of the prior affect the Lagrange multipliers and thus the posterior (see the proof in Segoviano and Espinoza 2017). For a prior calibrated as a centered t-distribution with ν degrees of freedom and a correlation coefficient σ, the Taylor approximation yields:

$$\begin{aligned}\lambda_x&=-\ln(PoD^x)-1-\mu+\ln\!\left(\tilde{Q}_{xy}e^{-\lambda_y}+\tilde{Q}_{x\bar{y}}+\left(e^{-\lambda_y}-1\right)J\sigma+\vartheta(\sigma^2)\right)\\ \lambda_y&=-\ln(PoD^y)-1-\mu+\ln\!\left(\tilde{Q}_{xy}e^{-\lambda_x}+\tilde{Q}_{\bar{x}y}+\left(e^{-\lambda_x}-1\right)J\sigma+\vartheta(\sigma^2)\right)\\ \mu&=-1+\ln\!\left(\tilde{Q}_{xy}e^{-\lambda_x-\lambda_y}+\tilde{Q}_{\bar{x}y}e^{-\lambda_y}+\tilde{Q}_{x\bar{y}}e^{-\lambda_x}+\tilde{Q}_{\bar{x}\bar{y}}+\left(e^{-\lambda_x-\lambda_y}-e^{-\lambda_y}-e^{-\lambda_x}+1\right)J\sigma+\vartheta(\sigma^2)\right)\end{aligned}$$

where $J=\dfrac{\nu^{\nu/2}}{2\pi}\left(\nu+{X_d^x}^2+{X_d^y}^2\right)^{-\nu/2}$

The approximation shows that the Lagrange multipliers depend on the prior correlation coefficient, but when $\lambda_x$ and $\lambda_y$ are close to zero, or when the default zones are small (i.e., $X_d^x, X_d^y \to \infty$ and thus $J \to 0$), the adjustment between prior and posterior is insensitive to $\sigma$.
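Because the posterior rescales the prior by a constant within each default/no-default zone, the two-firm CIMDO problem reduces to three nonlinear equations in $(\mu, \lambda_1, \lambda_2)$. The sketch below solves them numerically under an assumed bivariate Student-t prior; the thresholds, PoDs, correlation, degrees of freedom, and grid resolution are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy import optimize, stats

def cimdo_two_firms(pod_x, pod_y, x_d, y_d, rho=0.3, nu=5):
    """Solve the two-firm CIMDO problem: adjust a bivariate Student-t prior so that the
    posterior tail masses beyond the default thresholds match the observed PoDs.

    Returns the multipliers (mu, lam1, lam2) and the posterior mass of each default zone.
    """
    # Prior masses of the four default/no-default zones, computed on a grid
    t2 = stats.multivariate_t(loc=[0.0, 0.0], shape=[[1.0, rho], [rho, 1.0]], df=nu)
    grid = np.linspace(-8, 8, 400)
    xx, yy = np.meshgrid(grid, grid, indexing="ij")
    q = t2.pdf(np.dstack([xx, yy]))
    dxdy = (grid[1] - grid[0]) ** 2
    zone_dd = (xx > x_d) & (yy > y_d)            # both firms default
    zone_dn = (xx > x_d) & ~(yy > y_d)           # x defaults only
    zone_nd = ~(xx > x_d) & (yy > y_d)           # y defaults only
    Q_dd, Q_dn, Q_nd = (q[m].sum() * dxdy for m in (zone_dd, zone_dn, zone_nd))
    Q_nn = 1.0 - Q_dd - Q_dn - Q_nd

    def equations(params):
        mu, lam1, lam2 = params
        a, b, c = np.exp(-(1 + mu)), np.exp(-lam1), np.exp(-lam2)
        return [a * (Q_dd * b * c + Q_dn * b + Q_nd * c + Q_nn) - 1.0,   # additivity
                a * b * (Q_dd * c + Q_dn) - pod_x,                       # PoD of firm x
                a * c * (Q_dd * b + Q_nd) - pod_y]                       # PoD of firm y

    mu, lam1, lam2 = optimize.fsolve(equations, x0=[-1.0, 0.0, 0.0])
    a, b, c = np.exp(-(1 + mu)), np.exp(-lam1), np.exp(-lam2)
    posterior_zones = {"both default": a * b * c * Q_dd,
                       "x only": a * b * Q_dn,
                       "y only": a * c * Q_nd}
    return (mu, lam1, lam2), posterior_zones

# Example with illustrative thresholds and observed PoDs of 2 and 3 percent
multipliers, zones = cimdo_two_firms(pod_x=0.02, pod_y=0.03, x_d=2.0, y_d=2.0)
```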

Appendix IV. Comparison of Our Proposed SE-Loss Approach with Network Models

We compare here the macroprudential stress-testing framework we proposed with the network model of Eisenberg and Noe (2001) and Cifuentes, Ferrucci, and Shin (2005), as these have been at the core of many macroprudential stress test models used in central banks. The algorithm developed by Eisenberg and Noe (2001) considers default by banks that are part of a single clearing mechanism. The liabilities of all banks are determined simultaneously under some accounting and behavioral rules defined ex ante. Using a fixed-point argument, Eisenberg and Noe (2001) show that, applying these payment rules, there always exists at least one payment vector that clears the obligations of all the firms. The algorithm requires information on bilateral interbank liabilities, and the contagion channel results from interbank repayments that are lower than expected (that is, a direct linkage).

There are several (nontrivial) assumptions about the repayment rules that are needed to ensure the existence of an equilibrium: (i) limited liability: the total repayment of a bank to the other banks must be smaller than the cash flow owned by that bank; (ii) absolute priority: if a bank cannot repay its liabilities in full, its total repayment must equal its total cash flow; and (iii) proportionality: if a bank defaults, all claimant banks are repaid proportionally to the size of their nominal claims on the defaulted bank. Under these conditions, existence and uniqueness of a fixed point are guaranteed, and Eisenberg and Noe (2001) propose an algorithm to find the fixed point, which, importantly, is the only state in which the banks' payments are mutually consistent. A minimal sketch of this clearing algorithm is given below.
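For reference, the following is a minimal sketch of the Eisenberg-Noe fixed-point iteration described above (not the authors' original code). The three-bank liability matrix and outside cash flows are hypothetical and chosen so that one bank's shortfall propagates to another.

```python
import numpy as np

def eisenberg_noe_clearing(L, e, tol=1e-10, max_iter=1000):
    """Compute the Eisenberg-Noe clearing payment vector.

    L : (n, n) matrix of nominal interbank liabilities; L[i, j] is what bank i owes bank j
    e : (n,) outside (non-interbank) cash flows of each bank
    Returns the clearing payments p and the indices of defaulting banks.
    """
    p_bar = L.sum(axis=1)                                   # total nominal obligations
    with np.errstate(invalid="ignore", divide="ignore"):
        Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)  # proportionality rule

    p = p_bar.copy()                                        # start from full repayment
    for _ in range(max_iter):
        inflows = Pi.T @ p                                  # interbank receipts given payments p
        # limited liability and absolute priority: pay min(obligations, available cash flow)
        p_new = np.minimum(p_bar, np.maximum(e + inflows, 0.0))
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    defaulting = np.flatnonzero(p < p_bar - 1e-12)
    return p, defaulting

# Hypothetical 3-bank example: bank 0's low cash flow triggers a default that propagates to bank 1
liabilities = np.array([[0.0, 10.0, 0.0],
                        [0.0,  0.0, 10.0],
                        [0.0,  0.0,  0.0]])
cash = np.array([2.0, 2.0, 5.0])
payments, defaults = eisenberg_noe_clearing(liabilities, cash)
```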

The algorithm was extended by Cifuentes, Ferrucci, and Shin (2005) to introduce fire sales from distressed banks as an additional contagion channel. Additional structural assumptions are made, in particular on the shape of the relationship between the price and the supply of illiquid assets.

Three essential differences can be highlighted between the contribution of these network models and our proposal (see also the summary in Appendix Table 1):

  • Our proposal makes no structural assumptions about the amplification mechanisms in the financial network. On the other hand, it cannot, by itself, identify the channels of contagion.

  • Our algorithm is stochastic and thus allows us to consider a large set of events following a given event. Eisenberg and Noe’s (2001) algorithm is deterministic; that is, the structural assumptions ensure the existence of a unique fixed point for any value of the inputs. It solves for a fixed point that is the only state for which the banks’ behaviors are mutually consistent. Cifuentes, Ferrucci, and Shin’s (2005) algorithm, adding a fire-sale channel, also formulates a fixed-point problem without any stochastic dimension. In our macroprudential framework, the stochastic nature of the model enables us to describe, in terms of likelihood and intensity, any contagion event following the realization of a given default.

  • Finally, our proposed method embeds the possible direct and indirect amplification mechanisms captured by market data, at least to the extent market participants price these risks appropriately. The network models have mostly considered direct contagion channels that have included interbank exposures, fire sales, and in some other models—for example, Tressel 2010—liquidity contagion.

Appendix Table 1. A Comparison: SE-Loss and Network Contagion Models

1

The authors are grateful for the useful comments and suggestions from T. Adrian, A. Elizondo, D. Demekas, C. Goodhart, P. Hartmann, H. Huang, N. Liang, C. Raddatz, D. Schoemaker, D. Tarullo and Y. Zhao. Special thanks to Felipe Nierhoff for his invaluable research assistance.

2

A reduced form model in this paper refers to a model that allows inferring from market data the interconnectedness structures across entities (that define systemic risk losses) without explicitly modeling the mechanisms that generate such structures.

3

The CIMDO methodology is based on the minimum cross-entropy approach, where a posterior multivariate distribution—the CIMDO density—is recovered using an optimization procedure by which a prior density function is updated with empirical information via a set of constraints. In this implementation, the empirical estimates of the probability of distress of individual banks act as the constraints, and the derived CIMDO density is the posterior density that is the closest to the prior distribution and consistent with these constraints. This methodology and its advantages relative to other parametric multivariate densities are presented in detail in Segoviano (2006) and Segoviano and Espinoza (2017).

4

Elsinger and others (2013) also noted that the losses predicted by network models of interbank exposures were too small and thus had not been useful for policymaking during the 2008–09 financial crisis.

5

Other empirical models focus on pairwise metrics of dependence, and in most cases, linear dependence—that is, on correlations. However, CIMDO infers the entire multivariate density that characterizes the complete system’s asset value interconnectedness structure (copula function), which incorporates linear and nonlinear dependence across the asset values of the FIs making up the system.

6

Although these conditional losses may appear to be akin to contagion losses in network models, our approach is very different. Our method does not make structural assumptions about the amplification mechanisms in the system and does not assume causality of contagion. Instead our method infers the implied direct and indirect channels of contagion from publicly available data and is stochastic (see Appendix IV for a more detailed description of the differences between contagion in network models and our approach).

7

Cortes and others (2018) present an extension of the model that incorporates insurance companies, pension funds, investment funds, and hedge funds into the analysis of systemic risk.

8

These models usually assume deterministic (mechanistic) reaction functions; e.g., banks’ deleveraging behavior. Such assumptions may lead to biased estimates of contagion.

9

Three differences can be highlighted between network models and the proposed reduced-form approach. First, network models are structural; that is, they make assumptions about the structure of the financial network and the contagion mechanism, whereas the proposed framework infers the implied connections from market-based data and thus makes no structural assumptions about the amplification mechanisms in the system. Second, the reduced-form approach is stochastic and hence allows us to consider a large set of events conditional on a given event, whereas network models are deterministic. Finally, when estimated with market-based data, the proposed reduced-form approach embeds all possible direct and indirect amplification mechanisms captured by market data, whereas network models only consider the (usually direct) amplification mechanisms that are explicitly modelled.

10

Krishnamurthy (2010) designs a model to analyze how the uncertainty of investors in certain types of assets, especially assets coming from recent financial innovations, can lead to a run to safety after a shock occurs and a sudden escape from these innovative products. Similarly, Kaszowska and Santos (2014) show that some methods from the sociological and behavioral sciences can be applied to more effectively model how market participants’ risk perceptions about the state of the market, and their expectations about other participants’ reactions to a shock, may cause a vicious feedback loop, and therefore accentuate the consequences of the initial shock.

11

It should be noted that, while sharing some common elements, these models are conceptually different. Since it is not the objective here to survey this extensive literature, we refer to the detailed survey of such methods that is available in Bisias and others (2012).

12

See “The IMF Framework for Banks’ Balance Sheet Stress Test.” IMF (2016).

13
One may also think that microprudential stress tests exclude de facto some events when assessing the value of a financial institution under an adverse macroeconomic scenario (for instance, they may exclude the defaults of other banks). In this case, the micro stress test valuation should be defined as $V_A^{MiST}=E(V_A\mid adv\cap\bar{D})$, where the events excluded in a micro stress test are denoted by the set $D$. The SE loss, introduced below, would then be consistently defined as follows:
$$Loss^{SE}(A\mid S)=E(V_A\mid adv\cap\bar{D})-E(V_A\mid adv\cap S).$$

With this formula, it is still possible to decompose the SE loss using the approach proposed in this paper.

14

There is a suite of models that can be used to estimate these parameters, including Merton model-type approaches, Value-at-Risk approaches, and no-arbitrage models that rely on credit default swap spreads and bond spreads.

15

To simplify the conditioning notations, we do not explicitly condition on the occurrence of the distressed macroeconomic scenario throughout the rest of the paper. However, all the values and losses considered in this paper are computed assuming the occurrence of a distressed macroeconomic scenario.

16

This extension requires the inclusion of additional marginal densities (describing the level of financial imbalances) in the multivariate distribution that characterizes the financial system. This makes it possible to quantify SE losses conditional on realizations of financial imbalances. The extension thus brings together in a consistent manner the “time dimension” of systemic risk (usually characterized by the level of financial imbalances) and the “structural” dimension of systemic risk (characterized by the interconnectedness structures that define SE losses), which are usually addressed separately in the systemic risk literature.

17

Since the multivariate distribution framework cannot identify causality, the word “due” should not be interpreted as suggesting a causal link.

18

Moody’s-KMV methodology (Crosbie and Bohn 2003) is a widely-known implementation of this approach.

19

Koyluoglu and others (2003) present an interesting analysis of the consequences of the improper calibration of risk models.

20

Using an extension of the Probability Integral Transformation (PIT) criterion advocated by Diebold, Gunther, and Tay (1998), the paper shows that CIMDO-inferred density forecasts perform better than parametric distributions forecasts, even when they are calibrated with the same information set.

21

CIMDO-inferred dependence structures embody both linear and nonlinear distress dependence among FIs and are time-varying. They are thus superior to dependence structures from Gaussian models, which capture only linear dependence (correlations), and from other parametric models that, while allowing for nonlinear dependence, assume structures remain constant through time.

22

It should be emphasized that the individual PoDs used as input to the calculation of the multivariate density function are exogenous to the model. Any methodology could in principle be used to estimate them, including balance sheet-based methodologies, which can usually be implemented in countries where no market data are available or reliable. Using market data is thus not a necessary feature of this model. However, as the time-varying nature of the CIMDO interconnectedness structure is a key strength of this approach, high-frequency market data, such as CDS spreads or stock returns, are typically used to calculate the probabilities of distress or default for individual FIs.

23

Risk parameters of banks’ loan portfolios (loans’ probabilities of default, exposures, and loss given default) are used to estimate banks’ portfolio loss distributions (PLDs). Supervisory information is used to define capital buffer thresholds that, if violated, would indicate a distress event, e.g., supervisory intervention. PLDs and thresholds are then used to estimate each bank’s probability of distress, e.g., the probability of violating the supervisory threshold; a stylized sketch follows.
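A stylized sketch of this balance sheet-based estimation, with hypothetical portfolio parameters, capital, and supervisory threshold, is:

```python
import numpy as np

def probability_of_distress(loss_draws, capital, supervisory_threshold):
    """Estimate a bank's probability of distress from simulated portfolio losses:
    distress occurs when losses push capital below the supervisory threshold."""
    return float(np.mean(capital - loss_draws < supervisory_threshold))

# Hypothetical loan book: 100 loans with equal exposures, PDs, and LGDs
rng = np.random.default_rng(1)
exposures, pds, lgds = np.full(100, 1.0), np.full(100, 0.02), np.full(100, 0.45)
defaults = rng.random((50_000, 100)) < pds          # draw default indicators per loan and scenario
losses = (defaults * exposures * lgds).sum(axis=1)  # portfolio loss per scenario (the PLD)
pod = probability_of_distress(losses, capital=5.0, supervisory_threshold=2.0)
```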

24

SE losses were estimated using equation 5.

25

Basel Committee on Banking Supervision 2011.

26

The CCoB comprises Common Equity Tier 1 (CET1) capital and imposes distribution constraints on banks as their capital ratio deteriorates. Specifically, banks that draw on this buffer but are not yet in violation of minimum capital requirements can continue their operations but must retain a significant portion of their earnings to rebuild the capital stock.

27

This guide was based on an analysis showing that a credit-to-GDP ratio of 10 percentage points or more above trend issues the strongest signal of an impending crisis (in terms of noise-to-signal ratio). Per the BCBS buffer guide formula, when the credit gap breaches a “lower threshold” of 2 percent, a decision to start increasing the buffer could be merited if surveillance supports the judgment that systemic risk may be building up; when it reaches the “upper threshold” of 10 percent, the CCyB should be set at 2.5 percent of RWA. It can also be set higher, based on broader macroprudential considerations (International Monetary Fund, 2014). A stylized sketch of this mapping follows.
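A stylized sketch of the buffer guide described above, assuming (as is our reading of the rule rather than a formula quoted from the BCBS text) a linear interpolation of the CCyB rate between the lower and upper thresholds:

```python
def ccyb_buffer_guide(credit_gap, lower=2.0, upper=10.0, max_buffer=2.5):
    """Map the credit-to-GDP gap (percentage points above trend) to a CCyB rate
    (percent of RWA), interpolating linearly between the lower and upper thresholds."""
    if credit_gap <= lower:
        return 0.0
    if credit_gap >= upper:
        return max_buffer
    return max_buffer * (credit_gap - lower) / (upper - lower)

# e.g., a credit gap of 6 percentage points maps to a 1.25 percent CCyB under this reading
example = ccyb_buffer_guide(6.0)
```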
