The rise in the complexity and globalization of financial services has contributed to stronger interconnections or linkages. While more extensive linkages contribute to economic growth by smoothing credit allocation and allowing greater risk diversification, they also increase the potential for disruptions to spread swiftly across markets and borders. In addition, financial complexity has enabled risk transfers that were not fully recognized by financial regulators or by institutions themselves, complicating the assessment of counterparty risk, risk management, and policy responses. Thus the importance of assessing the systemic implications of financial linkages.



The current crisis has highlighted how systemic linkages can arise not just from financial institutions’ solvency concerns but also from liquidity squeezes and other stress events. This chapter illustrates the type of methodologies that can provide some prospective metrics to facilitate discussions on systemic linkages and, specifically, the “too–connected–to–fail” problem, thereby contributing to enhanced systemically focused surveillance and regulation. By contrast, Chapter 3 presents other methodologies that examine systemic risk by looking at the conditions under which financial institutions experience simultaneous stressful events.

This chapter presents four complementary approaches to assess direct and indirect financial sector systemic linkages:

  • The network approach, which tracks the reverberation of a credit event or liquidity squeeze throughout the banking system via direct linkages in the interbank market;

  • The co–risk model, which exploits market data to assess systemic linkages among financial institutions under extreme events;

  • The distress dependence matrix, which examines pairs of institutions’ probabilities of distress, taking into account a set of other institutions; and

  • The default intensity model, which measures the probability of failures of a large fraction of financial institutions due to both direct and indirect systemic linkages.

The chapter argues that, although each approach by itself has its limitations, together they represent a set of valuable surveillance tools and can form the basis for policies to address the too-connected-to-fail problem. More specifically, this chapter assists policymakers in two areas under current discussion:

  • Perimeter of regulation. To maintain an effective perimeter of prudential regulation without stifling innovation, the tools provided in the chapter could help address questions such as whether to limit an institution’s exposures, the desirability of capital surcharges based on systemic linkages, and the merits of additional liquidity regulations.

  • Information gaps. The chapter also discusses the importance of filling existing information gaps on cross–market, cross–currency, and cross–country linkages to refine analyses of systemic linkages. Closing information gaps would require improved data collection procedures and impose additional demands on financial institutions, but would be a far better alternative to waiting until a crisis ensues to obtain information as events unfold.

The expansion of large complex financial institutions that transcend national boundaries and engage in such activities as extensive interbank contracts, over–the–counter derivatives contracts, equity, bond, and syndicated loan issuance, and trading activities globally has led to stronger interconnections, innovation, and growth. While tighter interdependencies can increase the efficiency of the global financial system by smoothing credit allocation and risk diversification, they have also increased the potential for cross–market and cross-border disruptions to spread swiftly. In addition, financial innovations have enabled risk transfers that were not fully recognized by financial regulators and institutions themselves, and have complicated the assessment of counterparty risk, risk management, and policy responses.

Although analyses of linkages across institutions have traditionally focused on solvency concerns, the current crisis reminds us of the relevance of liquidity spillovers, specifically that (1) interconnectedness means that difficulties in rolling over liabilities may spill over to the financial system as a whole; and that (2) rollover risk associated with short-term liabilities is present not only in the banking sector but, equally importantly, in the nonbank financial sector.

Thus, it is essential to improve our understanding and monitoring of direct and indirect financial systemic linkages, including by strengthening techniques to assess systemic linkages, and thereby contribute to making systemic-focused supervision feasible. The goal is clear: we must lessen the risk that institutions become too connected to fail.1

This chapter presents four complementary approaches to assess financial sector systemic linkages and focuses on this definition of systemic risk:2

  • The network approach. This approach relies primarily on institutional data to assess network externalities.3 Network analysis, which can track the reverberation of a credit event or liquidity squeeze throughout the system, can provide important measures of financial institutions’ resilience to the domino effects triggered by financial distress.

  • The co-risk model. This methodology draws from market data, but focuses on assessing systemic linkages at an institutional level. Such linkages may arise from common risk factors such as similar business models or common accounting/valuation practices across institutions.

  • The distress dependence matrix. This matrix is based on market data, but instead of looking at bilateral relationships as above, the pairwise conditional probabilities of distress presented are estimated using a composite time–varying multivariate distribution that captures linear (correlation) and nonlinear interdependence among a set of financial institutions.

  • The default intensity model. Based on historical default data, this methodology focuses on the time-series properties of banking default data to assess systemic linkages. It measures the probability of failures of a large fraction of financial institutions (default clustering) due to both direct and indirect systemic linkages.

Each approach by itself has considerable limitations, but together the approaches provide an important set of surveillance tools and the basis for policies to address the too-connected-to-fail problem, one of the most pervasive ways in which systemic risk manifests itself.4 More specifically, this chapter helps to inform policymakers in three areas: assessing direct and indirect spillovers under extreme (tail) events; identifying information gaps to improve the precision of this analysis; and providing concrete metrics to assist in the reexamination of the perimeter of regulation.

The chapter also discusses the importance of filling existing information gaps on cross-market, cross-currency, and cross-country linkages. Closing information gaps would require, among other things, additional disclosures; access to micro-prudential data from supervisors (where these are institutionally separated from the authorities responsible for financial stability); more intensive contacts with private market participants; improved comparability of cross-country data; more frequent updates of monitored financial variables; and improved information-sharing on a regular and ad hoc basis. Although these measures could impose additional demands on financial institutions, they are a far better alternative to waiting until a crisis ensues and having to scramble to obtain information as events unfold. It has become clear during the current crisis that much greater transparency on cross-institution and cross-market exposures was needed ex ante. Furthermore, globalization means that it is almost impossible for a country, by itself, to undertake effective surveillance of potentially systemic linkages. Therefore, enhancing our understanding and monitoring of global systemic linkages requires strong information-sharing agreements.

Because of difficulties in obtaining more disaggregated information at this stage, the chapter cannot make predictions about specific institutions or countries with important systemic linkages. The goal is not to provide benchmark figures of systemic linkages or to make forecasts about future developments. Rather, its key goal is to present methodologies that will enable inferences to be drawn about extreme tail events, such as the current crisis, and that can also provide a set of concrete metrics the authorities could use to start meaningful discussions, both domestically and globally, on the too-connected-to-fail problem.

The chapter also presents a brief overview of how some central banks assess systemic linkages, including by exploiting methodologies similar to those illustrated in this chapter. These methodologies are gaining traction in financial stability discussions, despite handicaps central banks have faced due to some important data limitations.

Four Methods of Assessing Systemic Linkages

This section presents four complementary approaches to assess financial sector systemic linkages: the network approach, which tracks the reverberation of a credit event or liquidity squeeze throughout the financial system; the co-risk model, which exploits market data to assess systemic linkages at an institution-by-institution level, conditioning on other economic information; the distress dependence matrix, which provides conditional probabilities of distress between two institutions taking account of their relation with other institutions; and the default intensity model, which measures the probability of failure of a large fraction of financial institutions (default clustering) due to both direct and indirect systemic linkages (Table 2.1).

Table 2.1.

Taxonomy of Financial Linkages Models

Source: IMF staff. Note: BSI = bank stability index; CDS = credit default swap; JPoD = joint probability of distress; PoD = probability of default.

Giesecke and Kim (2009).

Chan–Lau, Espinosa, and Solé (2009b); and Segoviano and Goodhart (2009). See also the section in Chapter 3 entitled “Market Perceptions of Risks of Financial Institutions.”

Model can use PoDs estimated from alternative methods, not only CDS spreads.

The Network Approach

The recent financial crisis has underscored the notion that to ensure the stability of a financial system, it is not enough to focus on the safety and soundness of each particular institution. It is also necessary to account for the effect of the institution’s linkages to other institutions, as actions geared to enhancing the soundness of a particular institution may undermine the stability of the system as a whole. This is the case, for instance, when a fire sale of assets during a liquidity squeeze triggers spillovers across the whole financial system. The case of Northern Rock illustrates how a medium–sized institution faced with a liquidity squeeze can trigger negative network externalities.

Policymakers and regulators worldwide have become aware of the importance of proactively tracking potential systemic linkages. As pointed out in Allen and Babus (2008), for instance, network analysis is a natural candidate to aid with this challenge, as it allows the regulator to see beyond the immediate “point of impact” by tracking several rounds of spillovers likely to arise from direct financial linkages.5

The starting point of any network analysis is the construction of a matrix of inter-institution exposures that includes gross exposures among financial institutions (domestically or cross-country). The main difficulties in creating a comprehensive, cross-border matrix include the fact that data may only be available to national supervisors and that some of the information is not collected or published on a systematic basis.6 For instance, although banks typically report broad exposures to other institutions or countries, data on bilateral exposures are not publicly available and may be disclosed exclusively to financial regulators, and only upon request. In order to circumvent these limitations, researchers have often complemented the available data with interpolations or estimations by different methods.7 Once an exposure matrix is in place, analysts simulate shocks to specific institutions and track the domino effect on other institutions in the network, as shown in Figure 2.1.
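The interpolation step mentioned above can be made concrete. One method commonly used in this literature, shown here as an illustrative sketch rather than the chapter's own procedure, is iterative proportional fitting (also known as RAS), which rescales a maximum-entropy-style prior until it matches each banking system's reported aggregate lending and borrowing; the totals below are invented for illustration and do not correspond to any country's data.

```python
import numpy as np

def fit_exposure_matrix(row_totals, col_totals, iters=500):
    """Estimate bilateral interbank exposures consistent with observed
    aggregate lending (row) and borrowing (column) totals via iterative
    proportional fitting, starting from a uniform prior with a zero
    diagonal (a bank does not lend to itself)."""
    n = len(row_totals)
    X = np.ones((n, n)) - np.eye(n)  # uniform prior, no self-lending
    for _ in range(iters):
        X *= (row_totals / X.sum(axis=1))[:, None]  # rescale to row sums
        X *= (col_totals / X.sum(axis=0))[None, :]  # rescale to column sums
    return X

# Illustrative totals only: each system's reported total interbank
# lending and borrowing (the bilateral breakdown is unobserved).
lending = np.array([10., 5., 5.])
borrowing = np.array([8., 6., 6.])
X = fit_exposure_matrix(lending, borrowing)
```

Each pass rescales rows and then columns, so the fitted matrix reproduces the reported aggregates while spreading bilateral exposures as evenly as those totals allow.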

A Simple Interbank Exposure Model

To illustrate how network analysis is deployed to assess potential systemic interbank linkages, this chapter considers two shocks: (1) a credit event in which the initial default by an institution may trigger additional rounds of defaults, and (2) a credit-plus-funding event in which the default of an institution also causes a liquidity squeeze to those institutions funded by the defaulting institution (i.e., the credit shock is compounded by a funding shock). (See Box 2.1 for a detailed explanation of the simulation methodology.)

Because individual institution exposure data are not available to the International Monetary Fund (IMF), the chapter uses cross–country bilateral exposures published in the Bank for International Settlements’ (BIS) International Banking Statistics database for March 2008, which reflects the consolidated foreign exposures of BIS reporting banks.8,9 The BIS compiles these data in two formats: (1) on an immediate borrower basis, and (2) on an ultimate risk basis. The former are consolidated by residency of the immediate borrower, whereas the latter are consolidated by residency of the ultimate obligor (i.e., the party that is ultimately responsible for the obligation in case the immediate borrower defaults).10 We restrict our analysis to aggregate interbank credit exposures with a special focus on immediate borrower basis data for March 2008.11

Figure 2.1.

Network Analysis: A Diagrammatic Representation of Systemic Interbank Exposures

Source: IMF staff. Note: This figure depicts the dynamics of the network analysis. Starting with a matrix of interbank exposures, the analysis consists of simulating shocks to a specific institution (the trigger bank) and tracking the domino effect to other institutions in the network.

Credit Shock and Transmission

To illustrate the analysis of a credit shock using network analysis, the chapter simulates, one at a time, the default of each country's banking system on its cross-border interbank obligations and then tracks the domino effects triggered by this event. For simplicity, it is assumed that a country's banking losses are fully absorbed by its capital, and a country's banking sector is said to fail when its collective (aggregate) capital is not sufficient to fully cover the losses incurred upon the default of its cross-border interbank claims.
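The cascade just described can be sketched in a few lines of Python. All figures below are illustrative placeholders rather than the BIS data used in the chapter, and the convention that `X[i, j]` denotes lender i's claim on borrower j is an assumption of this sketch.

```python
import numpy as np

def credit_cascade(X, k, trigger, lam=1.0):
    """Simulate a pure credit shock: the trigger defaults, and in each
    round any bank whose cumulative losses exceed its capital fails too.

    X[i, j] : exposure of lender i to borrower j (illustrative units)
    k       : capital of each bank
    trigger : index of the initially defaulting bank
    lam     : loss given default (lambda in the text)
    Returns the set of failed banks and the number of contagion rounds.
    """
    n = len(k)
    failed = {trigger}
    rounds = 0
    while True:
        # each surviving bank's losses on its claims on all failures so far
        losses = lam * X[:, sorted(failed)].sum(axis=1)
        newly = {i for i in range(n)
                 if i not in failed and losses[i] > k[i]}
        if not newly:
            return failed, rounds
        failed |= newly
        rounds += 1

# Toy 4-bank system: bank 0's default wipes out bank 1, whose failure
# then brings down bank 2 in a second round of contagion.
X = np.array([[0., 0., 0., 0.],
              [5., 0., 0., 0.],   # bank 1 lent 5 to bank 0
              [1., 4., 0., 0.],   # bank 2 lent 1 to bank 0, 4 to bank 1
              [1., 1., 1., 0.]])
k = np.array([2., 4., 4.5, 10.])
failed, rounds = credit_cascade(X, k, trigger=0)
```

Note that bank 2 survives the first round but not the second: the failure condition is cumulative, exactly as in the chapter's multi-round domino exercise.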

It is important to emphasize that this hypothetical experiment envisioning a country’s banking system defaulting on its foreign exposures is extreme and highly unlikely.12 In addition, the experiment does not consider risk transfers among banking sectors due to lack of data, and also because accounting for this protection properly would require an analysis of the underlying counterparty risks, which is beyond the scope of this chapter. The main objective of this exercise is to provide an illustration of the value of network analysis for surveillance purposes; the analysis of further hypothetical experiments, with perhaps more realistic assumptions, is left for future work.

Simulation 1 Results

The first simulation focuses on the transmission of a pure credit shock assuming that all institutions are able to roll over their funding needs.13 The results of these simulations are reported in Table 2.2. It is important to highlight that in addition to identifying potential failures, network analysis also helps in estimating the amount of capital losses after all aftershocks have taken place. Not surprisingly, given the size of the U.K. and U.S. banking sectors, what emerges from this exercise is that those two banking systems are the largest systemic players. As of March 2008, the hypothetical default of the U.K. and the U.S. systems on their interbank foreign claims would have led to losses—after all contagion rounds—of 44.6 and 80 percent, respectively, of the combined capital in our universe of banking systems.

Figure 2.2.

Network Analysis: Number of Induced Failures

Sources: Bank for International Settlements; and IMF staff estimates.

The second and third columns in Table 2.2 indicate the number of induced failures and the number of contagion rounds (the aftershocks) triggered by each hypothetical failure. The failure of the U.K. banking system would trigger the downfall of seven additional banking systems in three rounds of contagion (see also Figure 2.2). Similarly, the failure of the U.S. banking system would trigger the failure of 10 additional banking systems in four rounds of contagion (Figure 2.2).

Network Simulations of Credit and Liquidity Shocks

This box outlines the mechanics of the simulations of credit and liquidity shocks in the network model.

To assess the potential systemic implications of interbank linkages, a network of N institutions is considered. The analysis starts with the following stylized balance sheet identity of a financial institution:

$$\sum_{j} x_{ji} + a_i = k_i + b_i + d_i + \sum_{j} x_{ij},$$
where $x_{ji}$ stands for bank i's loans to bank j, $a_i$ stands for bank i's other assets, $k_i$ stands for bank i's capital, $b_i$ stands for bank i's long-term and short-term borrowing (excluding interbank loans), $x_{ij}$ stands for bank i's borrowing from bank j, and $d_i$ stands for deposits.

To analyze the effects of a credit shock, the chapter simulates the individual default of each one of the N institutions in the network, and then tracks the domino effects resulting from each specific failure. More specifically, for different assumptions of loss given default (denoted by the parameter λ), it is assumed that bank i’s capital absorbs the losses on impact, and then we track the sequence of defaults triggered by this event. For instance, after taking into account the initial credit loss stemming from the default of institution h, the baseline balance sheet identity of bank i becomes:

$$\sum_{j} x_{ji} - \lambda x_{hi} + a_i = (k_i - \lambda x_{hi}) + b_i + d_i + \sum_{j} x_{ij},$$
and bank i is said to fail when its capital is insufficient to fully cover these losses (i.e., when $k_i - \lambda x_{hi} < 0$); these losses are depicted in light green in the figure.1

To analyze the effects of a credit-and-funding shock scenario, it is assumed that institutions are unable to replace all the funding previously granted by the defaulted institutions, which, in turn, triggers a fire sale of assets. Thus, we study the situation where bank i is able to replace only a fraction (1 – ρ) of the lost funding from bank h, and its assets trade at a discount (i.e., their market value is less than their book value), so that bank i is forced to sell assets worth $(1+\delta)\rho x_{ih}$ in book value terms.2 The chapter assumes that the funding-shortfall-induced loss, $\delta\rho x_{ih}$, is absorbed by bank i's capital (figure). Thus, the new balance sheet identity for institution i is given by

$$\sum_{j} x_{ji} - \lambda x_{hi} - (1+\delta)\rho x_{ih} + a_i = (k_i - \lambda x_{hi} - \delta\rho x_{ih}) + b_i + d_i + \sum_{j} x_{ij} - \rho x_{ih}.$$
In closing, network analysis allows assessment of the domino effects of different types of shocks throughout the network of financial institutions.

Note: Juan Solé prepared this box. For more details on the network model and the simulation algorithm, see Chan–Lau, Espinosa, and Solé (2009a). 1 Subsequent rounds in the algorithm take into account the losses stemming from all failed institutions up to that point. 2 An alternative way to see this is the following. Let $\rho x_{ih}$ be the amount of funding that cannot be replaced. Let $p_1$ be the current market price for assets and let y be the quantity of assets sold, that is, $p_1 y = \rho x_{ih}$. Suppose that these assets had been bought at a higher price $p_0$; thus $\rho x_{ih} = p_1 y < p_0 y = \rho x_{ih}(1+\delta)$. Hence, it is possible to find a relationship between the parameter δ and the change in asset prices: $\delta = (p_0 - p_1)/p_1$; that is, δ is a parameter reflecting the degree of distress in asset markets. A higher δ reflects higher distress in markets.
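The box's credit-plus-funding mechanics can likewise be sketched in code. The orientation of `X` (rows as lenders, columns as borrowers) and all numbers are assumptions of this illustration, not taken from the chapter's data; setting ρ = 0 switches the funding channel off and recovers the pure credit shock.

```python
import numpy as np

def credit_funding_cascade(X, k, trigger, lam=1.0, rho=0.35, delta=1.0):
    """Joint credit and funding shock in the spirit of Box 2.1.

    For each surviving bank i, losses from the set of failed banks are
      lam * x_hi          (credit losses on claims on failed banks), plus
      delta * rho * x_ih  (fire-sale losses on funding lost from them:
                           only a fraction 1 - rho is replaced, and the
                           shortfall is covered by selling assets at a
                           discount delta).
    A bank fails when combined losses exceed its capital. X[i, j] is
    read as lender i's claim on borrower j, so column i holds bank i's
    borrowing.
    """
    n = len(k)
    failed = {trigger}
    rounds = 0
    while True:
        idx = sorted(failed)
        credit_loss = lam * X[:, idx].sum(axis=1)
        funding_loss = delta * rho * X[idx, :].sum(axis=0)
        newly = {i for i in range(n) if i not in failed
                 and credit_loss[i] + funding_loss[i] > k[i]}
        if not newly:
            return failed, rounds
        failed |= newly
        rounds += 1

# Toy 3-bank system (illustrative numbers): bank 1 lent 3 to bank 0,
# and bank 2 funds itself with 4 borrowed from bank 0.
X = np.array([[0., 0., 4.],
              [3., 0., 0.],
              [0., 0., 0.]])
k = np.array([1., 2.5, 1.2])

# Pure credit shock (funding channel off): only bank 1 is dragged down.
credit_only, _ = credit_funding_cascade(X, k, trigger=0, rho=0.0)
# With the funding channel on, bank 2's lost funding forces fire sales
# that exhaust its capital as well.
both, _ = credit_funding_cascade(X, k, trigger=0, rho=0.35, delta=1.0)
```

The contrast between the two runs mirrors the chapter's finding that adding the funding channel raises hazard rates even for systems with no direct credit exposure to the trigger.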

Interestingly, even when domino effects do not lead to systemic failures, network analysis provides a measure of the degree to which a financial system will be weakened by the transmission of financial distress across institutions (Table 2.3). For instance, an initial failure of Germany would produce a projected capital loss to Australian banks of only 0.2 percent of their initial capital, whereas the projected loss for Sweden would amount to 103 percent of initial capital, thus driving Swedish banks to hypothetical default.

The analysis can also help identify “vulnerable” spots. For example, while the United Kingdom and United States were identified as the most systemic systems (i.e., triggering the largest number of contagion rounds and highest capital losses), Belgium, the Netherlands, Sweden, and Switzerland are the banking systems with the highest hazard rates, defined as the number of times a banking system would have hypothetically failed (Table 2.2 and Figure 2.3).14 In other words, the banking systems of these countries are severely affected in at least three of the 15 simulations in which they were not the trigger.

As illustrated in Figure 2.4, an additional advantage of network simulations is that the path of contagion can be tracked. Consider the case of a hypothetical default of the U.K.'s cross-border interbank loans. Figure 2.4 features the ensuing contagion path. The exercise shows that Belgium, Ireland, the Netherlands, and Switzerland are affected in the first round. The combination of these five defaults is systemic enough to bring down Germany in the second round of contagion. Notice that although Germany was able to survive the initial U.K. failure, it is not capable of resisting the combined hypothetical failure of these five banking systems. By the third and final round, France would have also become a casualty.

Figure 2.3.

Network Analysis: Country-by-Country Vulnerability Level

Sources: Bank for International Settlements; and IMF staff estimates.

Credit–and–Funding Shock and Transmission

Under the credit–and–funding shock scenario, it is assumed that institutions are unable to replace all the funding previously granted by the defaulted institutions, thus triggering a fire sale of assets.15 The extent to which a bank is able to replace an unforeseen drop in interbank funding will depend on liquidity conditions in the money market. During the present crisis, for instance, complexity and opacity in interbank activities have made banks reluctant to support troubled counterparties or institutions perceived to be going through similar events, even if they were not. Interbank operations are typically undertaken under the assumption of abundant instantaneous liquidity in money and capital markets. However, when liquidity is tight and in the absence of alternative sources of funding, a bank may be forced to sell part of its assets in order to restore its balance sheet identity. The chapter studies the situation where a banking system is able to replace only a fraction of the lost funding and its assets trade at a discount (i.e., their market value is less than their book value), so that a bank is forced to sell assets with higher book value than market value.16

Under this scenario, a financial institution’s vulnerability not only stems from its direct credit exposures to other institutions, but also from its inability to roll over (part of) its funding in the interbank market, having to sell assets at a discount in order to reestablish its balance sheet identity.

Simulation 2 Results

This simulation considers the effects of a joint credit and liquidity shock, assuming a 50 percent haircut in the fire sale of assets and a 65 percent rollover ratio of interbank debt (Table 2.4). The simulation is meant to represent, in an admittedly stylized fashion, the liquidity squeeze that followed the credit event in the U.S. subprime mortgage market. Considering scenarios that compound different types of distress allows regulators to identify new sources of systemic risk that were previously undetected. Notice, for instance, that in our simulations, the combination of shocks increases the systemic role played by France as a provider of liquidity in addition to its importance as a recipient of funding: France now induces three hypothetical defaults, compared with none under the credit shock scenario. Similarly, the United Kingdom and the United States substantially increase their systemic profile.

Table 2.2.

Simulation 1 Results

(Credit Channel)

Source: IMF staff calculations.

Number of simulations in which that particular country fails.

Failures as a percent of the number of simulations conducted.

Notice also that the addition of the funding channel significantly raises the vulnerability of all banking systems, as measured by the hazard rate. This fact may help explain why numerous studies in the network literature, which focus mostly on credit events, have found little cause for concern in the systemic effects resulting from hypothetical credit events. Explicitly quantifying the implications of a liquidity squeeze can alter the picture on systemic failures. For example, in our simulations, the hazard rate for most countries increases severalfold. Table 2.5 features the distribution of capital losses after all contagion rounds have taken place. The fact that countries may contribute to further contagion rounds because of their inability to roll over their funding needs points to the need to consider the merits of interconnectedness-based liquidity charges. These potential risk-based charges could be assessed on institutions shown to be weakened by hypothetical liquidity squeezes. They could also be used to set up a liquidity emergency fund for financial institutions, as some have proposed.17

Table 2.3.

Post-Simulation 1 Capital Losses

(Capital impairment in percent of pre-shock capital)

Source: IMF staff calculations.
Figure 2.4.

Network Analysis: Contagion Path Triggered by the U.K. Failure

Summing Up

Our illustration of network analysis has highlighted its usefulness as a surveillance tool. For instance, this section has shown how it could track the reverberation of a credit event and a liquidity squeeze throughout the system. To be sure, the unfolding of a crisis will be a function of institutions' reactions and policy responses that could halt spillovers. Though not trivial, these elements can be added to the analysis going forward. Furthermore, although the chapter relied on aggregate BIS country banking data, central banks should consider assessing individual banking and other nonbank financial intermediary data to conduct this type of analysis. The analysis should be expanded to better track the systemic implications of liquidity squeezes such as the one witnessed in this crisis, since funding difficulties can occur before balance sheet insolvency. The analysis can also be expanded by simulating multiple initial defaults, taking into account the currency composition of cross-border lending, and integrating factors such as the imperfect integration of global money markets, heterogeneous resolution regimes, problems with credit default swap (CDS) clearing mechanisms, and so on. Importantly, in this connection, when a crisis extends beyond one jurisdiction, the unraveling of defaults in multiple jurisdictions may become further complicated by the existence of several bankruptcy regimes that would impose additional constraints and difficulties.

Table 2.4.

Simulation 2 Results

(Credit and Funding Channel)

Source: IMF staff calculations.

Number of simulations in which that particular country fails.

Failures as a percent of the number of simulations conducted.

Table 2.5.

Post-Simulation 2 Capital Losses

(Capital impairment in percent of pre-shock capital)

Source: IMF staff calculations.

The Co–Risk Model

The previous subsection featured a methodology well suited to analyze the systemic effects of financial institutions' direct linkages, such as those typically generated in the interbank market. However, from a financial stability and risk management perspective, it may be equally critical to assess direct and indirect financial linkages at an institutional level, which may arise from exposure to common risk factors such as the adoption of similar business models (e.g., similar risk management systems or portfolio holdings), common accounting practices across financial institutions, the market's perception of financial institutions' coincidence of fortunes, and other factors. One method to extract this information consists of tracking the market's perception, usually reflected in securities prices, of how the credit risk of one institution affects other institutions' credit risk. As pointed out by Brunnermeier and others (2009, p. 5), “It may be that the best way to assess the implications of endogenous risk is via new endogenous co-risk measures that measure the increase in overall risk after conditioning on the fact that one bank is in trouble.”

The data at the core of most methodologies that estimate co-risk (or co-movement) in the credit risk of financial institutions include institutions' CDS spreads, Moody's KMV expected default frequencies, corporate bond spreads, distance-to-default measures, and the value-at-risk (VaR) of their trading portfolio. Under efficient markets, co-movement of these variables should convey information on both direct and indirect linkages across financial institutions.

Importantly, the co-movements of financial institutions’ risk measures do not exhibit a linear pattern. That is, they increase more than proportionally with the increase in the level of risk. Therefore, analysts rely on a number of nonlinear methodologies to estimate these co-movements.18 One such methodology is extreme-value theory. Because of its focus on extreme (or tail) realizations, this methodology ignores the information content of a large portion of the data sample, a problem that becomes more acute the shorter the data sample.

This section presents an alternative to the use of explicit nonlinear models: quantile regression analysis. Most readers are familiar with standard regression analysis, which focuses exclusively on the mean relationship of the variables analyzed and thus provides incomplete information about what transpires under distress periods (which, by definition, represent large deviations from the mean). Quantile regression shifts attention from the mean of the conditional distribution to a higher quantile (percentile), permitting a more accurate estimation of the co-movements of financial institutions' risk factors (or co-risk estimates) that takes into account their nonlinear relationship, according to the methodology described in Box 2.2.19

The data for this analysis were compiled for the period from July 1, 2003 to September 12, 2008 and consist of daily five-year-maturity CDS spreads.20 Intuitively, when an institution's CDS spreads are in their 5th quantile (the left tail of their distribution), this suggests that these institutions are experiencing an extremely benign regime, and when the CDS spreads are at their 95th quantile (the right tail of their distribution), this suggests a distress regime. The U.S. institutions analyzed are AIG, Bank of America, Bear Stearns, Citigroup, Goldman Sachs, JP Morgan, Lehman Brothers, Merrill Lynch, Morgan Stanley, Wachovia, and Wells Fargo; the European institutions are Fortis, BNP Paribas, Société Générale, Deutsche Bank, Commerzbank, BBVA, Banco Santander, Credit Suisse, UBS, Barclays, and HSBC; and the Japanese institutions are Mitsubishi, Mizuho, and Sumitomo.21

Box 2.2. Quantile Analysis

This box describes a technique that examines how the default risk of an institution is affected by the default risk of another institution, after controlling for common sources of risk.

In statistical terms, the goal is to learn f(y|x,ß), the conditional distribution of the default risk of institution y, given the default risk of other institutions and common default risk factors, denoted by x, where ß represents a set of parameters that needs to be inferred. Ordinary least squares (OLS) is a useful technique for extracting this information. However, OLS can only provide information about the mean relationship across institutions’ default risk. Because this relationship is likely to be nonlinear, OLS has serious limitations.

Quantile regression is an alternative to explicit nonlinear or nonparametric models that can explain the apparent “nonlinearities” in the data. These nonlinearities are, to a large extent, associated with the differential response of the dependent variable under seemingly “different” regimes, which can be associated with different quantiles. Quantile regression, first introduced by Koenker and Bassett (1978), extends the OLS intuition beyond the estimation of the mean of the conditional distribution f(y|x,ß). It allows the researcher to “slice” the conditional distribution at the quantile of interest, τ, and obtain the corresponding cross-section of the conditional distribution fτ(y|x,ß).

Quantile regression makes it possible to evaluate the response of the dependent variable within particular segments of the conditional distribution. In a quantile regression, the parameters are obtained by solving an optimization program that uses the entire sample: the weighted minimization of the sum of residuals, yi − ξ(xi,ß), where the weights are given by the function ρτ,

ßτ = argminß Σi ρτ(yi − ξ(xi,ß)),

where yi is the dependent variable, ξ(xi,ß) is a linear function with the parameters ß associated with the exogenous variables xi, and ρτ(.) is a function that assigns a weight to each observation depending on the given quantile. More specifically, the function assigns a weight equal to the quantile τ if the residual is positive and a weight equal to τ − 1 if the residual is negative. The minimization can be solved using standard linear programming methods, and the covariance matrices are usually estimated using bootstrap techniques that are valid even if the residuals and explanatory variables are not independent (Koenker, 2005).
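The check function just described can be written in a few lines. The sketch below (illustrative, not from the chapter) verifies numerically that minimizing the ρτ-weighted residuals over a constant recovers the corresponding sample quantile:

```python
import numpy as np

def rho(u, tau):
    """Quantile-regression check function: weight tau for positive
    residuals and tau - 1 (in absolute value, 1 - tau) for negative ones."""
    return u * (tau - (u < 0))

# Minimizing the sum of rho over a constant recovers the sample quantile:
# here the 95th percentile of a synthetic residual series.
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
grid = np.linspace(-3, 3, 601)
losses = [rho(y - c, 0.95).sum() for c in grid]
c_star = grid[int(np.argmin(losses))]
print(round(c_star, 2), round(np.percentile(y, 95), 2))
```

The two printed numbers agree up to the grid resolution, which is the sense in which the τ/(τ − 1) weighting scheme targets the τth quantile rather than the mean.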

In this chapter, following Adrian and Brunnermeier (2008), the model specification below is estimated using quantile regression:

CDSi = ατ + Σk ßτ,k Rk + ßτ,j CDSj + ετ,

where the credit default swap (CDS) spread of institution i, CDSi, is expressed as a function of the CDS spread of institution j, CDSj, after correcting for the effect of common aggregate risk factors (denoted by Rk), such as business cycle indicators and market volatility, for different quantiles (τ). Therefore, the parameter estimates ßτ,j provide a measure of how firm j affects the credit risk of firm i (directly and indirectly) at different quantiles.

Furthermore, it is also possible to use the quantile regression estimated for the 95th quantile, that is, a quantile assumed to correspond to a distress period, to estimate a conditional co-risk measure analogous to the conditional value-at-risk measure introduced by Adrian and Brunnermeier (2008):

Conditional CoRisk(i, j) = 100 × [(α95 + Σk ß95,k Rk + ß95,j CDSj(95)) / CDSi(95) − 1],

where CDSi(95) and CDSj(95) are the CDS spreads of institutions i and j corresponding to the 95th percentile of their respective empirical samples, and α95, ß95,k, and ß95,j are the parameters of the 95th quantile regression.

In closing, by using the quantile regression technique and the co-risk measures, the tails of the distributions of defaults of pairs of institutions can be examined without ignoring important data influencing this relationship.

Note: Jorge Chan–Lau prepared this box. For more details on the quantile regression, see Chan–Lau, Espinosa, and Solé (2009b).

The set of independent variables includes the following:

  • A proxy for a general risk premium, computed as the difference between the daily return on the S&P 500 index and the three-month U.S. Treasury bill rate. At least in the United States, there is evidence that an increase in this spread is associated with increases in economy-wide default risk (see Vassalou and Xing, 2004; and Chan-Lau, 2007);

  • The slope of the U.S. yield curve, measured as the yield spread between the 10-year and the three-month U.S. Treasury rates (a proxy for a business cycle indicator);

  • A LIBOR spread, measured as the one-year LIBOR spread over the one-year constant-maturity U.S. Treasury yield (this spread is usually regarded as a measure of default risk in the interbank market);22

  • A proxy for the severity of the liquidity squeeze, measured as the yield spread between the three-month general collateral repo rate and the three-month U.S. Treasury rate; and

  • The implied volatility index (VIX) reported by the Chicago Board Options Exchange, a common proxy for general risk appetite.
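As a rough illustration of the estimation step, the sketch below runs a regression of the Box 2.2 type at the 50th and 95th quantiles on simulated data, solving the linear program described in the box directly. Everything here is a stand-in: the series, the coefficient values, and the number of factors are synthetic, not the chapter's CDS data or fitted estimates.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_reg(y, X, tau):
    """Quantile regression via the linear program in Box 2.2:
    minimize tau*sum(u_pos) + (1-tau)*sum(u_neg) s.t. X@beta + u_pos - u_neg = y."""
    n, p = X.shape
    cost = np.concatenate([np.zeros(p), np.full(n, tau), np.full(n, 1 - tau)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Synthetic stand-ins for the series described above.
rng = np.random.default_rng(42)
n = 1000
factors = rng.normal(size=(n, 2))                  # "common risk factors"
cds_j = np.abs(60 + 15 * factors[:, 0] + rng.normal(scale=15, size=n))
# Institution i's spread: noise widens with cds_j, so tail co-movement is stronger.
cds_i = 30 + 0.5 * cds_j + rng.gamma(2.0, 2 + 0.05 * cds_j)

X = np.column_stack([np.ones(n), factors, cds_j])
beta_50 = quantile_reg(cds_i, X, 0.50)
beta_95 = quantile_reg(cds_i, X, 0.95)
print("slope on cds_j, 50th vs 95th quantile:",
      round(beta_50[-1], 2), round(beta_95[-1], 2))

# Conditional co-risk (Box 2.2): evaluate the 95th-quantile fit at j's
# 95th-percentile spread, holding the common factors at their medians.
x_star = np.concatenate([[1.0], np.median(factors, axis=0),
                         [np.percentile(cds_j, 95)]])
co_risk = 100 * (x_star @ beta_95 / np.percentile(cds_i, 95) - 1)
print("conditional co-risk (percent):", round(co_risk, 1))
```

The steeper slope at the 95th quantile mirrors the pattern the chapter finds in Figure 2.5: co-risk is stronger in the distress regime than at the median.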

Figure 2.5 shows a scatter plot for the CDS spreads of AIG and Lehman from July 2003 to March 14, 2008. Notice that the scatter plot reveals a nonlinear relationship across the CDS spreads, thus suggesting the need for a nonlinear estimation technique such as quantile regression to extract co–risk measures. Figure 2.5 also presents the result of the quantile regression fit for AIG’s CDS spread as a function of Lehman’s CDS spreads, controlling for aggregate risk factors, and for different quantile (or percentile) levels, namely, the 5th quantile, the 50th quantile, and the 95th quantile. It is important to note that the codependence between the CDSs of AIG and Lehman Brothers, or co–risk, varies according to the regime. The slope of the quantile regression line becomes steeper the more distressed the regime is and indicates that co–risk is stronger during distress periods, a finding supported by earlier empirical studies.23

Estimated quantile regressions can be used to calculate conditional co-risk measures, described in detail in Box 2.2. From a risk management and regulatory perspective, conditional co-risk measures are more informative than unconditional risk measures because they provide a market assessment of the proportional increase in a firm’s credit risk induced, directly and indirectly, by its links to another firm. The most relevant conditional co-risk measures for regulatory and risk management purposes are those under tail events; the measures presented here are estimated at the 95th quantile, a threshold commonly used in VaR analysis.

Figure 2.5 provides some intuition for how the conditional co-risk estimates reported in Table 2.6 were computed. Consider the case where Lehman Brothers’ CDS spread was 293 basis points.24 Plugging this value into the 95th quantile regression yields an estimated AIG CDS spread of 463 basis points, whereas the observed 95th percentile CDS spread for AIG is only 225 basis points. With these elements, conditional co-risk measures can be obtained according to the formula in Box 2.2.
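Applying the Box 2.2 formula to these round numbers is a one-line computation (approximate, since the published estimates also hold the common risk factors at their observed values):

```python
fitted_aig = 463    # AIG spread (bps) implied by the 95th quantile regression
observed_aig = 225  # AIG's own 95th-percentile spread (bps)
co_risk = 100 * (fitted_aig / observed_aig - 1)
print(round(co_risk))  # roughly a doubling of AIG's tail credit risk
```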

Figure 2.5.

AIG and Lehman Brothers Default Risk Codependence

Sources: Bloomberg, L.P.; Primark Datastream; and IMF staff estimates. Note: This figure contains a scatter plot of the relationship between Lehman Brothers and AIG credit default swap (CDS) spreads. It also shows the quantile regression fit for the 5th, 50th, and 95th quantiles. In addition to CDS spread data, the quantile regression estimates include the effect of additional common risk factors. In order to obtain a two-dimensional figure, these additional variables are held constant; the figure is therefore an approximate two-dimensional representation of the quantile regressions.

A subset of institutions is presented in Tables 2.6 and 2.7, in which the rows feature the percentage change in the conditional credit risk (i.e., the increase in CDS spreads) endured by “locus” institutions and induced by the “source” institutions listed in the columns, when CDS spreads are high (at their 95th percentile). For instance, Table 2.6 shows that Citigroup’s CDS spreads being at their 95th percentile would have led to an increase of 135 percent in Bear Stearns’ CDS spread. Similarly, the table shows that the credit risk of Lehman Brothers (listed in the sixth row in panel A) conditional on the risk of Citigroup (listed in the third column in panel A) is 103 percent higher than that corresponding to the 95th percentile of Lehman Brothers’ own CDS distribution, as estimated by the quantile regression, and so on.

As mentioned earlier, this type of analysis represents a useful surveillance tool, as it reveals which institutions are perceived to be more connected to each other. Figure 2.6 presents a graphical representation of some of the results in Table 2.6. The numbers associated with the outgoing arrows state the conditional co-risk measure, calculated from the 95th quantile regression, between the source and locus institutions. For instance, the risk of Bear Stearns conditional on the risk of AIG is 248 percent higher than that corresponding to the 95th percentile of Bear Stearns’ empirical distribution.25

Figure 2.6.

A Diagrammatic Depiction of Co-Risk Feedbacks

Sources: Bloomberg, L.P.; Primark Datastream; and IMF staff estimates. Note: This figure presents the conditional co-risk estimates between pairs of selected financial institutions. Only co-risk estimates equal to or above 90 percent are depicted. See Table 2.6 for further information.

Back in March 2008, these results (Figure 2.6) would have suggested the need to closely monitor AIG, Bear Stearns, and Lehman, given the markets’ perception of the considerable extent to which these institutions were affected by the fortunes of many of those in the sample of U.S. financial institutions during tail events. Interestingly, Table 2.6 indicates that in March 2008, the conditional co-risks from AIG and Lehman to the rest of the institutions in the sample were, on average (excluding Bear Stearns), 11 and 24 percent, respectively. By September 12, 2008, these estimates had jumped to 30 and 36 percent, respectively.

Table 2.6.

Conditional Co-Risk Estimates, March 2008

Source: IMF staff calculations. Note: Each cell in the table reports the co-risk measure corresponding to the large complex financial institutions (LCFIs) listed in the rows (e.g., LCFI “A”) and conditional on the LCFIs listed in the columns (e.g., LCFI “B”). The co-risk measure of A conditional on B is calculated as the percent difference between A’s estimated credit default swap (CDS) spread and A’s observed CDS spread at the 95th empirical percentile. The estimated CDS spread of A is obtained by using B’s 95th empirical percentile CDS spread as an input in the 95th quantile regression of A on B. For instance, the co-risk measure of 39 percent for JPMorgan Chase & Co. conditional on Bear Stearns implies that the CDS spread of JPMorgan Chase & Co., at its 95th percentile value, increases by 39 percent if the CDS spread of Bear Stearns is at its 95th percentile value. The larger the co-risk measure, the more vulnerable is LCFI A to LCFI B.

The Distress Dependence Matrix

In the method above, institutions are examined pair by pair (conditioning on a set of common variables), and one institution’s co-risk measures versus each of the others are averaged across the sample to gauge its connection to the “system.” Another, more encompassing, method of examining the relationships within a group of institutions, before focusing on pairs, accounts for the relationships among the whole group implicitly by first estimating a multivariate distribution of their asset returns. This multivariate density can capture both linear (correlation) and nonlinear interdependence among all the financial institutions (due to their direct and indirect links) and its changes over the economic cycle. A general model for doing so is discussed in Chapter 3.26 Having obtained this joint probability distribution of distress across a number of institutions, it is possible to “slice” the multivariate distribution to estimate sets of pairwise conditional probabilities of distress. That is, it is possible to estimate the probability of a financial institution experiencing distress conditional on another institution being in distress. The collection of all such pairwise probabilities forms the distress dependence matrix.

Table 2.8 shows the (pairwise) conditional probabilities of distress of the institution in the row, given that the institution in the column falls into distress, implicitly taking into account the distress probabilities of the remaining institutions.27 The matrix of bilateral distress dependencies can be computed daily to track how conditional probabilities of distress evolve. Three dates are chosen: a pre-crisis date (July 1, 2007); a month before the Lehman bankruptcy (August 15, 2008); and the day before Lehman Brothers filed for bankruptcy (September 12, 2008). As Table 2.8 indicates, distress dependencies signaled that the market expected a default of Lehman to cause significant disruptions to the system. Specifically, the probability of default of any other bank conditional on Lehman falling into distress rose from 22 percent on July 1, 2007 to 37 percent on September 12, 2008 (column-average for Lehman). A similar effect on the system would have been caused by the distress of AIG, since the probability of default of any other bank conditional on AIG falling into distress rose from 20 percent on July 1, 2007 to 34 percent on September 12, 2008 (column-average for AIG). The results also suggest that up to a month before the Lehman event, distress dependencies were already signaling that a default of Lehman or AIG would cause significant disruptions to the system.
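A distress dependence matrix of this kind can be assembled from any estimate of the joint distress distribution. In the sketch below (purely illustrative: a one-factor Gaussian latent model stands in for the chapter's CDS-implied multivariate density, and the institution names are placeholders), entry (i, j) is the probability that institution i is distressed given that institution j is:

```python
import numpy as np

rng = np.random.default_rng(7)
names = ["A", "B", "C"]
n_inst, n_draws = len(names), 200_000

# One-factor latent model: a common systematic shock plus idiosyncratic
# noise; "distress" means landing in the worst 5 percent of own outcomes.
common = rng.normal(size=n_draws)
latent = 0.6 * common[:, None] + 0.8 * rng.normal(size=(n_draws, n_inst))
distress = latent > np.percentile(latent, 95, axis=0)

# ddm[i, j] = P(i distressed | j distressed) = P(i and j) / P(j).
joint = distress.T.astype(float) @ distress.astype(float) / n_draws
p_marg = distress.mean(axis=0)
ddm = joint / p_marg[None, :]

for name, row in zip(names, ddm):
    print(name, np.round(row, 2))
```

The diagonal is 1 by construction, and positive dependence on the common factor pushes every off-diagonal entry well above the 5 percent unconditional distress probability, which is exactly the gap the table above is designed to expose.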

Table 2.7.

Conditional Co-Risk Estimates, September 2008

Source: IMF staff calculations. Note: Each cell in the table reports the co-risk measure corresponding to the large complex financial institutions (LCFIs) listed in the rows (e.g., LCFI “A”) and conditional on the LCFIs listed in the columns (e.g., LCFI “B”). The co-risk measure of A conditional on B is calculated as the percent difference between A’s estimated credit default swap (CDS) spread and A’s observed CDS spread at the 95th empirical percentile. The estimated CDS spread of A is obtained by using B’s 95th empirical percentile CDS spread as an input in the 95th quantile regression of A on B. The larger the co-risk measure, the more vulnerable is LCFI A to LCFI B.

This is revealed by the probability of default of any other bank conditional on Lehman or AIG falling into distress, which had already reached 41 and 30 percent, respectively, on August 15, 2008 (column-averages for Lehman and AIG). These results are consistent with those found in the co-risk analysis.28

Table 2.8.

Distress Dependence Matrix

(Pairwise conditional probability of distress)

Sources: Bloomberg L.P.; and IMF staff estimates. Note: This table shows the (pairwise) conditional probabilities of distress of the institution in the row, given that the institution in the column falls into distress.

Using equity options data and an alternative method for calculating multivariate distributions across groups of financial institutions (also featured in Chapter 3), a comparable exercise provides a way to examine the relationship between two (or three) financial institutions at a time, against the backdrop of distress in a number of institutions.29 As an example, the risk perception of the three largest banks in the United States and in Europe is shown to have become more intertwined as their exposure to large common shocks has increased (Figures 2.7 and 2.8).

Figure 2.7.

U.S. and European Banks: Tail-Risk Dependence Devised from Equity Option Implied Volatility, 2006–08

(At-the-money, six months to expiration)

Note: The figure shows the trivariate extreme value dependence of implied volatility of equity options for U.S. and European banks.
Figure 2.8.

Legend of Trivariate Dependence Simplex

(For Figure 2.7)

It is important to note that it would be inappropriate to base policy on the information contained in any one method, even though the analysis above can provide useful insight into how distress in a specific institution can affect other institutions and, ultimately, the stability of the system. Rather, policymakers should use these methods in combination with other concrete measures of systemic linkages to assist them in making decisions about individual institutions.

Finally, levying a capital surcharge based on the degree of interconnectedness could help align the incentives of institutions’ management with those of the authorities in charge of safeguarding financial stability. For example, regulators could have used the information in Figure 2.6 or Table 2.8 to approach AIG, Bear Stearns, and Lehman to request a capital surcharge based on their significant exposure to the fortunes of other financial institutions.30 Such a surcharge would also give management an incentive to reduce the institution’s vulnerability to other institutions, for instance by reducing direct counterparty exposures or by adopting trading and/or asset allocation strategies different from those of other institutions. By differentiating itself, a financial institution can avoid spillovers from negative market sentiment. Furthermore, the more different financial institutions are, the less vulnerable they are to herd behavior and to common shocks, which makes the financial system more resilient to a liquidity crisis (Persaud, 2003).

The Default Intensity Model

The previous two subsections presented methodologies to extract the implications of direct and indirect systemic linkages for the U.S. banking system. However, from a financial stability perspective, it may be equally critical to assess indirect financial linkages, including those to the broader economy. For example, the failure of Lehman Brothers illustrates how the collapse of an institution can trigger distress in other entities through the complex web of contract relationships. At some point, however, it is not just the knock–on effects of individual institutions for the remaining institutions in the financial system that matter, but their interaction through their impact on the economy as a whole. This section features a reduced–form statistical model of the timing of banking default events drawn from Giesecke and Kim (2009), which is designed to capture the effects of direct and indirect systemic linkages among financial institutions, as well as the regime–dependent behavior of their default rates.31

Box 2.3. Default Intensity Model Specification

This box presents a brief overview of the statistical default intensity model.

A sequence of economy-wide default times Tn represents the arrival times of defaults for the universe of Moody’s-rated companies. The value Nt is the number of defaults that have occurred by time t. The conditional default rate, or intensity, measured in defaults per year, is denoted by λt. We follow Giesecke and Kim (2009) and assume that the intensity evolves through time according to the continuous-time equation

dλt = Kt(ct − λt)dt + dJt,   (1)

where λ0 > 0 is the value of the intensity at the beginning of the sample period, Kt = KλTNt is the decay rate with which the intensity reverts to the level ct = cλTNt at t, and J is a response jump process given by

Jt = Σn max(γ, dλTn−) I(Tn ≤ t),

where I(Tn ≤ t) = 1 if Tn ≤ t and 0 otherwise. The quantities K > 0, c ∈ (0,1), d > 0, and γ > 0 are constant proportional factors, satisfying c(1 + d) < 1, to be estimated as described in Annex 2.1.

Equation (1) states that the default rate jumps whenever there is a default, reflecting the increase in the likelihood of further events. This specification incorporates the impact of a default on the surviving firms, which is channeled through direct and indirect systemic linkages. The magnitude of the jump depends on the intensity “just before” the event, which guarantees that the impact of an event increases with the default rate prevailing at the time of the event. Indeed, the impact of an event tends to be regime-dependent: it is often higher under generalized stress conditions. The parameter γ governs the minimum impact of an event. After the intensity is ramped up at an event, it decays exponentially toward the level cλTNt at rate KλTNt.

This model specification thus captures the regime–dependent behavior of default arrivals that can be estimated as described in Annex 2.1.

Note: Kay Giesecke prepared this box.

The model is formulated in terms of a default rate, or “intensity.” The details of the model are presented in Box 2.3. The default rate jumps at failure events, reflecting the increased likelihood of further events due to spillover effects. The magnitude of the jump is a function of the value of the default rate just before the event. This specification guarantees that the impact of an event increases with the default rate prevailing at the event, a property that is supported by empirical observation. Indeed, the impact of an event tends to be “regime-dependent”: it is often higher during a default clustering episode, when many firms are in a weak condition. The impact of an event dissipates over time.
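The jump-and-decay dynamics just described can be simulated in a few lines. The discretized sketch below uses illustrative parameter values, not the fitted estimates from Annex 2.1:

```python
import numpy as np

rng = np.random.default_rng(1)
K, c, d, gamma, lam0 = 2.0, 0.6, 0.3, 0.5, 5.0   # illustrative parameters
dt, years = 1 / 250, 10                          # daily steps over ten years
n_steps = years * 250

lam = lam_last = lam0     # lam_last: intensity level at the last event
n_defaults = 0
path = []
for _ in range(n_steps):
    if rng.random() < lam * dt:           # a default arrives this step
        lam += max(gamma, d * lam)        # jump scales with prevailing risk
        lam_last = lam
        n_defaults += 1
    else:                                 # mean-reverting decay toward c*lam_last
        lam += K * lam_last * (c * lam_last - lam) * dt
    path.append(lam)

print(f"{n_defaults} defaults over {years} years; final intensity {lam:.2f}")
```

Because each jump scales with the intensity prevailing just before the event (subject to the floor γ), defaults cluster: a default during a stressed regime raises the arrival rate of further defaults more than one occurring in a calm regime, while the condition c(1 + d) < 1 keeps the simulated intensity from exploding.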

This subsection estimates this model from historical default data spanning the period January 1, 1970 to December 31, 2008, using the estimation procedure described in Annex 2.1. The data were obtained from Moody’s Default Risk Service.32

To provide some intuition about the type of default events analyzed in this section, Figure 2.9 shows the annual number of U.S. economy-wide and banking-wide default events, along with the corresponding trailing 12-month default rates. The figure highlights the dramatic rise in defaults of Moody’s-rated banks during 2008. It is worth noting that while the absolute number of these defaults exceeds the number of events during the 1997–2001 “Internet bubble,” it is still below the number of defaults witnessed in the early 1990s.

Figure 2.9.

Annual Number of Corporate and Banking Defaults

Source: Moody’s Default Risk Service; and Giesecke and Kim (2009). Note: Top panel shows the annual number of default events in the universe of Moody’s-rated U.S. corporate issuers, along with the trailing 12-month default rate. Bottom panel shows the annual number of default events in the universe of Moody’s-rated U.S. banking institutions, along with the trailing 12-month default rate.

The first step in estimating the probability of systemic banking events consists of estimating the economy-wide default model discussed in Box 2.3. As described in Giesecke and Kim (2009), this model quite accurately captures the clustering of economy-wide default events, as represented by the fitted intensity (Figure 2.10), suggesting that the model’s out-of-sample forecasts are reliable.

Based on this model, a sampling approach described in Annex 2.1 is used to estimate the banking-wide default rate for the universe of Moody’s-rated issuers. Figure 2.11 depicts the time series of the quarterly one-year forecasts of the banking-wide default distributions in the United States.

Figure 2.10.

Actual and Fitted Economy Default Rates

(Number of defaults)

Source: Moody’s Default Risk Service; and Giesecke and Kim (2009). Note: The figure plots the fitted default intensity, measured in defaults per year, versus the number of economy-wide defaults. The figure illustrates the good fit of the default timing model in replicating the time-series variation of economy-wide event times.

The tail of the forecasted distribution indicates the likelihood of systemic risk arising from both direct and indirect linkages. That is, a fat tail represents the likelihood of the failure of a relatively large number of banking institutions. This measure of the degree of systemic risk increased sharply during 2008, and already exceeds the levels seen during the Internet bubble, suggesting a high probability of further banking failures (see the date axis in the bottom right corner of Figure 2.11).

Figure 2.11.

Default Rate Probability and Number of Defaults

(January 1998–January 2009)

Source: Giesecke and Kim (2009). Note: The figure shows a time series of quarterly forecast one-year distributions of the number of defaults in the U.S. banking sector, estimated from the fitted model for the banking-wide default rate.

The information contained in Figures 2.9 and 2.11 can be used to provide an indication of the potential future defaults that are still likely to take place as the current financial crisis continues to unfold. In particular, Figure 2.9 shows that the number of failures for the whole episode of the Internet bubble burst was substantially higher than the number of defaults observed thus far. On the other hand, Figure 2.11 depicts a fatter tail (i.e., a higher probability of a large number of defaults) for the current episode than for the Internet episode, thus indicating a high likelihood of further defaults in 2009 and beyond.

Finally, in order to provide a more precise metric of the potential system-wide and banking failures due to systemic linkages, the chapter considers the one-year 95 percent VaR of the distribution of default events for the economy at large and for the banking sector; that is, the number of Moody’s-rated corporate and bank defaults that would occur with a 5 percent probability, normalized by the number of firms in the pool at the beginning of each year since 1998 (Figure 2.12). During the 1998–2007 period, the banking sector proved more stable than the economy as a whole. However, the recent sharp parallel increase in the economy-wide VaR and the banking-sector VaR suggests a break with past feedback patterns, indicating that macro-financial linkages are now tighter, potentially complicating the policy response to financial sector problems.
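Given a forecast distribution of one-year default counts, the 95 percent VaR used here is simply the 95th percentile of the count, normalized by pool size. A minimal sketch (the negative binomial distribution and all numbers below are stand-ins for the model-implied forecast, chosen only because it has a fat right tail):

```python
import numpy as np

rng = np.random.default_rng(3)
pool_size = 600                      # illustrative number of rated banks
# Fat-tailed stand-in for the one-year-ahead default-count distribution:
draws = rng.negative_binomial(n=2, p=0.2, size=100_000)

var_95 = np.percentile(draws, 95)    # count exceeded with 5% probability
print(f"95% VaR: {var_95:.0f} defaults, {100 * var_95 / pool_size:.1f}% of pool")
```

A fatter right tail in the forecast distribution pushes this percentile up even when the mean number of defaults barely moves, which is why the chapter tracks the tail rather than the average.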

How Regulators Assess Systemic Linkages

Up to this point, the chapter has illustrated how four complementary methodologies could be deployed to assess systemic linkages. This section offers a brief overview of how some central banks rely on similar methodologies, as a number of central banks have developed and implemented frameworks to assess cross-market and cross-institution systemic linkages. The stage of development of these methodologies varies across countries, with many conducting such analyses only on an ad hoc basis. Several central banks are working on integrating the results of different methodologies with each other and with their broader macro-financial stability assessments.

Figure 2.12.

Quarterly One-Year-Ahead Forecast Value-at-Risk at 95 Percent Level

(In percent)

Source: Giesecke and Kim (2009). Note: The figure shows the time series of quarterly estimates of the one-year-ahead 95 percent VaR forecast of the number of defaults in the U.S. economy and the banking sector, normalized by the number of firms in the pool at the beginning of the year.

A number of central banks such as the National Bank of Belgium, Banco de México, Swiss National Bank, Deutsche Bundesbank, De Nederlandsche Bank, Oesterreichische Nationalbank, and the Bank of England conduct network analysis on a regular basis with a view to identifying institutions whose failure could have systemic implications. As mentioned in the previous section, the starting points of these analyses are banks’ large exposures and interbank credit activities. Relying on interpolation techniques, central banks construct domestic (and in some instances cross-country) exposure matrices that are used to analyze a series of hypothetical market and credit stress events, similar to the ones illustrated in the previous section.

For instance, Banco de México uses daily interbank exposures on loans, deposits, securities, derivatives, and foreign exchange operations to construct an interbank exposure matrix and carry out contagion exercises computing the effect of spillovers on the capital adequacy ratios (CARs) of other banks (Figure 2.13). Banco de México is thus able to assess which institutions would see their CAR levels fall below specific thresholds as a result of systemic events.
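A stylized version of such a contagion exercise fits in a short script. Everything below is illustrative: the three banks, exposures, capital, risk-weighted assets, loss-given-default, and the CAR failure threshold are all made up, with exposures[i, j] read as bank i's claim on bank j:

```python
import numpy as np

exposures = np.array([        # interbank claims, exposures[i, j] = i's claim on j
    [0.0, 20.0, 10.0],
    [5.0,  0.0, 25.0],
    [8.0,  4.0,  0.0],
])
capital = np.array([15.0, 18.0, 12.0])
risk_weighted_assets = np.array([150.0, 200.0, 120.0])
lgd = 0.65                    # fraction of an exposure lost on a default
car_floor = 0.04              # CAR threshold below which a bank fails

failed = np.zeros(3, dtype=bool)
failed[2] = True              # trigger event: bank 2 defaults
while True:                   # propagate aftershocks round by round
    losses = exposures[:, failed].sum(axis=1) * lgd
    car = (capital - losses) / risk_weighted_assets
    newly_failed = (car < car_floor) & ~failed
    if not newly_failed.any():
        break
    failed |= newly_failed

print("failed banks:", np.where(failed)[0], "final CARs:", np.round(car, 3))
```

With these numbers the initial default of bank 2 pushes bank 1 below the threshold, and bank 1's failure in turn sinks bank 0: a two-round contagion chain of the kind the exercises above are designed to detect.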

Figure 2.13.

Capital Adequacy Ratios (CARs) After Hypothetical Credit Shocks

(Number of banks)

Source: Banco de México. Note: The figure shows banks’ capital adequacy ratios under each hypothetical credit shock scenario after all aftershocks are taken into account. The figure shows up to 12 banks, but the full sample comprises 41 banks.

Most countries rely on more than one methodology to assess systemic linkages and differ in the degree to which they integrate them with other approaches. For instance, the Central Bank of Austria (Oesterreichische Nationalbank) has developed the systemic risk monitor model, which combines individual and systemic aspects of banks’ risk by integrating the impact of market and credit risk drivers for individual banks with the risk of interbank contagion within the Austrian banking system (Figure 2.14).33

Figure 2.14.

Basic Structure of the Systemic Risk Monitor Model

Source: Central Bank of Austria (OeNB).

Similarly, the Monetary Authority of Singapore, Deutsche Bundesbank, and Banco de México combine detailed network analyses with an assessment of the risk implications of banks’ common exposures to different variables and sectors (i.e., an analysis reminiscent of the assessment of indirect linkages featured in the previous section of this chapter). In addition, the analysis of banks’ common exposure allows these regulators to conduct regular stress tests of their banking systems. De Nederlandsche Bank has developed cross–institution contagion models for both the banking and the insurance sectors. The latter allows for simulating the effects of insurer and reinsurer defaults on other institutions in the sector. De Nederlandsche Bank has also modeled the cross–sector correlations.

Some central banks have exploited the information extracted from their systemic linkages and codependence analyses to create several indicators of financial stability, such as the evolution of systemic risk under alternative loss–given–default (LGD) assumptions (carried out by the National Bank of Belgium), or the Deutsche Bundesbank’s diversification index.

Finally, some central banks, like the Bank of England, incorporate their systemic linkages analysis into a more ambitious macro-financial framework. Specifically, the Bank of England is developing the risk assessment model for systemic institutions (RAMSI) to sharpen its assessment of institution-specific and system-wide vulnerabilities (Figure 2.15; Aikman and others, forthcoming). RAMSI considers interbank linkages and macro-banking linkages by analyzing three areas of interconnectedness: funding feedbacks, asset fire sales, and a real sector-financial sector feedback loop. The analytical foundations of RAMSI draw from the stress-testing literature, which allows the model to focus on credit risk, and from the network literature, which enables the model to consider the systemic effects of financial shocks.

Figure 2.15.

RAMSI Framework

Source: Bank of England.

Several central banks have indicated that key data limitations hamper their analyses, including the fact that off-balance-sheet linkages (domestic and cross-border) cannot always be included in their interbank exposure matrices. Many central banks also lack a comprehensive data set, owing to limited disclosure on complex structured credit products, the challenges of collecting information on nonbank financial intermediaries (investment banks, insurance companies, hedge funds), and inaccurate measures of risk transfers. Furthermore, the lack of consistency in information disclosures complicates risk exposure assessments, both across institutions and across products. Thus, there is a distinct need for those overseeing systemic stability to receive more on- and off-balance-sheet data, including enough to assess cross-institutional linkages.

In addition, large–exposure data are reported only on a quarterly basis in some countries. Having to rely on quarterly data constitutes another limitation in a world in which the liquidity situation of a bank may deteriorate very rapidly. Finally, some central banks have had difficulties identifying the exact counterparty to a cross-border bank exposure. Typically, when there have been concerns about the potential risk stemming from this source, central banks have been able to identify it via additional communications with the relevant institution on an ad hoc basis.

Going forward, financial regulators should continue to develop ways to systematically collect and analyze these data. In addition, policymakers should give greater consideration to the hypothetical tail scenarios analyzed with these methodologies, lest they risk underestimating the probability of a tail event, a phenomenon that Haldane (2009) has dubbed “disaster myopia.” Moreover, the global dimension of the current crisis underscores the need to assess these exposures from a cross–border perspective, which would require further coordination and data sharing by national regulators. For example, the BIS is well suited to extend its data collection exercises to these data. The IMF could also play a role by analyzing such data in the context of its bilateral and multilateral surveillance roles.34

Basics of Over-the-Counter Counterparty Credit Risk Mitigation

A central counterparty (CCP) reduces systemic counterparty credit risk by applying multilateral netting. This box discusses key tools of over–the–counter counterparty credit risk mitigation, including netting and the collateralization of residual net exposures, and explains how a CCP reduces systemic counterparty risks.

An over–the–counter (OTC) contract is exposed to counterparty default risk prior to the contract’s expiration while it has a positive replacement value. In the absence of bilateral closeout netting, the maximum loss to a defaulted counterparty is equal to the sum of the individual contracts’ positive replacement values. The first figure shows two bilateral contracts. A owes B $5 on one contract, and is owed $10 by B on the second. A faces a $10 loss if B defaults.1

Closeout netting aggregates, upon a default, all exposures between the counterparties, so that contracts with negative values offset those with positive values. Hence, the total exposure associated with all contracts covered by the particular master agreement is reduced to the maximum of the sum of the replacement values of all the contracts and zero. A loses $5 if B defaults.2
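The arithmetic of the two regimes can be sketched in a few lines of code. This is a minimal illustration using the hypothetical $5/$10 contracts from the text, not any particular master agreement's terms:

```python
# Sketch of the box's bilateral example. Replacement values are from A's
# perspective: -5 (A owes B) and +10 (B owes A).

def gross_exposure(replacement_values):
    """Loss without netting: only positive replacement values count."""
    return sum(v for v in replacement_values if v > 0)

def netted_exposure(replacement_values):
    """Loss with closeout netting: max(sum of all replacement values, 0)."""
    return max(sum(replacement_values), 0)

contracts_with_b = [-5, 10]
print(gross_exposure(contracts_with_b))   # 10: A's loss if B defaults, no netting
print(netted_exposure(contracts_with_b))  # 5: A's loss with closeout netting
```

Without netting, only positive replacement values add to the loss; with closeout netting, negative values offset positive ones, floored at zero.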

The second figure shows contracts across four counterparties, all of whom have bilateral master agreements with each other that include bilateral netting. The numbers on the arrows indicate the net bilateral flow (A, B, C, and D, clockwise from the top left corner), and the number below each letter indicates the maximum counterparty exposure for that counterparty. Thus, E_D = $10, because both A and B owe D $5. Each counterparty faces a maximum counterparty default-related loss of either $5 or $10. C loses $10 if both A and D fail, and D is vulnerable to the simultaneous default of A and B. Hence, A and B should each provision against $5 of potential counterparty credit losses, and C and D should each provision for $10, for a total of $30, even though the maximum potential loss among all four is only $10.

Multilateral netting, typically operationalized via “tear–up” or “compression” operations that eliminate redundant contracts, reduces both individual and system counterparty credit risk. In this case, it could eliminate four contracts, eliminate all of A’s and B’s counterparty credit risk exposure, and leave C and D with $5 of maximum potential individual losses. The third figure shows the two possible post–netting configurations. The leftmost configuration eliminates the circular B → A → C → B flow, and replaces the B → D → C flow with a more direct B → C flow. The rightmost configuration just needs to eliminate the circular B → A → D → C flow. Using such tear–up operations, TriOptima’s TriReduce service eliminated about $30 trillion notional of credit default swap contracts in 2008.
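The four-party example can be replicated in a short sketch. The bilateral flows below are inferred from the text's description of the figure and are illustrative only:

```python
# Net bilateral flows (payer -> payee, dollars), inferred from the box's
# four-party figure: these are hypothetical illustrative amounts.
flows = {("B", "A"): 5, ("A", "C"): 5, ("C", "B"): 5,
         ("B", "D"): 5, ("D", "C"): 5, ("A", "D"): 5}

def bilateral_exposures(flows):
    """Max loss per party under bilateral netting only: the sum of what
    its counterparties owe it."""
    exp = {}
    for (_payer, payee), amt in flows.items():
        exp[payee] = exp.get(payee, 0) + amt
    return exp

def net_positions(flows):
    """Multilateral net position per party: receivables minus payables.
    After multilateral netting, only net creditors bear exposure."""
    net = {}
    for (payer, payee), amt in flows.items():
        net[payer] = net.get(payer, 0) - amt
        net[payee] = net.get(payee, 0) + amt
    return net

print(bilateral_exposures(flows))  # A and B: $5 each; C and D: $10 each ($30 total)
print(net_positions(flows))        # A and B net -5 (owe), C and D net +5 (owed)
```

Summing each party's receivables reproduces the $30 of gross provisioning; netting multilaterally leaves only the net creditors, C and D, exposed for $5 each.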

A sound CCP takes the multilateral netting principle a step further, and reduces the likelihood of knock–on failures by requiring the participants to post margin and by sharing losses among other clearinghouse members (see Box 2.5). Other typical arrangements include capital funds composed of clearing member contributions, accumulated profits, and transaction fee rebates (see Bliss and Steigerwald, 2006).

Note: John Kiff prepared this box.
1 See Bliss and Kaufman (2006) for more detail on OTC derivative collateral and netting. The figure assumes that the counterparties have signed a master agreement with the appropriate closeout provisions that covers both transactions. If they had not, B could “cherry pick” A by defaulting on its obligation to pay the $10 while insisting that A still pay the $5. In this case, A loses $15.
2 The exposure can be further reduced by requiring counterparties to post collateral (cash and highly rated liquid securities) against outstanding exposures, usually based on the previous day’s valuations. See CPSS (2007) and ISDA (2007) for a survey of recent OTC derivative counterparty credit risk exposure practices, including collateral policies. See CRMPG (2005, 2008) for guides to best practices.

It is also important to mention that the crisis has brought to the fore the need to complement ongoing stability analysis with key infrastructure changes. Among the most prominent efforts to mitigate over–the–counter counterparty credit risk have been the recent proposals for a central counterparty involving netting and the collateralization of residual net exposures (Boxes 2.4 and 2.5). This effort has centered on CDS exposures, but could be extended to other over–the–counter products once enough standardization is present.

Table 2.9.

Summary of Various Methodologies: Limitations and Policy Implications

[Table not reproduced here.]
Source: IMF staff.
Notes: Giesecke and Kim (2009); Segoviano and Goodhart (2009). The model can use probabilities of default estimated from alternative methods, not only credit default swap (CDS) spreads.

Policy Reflections

The current crisis reminds us that interconnectedness across institutions is present not only within the banking sector but, as importantly, also in the nonbank financial sector (investment banks, hedge funds, and other institutions). Specifically, the liquidity problems have demonstrated that rollover risk can spill over to the whole financial system, thus requiring a better understanding and monitoring of both direct and indirect linkages.

This chapter presented four complementary methodologies to assess potential systemic linkages across financial institutions (Table 2.9). The chapter has argued that there is a need to deepen our understanding of these linkages and has suggested how more refined versions of these complementary models could be used to strengthen surveillance and policy discussions, such as those on the perimeter of regulation. The task is complicated by several factors: the difficulties in securing information on cross-institution exposures, especially across borders, due in part to confidentiality agreements; the imperfect integration of global money markets, arising partly from heterogeneous resolution regimes; the difficulties in securing information on off-balance-sheet exposures and the opacity in assessing counterparty risk; and problems with CDS markets, which call for clearing mechanisms.

The chapter has argued that in addition to the ongoing efforts to mitigate counterparty credit risk, including through the mutualization of counterparty risk in a clearing facility, more attention should be paid to the systemic implications of liquidity squeezes and other stress events. The goal of the chapter has not been to provide figures associated with some level of systemic linkages. Rather, a key goal has been to feature the type of specific methods that authorities could use to concretely discuss the too-connected-to-fail problem. The chapter helps to inform policy initiatives, including in the areas of information gaps and the perimeter of regulation.

Information gaps. The chapter illustrates the importance of gathering data and monitoring cross–market and cross–country linkages and how this could assist a country’s supervisory and surveillance efforts.

  • The chapter showed, for example, how information on systemic linkages could help with questions such as the merit of capital charges based on systemic counterparty linkages, or of limiting an institution’s exposures. For instance, the co–risk measures or the distress dependence matrix can be used to assess the relative importance of individual institutions and could form the basis for a higher capital charge or for bilateral exposure limits. After all, market discipline is more likely to work when investors know that institutions will not be bailed out, which is only credible when those institutions are not too connected to fail.

  • Globalization means that it is close to impossible for a country by itself to undertake effective surveillance of potentially cross–border systemic linkages. Therefore, enhancing our understanding and monitoring of global systemic linkages requires stronger information-sharing agreements.

Perimeter of regulation. The chapter also provides a potential approach for considering how to maintain an effective perimeter of prudential regulation without unduly stifling innovation and efficiency. The chapter illustrates how network models can allow regulators to see which institutions are affected in subsequent rounds of spillovers and thus determine relative levels of supervision. Such an assessment would have to be conducted at regular intervals, as the structure of the network is likely to change over time. Similarly, the co–risk models or the distress dependence relationships can help policymakers better regulate institutions, for example by informing the design of capital surcharges to lessen the too-connected-to-fail problem.

In sum, monitoring global systemic linkages will undoubtedly become increasingly relevant, and thus the development of reliable tools for this task should proceed expeditiously. Going forward, the IMF can and should assume a more prominent global financial surveillance role, but in addition to strengthening its understanding of systemic linkages, it will need to improve its gathering of relevant data. New information-sharing agreements on cross–border financial exposures (including regulated and unregulated products and institutions) could strengthen the capacity of IMF members to provide it with the relevant data. In principle, such agreements could operate on a multilateral or bilateral basis and would ideally address both the domestic and cross-border dimensions. Information-sharing agreements will be effective to the extent that country authorities can collect additional data in order to monitor systemic risk. Such a data collection exercise should be prioritized based on a cost–benefit analysis, but it should include, at the very least, off–balance–sheet exposures and information on complex products.

A Central Counterparty as a Mitigant to Counterparty Risk in the Credit Default Swap Markets

This box discusses key features of a well–designed central counterparty (CCP), aspects particular to a credit default swap (CDS) CCP, and the factors for choosing between multiple CCPs versus a single CCP.1

A CCP facilitates standardization and multilateral netting, increases liquidity, and can improve the availability of price information, increasing the ability to value CDS products, and ultimately serves to mitigate risk. A CCP for standardized CDS contracts can reduce operational risks, especially those inherent in over–the–counter trades, such as backlogs of outstanding confirmations and unwinding positions in case of default that can spread across multiple counterparties. In addition, the mutualization of risk among clearing members provided by a CCP reduces hedging costs by eliminating the need for hedging bilateral exposure.

The lack of transparency about the net counterparty exposure in the CDS market can inflate the public perception of counterparty risk. For example, if the market had known in advance that the settlement of Lehman swaps would amount to only $5.2 billion of net funding obligations in the CDS market, according to the Depository Trust and Clearing Corporation, instead of the hundreds of billions in notional that were speculated, the financial markets might not have seen the same degree of turmoil in the fall of 2008. Thus, greater insight into CDS trading activity could reduce the uncertainties characteristic of the recent crisis.

Risk Management: Margining, Collateral, and Membership Requirements

While a CCP mitigates counterparty risk, it also concentrates risk and requires extensive risk management systems. Consequently, a CCP’s risk management processes, internal controls and operational risk procedures, and the adequacy of its back-up financial resources are key to ensuring that risks are contained. In addition, a CCP that clears CDS contracts should conduct stress tests with relevant shocks to its members.

A CCP typically uses margining as an instrument to reduce counterparty credit risk. Initial margin is the amount required to initiate a position; variation margin, which collects daily losses and pays out daily gains, is required to keep a position open. Payment flows thus account for intraday price movements, while variation margin changes account for end-of-day settling up, since variation margin is based on daily mark-to-market pricing; positions are liquidated if a variation margin call cannot be met. Riskier instruments should carry larger margins to account for the greater risk to which the CCP is exposed.

Margin requirements for less liquid instruments should incorporate the potential losses that might occur over a longer liquidation period following a default. Margining requirements should therefore account for risks of a particular product and elements such as sector risk and liquidity risk. The accurate calculation of margin requirements, or even an appropriate range of margin requirements, will be a key challenge to the new CDS CCPs due to the complexities in the pricing of these particular products.
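As a rough illustration of the variation-margin mechanics described above, the sketch below turns a daily mark-to-market series into margin calls. The price series is invented, and real CCP margin methodology (portfolio netting, liquidity add-ons) is far more involved:

```python
# Minimal sketch of daily variation margin: each day's call equals the
# change in the position's mark-to-market value. A negative number means
# the member suffered a loss that day and must pay that amount in.

def variation_margin_calls(mark_to_market):
    """Return the day-over-day changes in mark-to-market value."""
    calls = []
    for prev, curr in zip(mark_to_market, mark_to_market[1:]):
        calls.append(curr - prev)
    return calls

# Hypothetical position marked at 100, 97, 95, 99 over four days.
print(variation_margin_calls([100, 97, 95, 99]))  # [-3, -2, 4]
```

If a member cannot meet a negative call, the CCP would liquidate the position, as the box describes.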

Cash Settlement versus Physical Settlement in a CDS CCP

A CCP can facilitate settlement of contracts after an event of default. For credit derivatives contracts, there has been a decline of physical settlement in favor of cash settlement, and the use of International Swaps and Derivatives Association (ISDA) auction protocols has become standard practice in credit events, for the reasons cited below.

A feature of the CDS market is the settlement method in case of default, or credit event. With the occurrence of a credit event, there are two options for the settlement of CDS contracts—physical settlement or cash settlement.2 In the case of physical settlement, the protection buyer delivers the debt obligation (the cash instrument) of the reference entity and in return is paid the par value by the protection seller. In cash settlement, the protection seller pays the protection buyer the difference between par value and the market value of the debt obligation of the reference entity. However, the growth of the CDS market has resulted in a much larger notional value of CDS contracts than the outstanding value of the debt obligations. Cash settlement avoids possible failure in physical delivery due to a shortage in deliverable cash instruments.3
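The cash-settlement payoff can be written down directly. The notional and the 40 percent recovery rate below are hypothetical:

```python
# Illustrative sketch of CDS cash settlement: the protection seller pays
# par minus the post-default market value of the reference obligation.
# (Under physical settlement, the buyer would instead deliver the bond
# and receive par.) In practice the recovery rate would come from the
# ISDA auction; these numbers are invented for illustration.

def cash_settlement(notional, recovery_rate):
    """Seller's payment to the buyer: notional x (1 - recovery rate)."""
    return notional * (1.0 - recovery_rate)

# $10 million of protection with an auction-determined recovery of 40%.
print(cash_settlement(10_000_000, 0.40))  # seller pays $6 million
```

The payoff depends only on the auction price, not on sourcing deliverable bonds, which is why cash settlement avoids the short squeezes discussed in footnote 3.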

In light of the concentration of risk in a CCP, a smoothly operating settlement system is crucial for reducing any potential systemic consequences. Central counterparties’ use of cash settlement for CDS contracts would deter market manipulation and help avoid disruption in the settlement process. In March 2009, ISDA initiated its Auction Settlement Supplement and Protocol incorporating cash auctions into standard documentation for settling CDS contracts, i.e., “hardwiring” the ISDA settlement protocol into the contracts. While the ISDA–defined protocol provides for both auction and physical settlement, cash settlement has the benefit of minimizing price distortions. However, maximizing participation in the industry standard settlement mechanism for all CDS contracts is crucial.

Multiple CCPs versus a Single Central Counterparty

The CDS CCP ventures based in the United States and Europe have engendered some debate as to the optimal number of central counterparties.4 A single CCP would accomplish the largest reduction in systemic counterparty risk, benefit from economies of scale and a larger pool of counterparties and resource base, and limit opportunities for regulatory arbitrage and competitive distortions.5 The resulting concentration of operational risk would necessitate strong risk management processes and oversight. The U.S. approach is to allow for multiple CCPs, letting market forces determine the optimal number of CCPs in order to assure clearing services are provided efficiently. However, there are concerns that such an approach will be a “race to the bottom,” as each CCP fights for market share by economizing on risk management procedures, and lowering margining requirements and contributions to a guarantee fund.6

From a cross–border perspective, the systemic importance of a single CDS central counterparty for a domestic economy might lead authorities toward retaining the CCP under national regulatory and supervisory oversight so as to control or mitigate the impact on domestic financial stability. National authorities might be reluctant to oversee a global entity where jurisdictional disputes may arise. Nevertheless, a global CDS CCP would deliver the greatest overall reduction in counterparty risk. Thus, if a global CDS CCP is not established, then the development of separate CCPs should provide for the cross-border coordination of regulatory and supervisory frameworks to avoid regulatory arbitrage. These frameworks should ensure that linkages and clearing mechanisms are established across CCPs, without constraining the use of multiple currency transactions.
At present, there are various legislative, regulatory, and market proposals outstanding to deal with counterparty clearing organizations, which may affect issues such as the standardization and documentation of credit default swaps, and the responsibilities of counterparties and clearinghouse members, among others.

Note: Jodi Scarlata prepared this box.
1 For further discussion, see CPSS (2004, 2007).
2 A CDS credit event is a default event that results in payments by the protection seller to the protection buyer, concurrent with delivery requirements by the protection buyer. Typical credit events include bankruptcy of the reference entity or its failure to pay with respect to its bond or debt and, for some reference entities, restructuring.
3 To note, the notional amount of single–name CDS far exceeds the notional of physical cash bonds and can be potentially distorting. Bank for International Settlements data show CDS notional outstanding of around $57 trillion at end–June 2008 versus a gross market value of underlying securities of only $3.2 trillion for the same period. Further, a physical settlement could result in a short squeeze, as protection buyers purchase bonds to deliver for settlement, bidding up the bond price and thereby offsetting the gains on the CDS protection.
4 These include CME Clearing, Eurex Clearing, ICE Trust/ICE Clear Europe, and NYSE Liffe/LCH.Clearnet.
5 See Duffie and Zhu (2009) for discussion.
6 A guarantee fund compensates nondefaulting participants from losses suffered in the event of another participant’s failure to meet its obligations to the CCP.

Annex 2.1. Default Intensity Model Estimation35

This annex discusses the likelihood estimation of the default intensity model’s parameters.

The parameter vector to estimate is denoted by θ = (κ, c, δ, γ, λ0). The data consist of observations of economy-wide default times T_n during the sample period [0, t], which covers daily data from January 1970 to December 2008. The maximum likelihood problem for the default rate λ = λ_θ is given by

\[
\max_{\theta \in \Theta} \; \sum_{n:\, T_n \le t} \log \lambda_\theta(T_n) \;-\; \int_0^t \lambda_\theta(s)\, ds, \tag{1}
\]

where Θ is the set of admissible parameter vectors. For the model in Box 2.2, the likelihood in equation (1) can be calculated in closed form (see Giesecke and Kim, 2009).

The parameter estimates are as follows. The initial default rate at the beginning of the sample period in January 1970 is λ0 = 32.56 events per year. At an event, the default rate jumps by δ = 0.13 times the default rate just before the event. The minimum jump size is γ = 0.59 events per year. After an event, the default rate decays with time at rate κ = 0.11, to a level that is equal to c = 0.018 times the intensity at the previous event.

The model fits the event data well. This is indicated by Figure 2.6, which contrasts the fitted intensity with the observed economy–wide defaults during the sample period. The fitted intensity replicates the substantial time–series variation of economy–wide event times. Giesecke and Kim (2009) provide formal statistical tests that can be used to assess the model’s in- and out-of-sample fit.
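A hedged sketch of the fitted dynamics appears below. It simulates event times by Ogata's thinning, using the parameter estimates quoted above; this is an illustration of the intensity mechanics only, not the authors' estimation or simulation code, and details such as the interpretation of the reversion level follow this annex's verbal description:

```python
import math
import random

# Fitted parameters quoted in the annex (events per year where applicable).
LAMBDA0, DELTA, GAMMA, KAPPA, C = 32.56, 0.13, 0.59, 0.11, 0.018

def simulate_events(horizon_years, seed=0):
    """Simulate economy-wide default times under the self-exciting
    intensity: jump at each event by max(DELTA * rate, GAMMA), then decay
    at rate KAPPA toward C times the pre-event intensity (an assumption
    about the ambiguous 'intensity at the previous event')."""
    rng = random.Random(seed)
    events, t = [], 0.0
    level = LAMBDA0   # intensity immediately after the last event
    floor = 0.0       # reversion level between events
    last = 0.0        # time of last event
    while t < horizon_years:
        lam_bar = level  # decaying intensity never exceeds its post-event level
        t += rng.expovariate(lam_bar)            # propose next arrival
        lam_t = floor + (level - floor) * math.exp(-KAPPA * (t - last))
        if rng.random() <= lam_t / lam_bar and t < horizon_years:
            jump = max(DELTA * lam_t, GAMMA)     # jump with a minimum size
            floor = C * lam_t
            level = lam_t + jump
            last = t
            events.append(t)                     # accepted event (thinning)
    return events

print(len(simulate_events(1.0)))  # number of simulated defaults in one year
```

With an initial rate above 30 events per year, the simulation produces several dozen events per simulated year, matching the scale of the fitted intensity.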

The fitted model determines the conditional distribution of the number of economy–wide defaults during any future time period. This distribution is estimated by a Monte Carlo simulation of events. Here, arrivals over the forecast period are generated and used to calculate the corresponding empirical distribution. To obtain the distribution of events in a given sector, an additional step is needed: randomly assign a sector to each simulated economy–wide event time. A sector s ∈ S = {1, 2, …, 12} is selected with probability

\[
p_s \;=\; \frac{\sum_{n=1}^{N_\tau} w_n \, \mathbf{1}\{S_n = s\}}{\sum_{n=1}^{N_\tau} w_n},
\]

where the weights w_n increase in n. Here, N_τ is the number of defaults observed during the sample period and S_n ∈ S is the observed sector of the nth defaulter. More weight is assigned to recent observations, that is, to events that occur closer to the end of the sample period. With this procedure, the predictive power of default events is exploited even when they are associated with firms outside of the given sector.
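The sector-assignment step can be sketched as follows. The linearly increasing weights w_n = n are an assumption for illustration; the annex requires only that defaults observed later in the sample receive more weight:

```python
import random

def sector_probabilities(observed_sectors, num_sectors=12):
    """Probability of each sector, weighting the n-th observed default
    by w_n = n (an illustrative choice of increasing weights)."""
    weights = [0.0] * num_sectors
    for n, s in enumerate(observed_sectors, start=1):
        weights[s - 1] += n          # later defaults count more
    total = sum(weights)
    return [w / total for w in weights]

def draw_sector(observed_sectors, rng, num_sectors=12):
    """Randomly assign a sector to one simulated economy-wide event."""
    probs = sector_probabilities(observed_sectors, num_sectors)
    return rng.choices(range(1, num_sectors + 1), weights=probs)[0]

# Hypothetical sectors of five observed defaults, in time order.
history = [1, 3, 3, 7, 3]
print(sector_probabilities(history)[2])  # probability assigned to sector 3
```

Here sector 3 accounts for the second, third, and fifth observed defaults, so it receives weight (2 + 3 + 5) out of a total of 15.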


  • Adrian, Tobias, and Markus K. Brunnermeier, 2008, “CoVaR,” Staff Report No. 348 (New York: Federal Reserve Bank of New York).

  • Aikman, D., P. Alessandri, B. Eklund, P. Gai, E. Martin, S. Kapadia, N. Mora, G. Sterne, and M. Willison, forthcoming, “Funding Liquidity Risk in a Quantitative Model of Systemic Stability,” in Financial Stability, Monetary Policy, and Central Banking, Central Bank of Chile Series on Central Banking Analysis and Economic Policies, Vol. 14.

  • Allen, Franklin, and Ana Babus, 2008, “Networks in Finance: Network-Based Strategies and Competencies, Chapter 21,” Working Paper No. 08-07 (Philadelphia: Wharton School).

  • Allen, Franklin, and Douglas Gale, 2000, “Financial Contagion,” Journal of Political Economy, Vol. 108, No. 1, pp. 1–33.

  • Azizpour, Shariar, and