CoMap: Mapping Contagion in the Euro Area Banking Sector


Abstract

This paper presents a novel approach to investigate and model the network of euro area banks’ large exposures within the global banking system. Drawing on a unique dataset, the paper documents the degree of interconnectedness and systemic risk of the euro area banking system based on bilateral linkages. We develop a Contagion Mapping model fully calibrated with bank-level data to study the contagion potential of an exogenous shock via credit and funding risks. We find that tipping points shifting the euro area banking system from a less vulnerable state to a highly vulnerable state are a non-linear function of the combination of network structures and bank-specific characteristics.

“The financial crisis really was a stress test for the men and the women in the middle of it. We lived by moments of terror. We endured seemingly endless stretches when global finance was on the edge of collapse, when we had to make monumental decisions in a fog of uncertainty, when our options all looked dismal but we still had to choose” (Geithner, 2014: 19).

1. Introduction

The collapse of Lehman Brothers was the defining event of the Great Financial Crisis of 2007-2008. While the size of its balance sheet alone did not foreshadow the sequence of events that followed, the uncertainty stemming from its default surely left market participants in a panic over widespread contagion. Regulators, with limited information about its degree of interconnectedness and bilateral exposures, faced a true dilemma: let it fail or save it. Lehman was allowed to default, and the counterfactual outcome would be debated for years to come.

To focus the spotlight on interconnectedness and how it might contribute to systemic risk, we develop a contagion mapping (CoMap) methodology applied to the euro area Significant Institutions’ (SIs) network of large exposures within the global banking system. On the basis of supervisory reporting of large bilateral exposures, we construct arguably the most comprehensive euro area network of bilateral linkages to date and combine it with granular bank balance sheet information to capture bank-specific characteristics and related (regulatory) solvency and liquidity constraints.2

The CoMap methodology estimates contagion potential due to credit and funding risks via bilateral linkages. The main objective is to assess the amount of losses and number of defaults an exogenous shock to a bank induces to the system. In achieving this, the CoMap methodology evaluates first-round effects (direct losses) and subsequent-round effects (indirect losses) due to cascading defaults and potential fire-sale losses.

We then formulate contagion and vulnerability scores capturing counterparty credit and funding risks of an exogenous default shock so as to rank banks in terms of contribution to euro area systemic risk and their degree of fragility, respectively. The outcome is a practical policy tool that can be updated on a quarterly basis to map contagion risks stemming from within and outside the euro area banking system. Overall, the paper provides unique insights on the interplay of banks’ characteristics and the topology of the euro area interbank network.

Specifically, the methodology allows for a more granular, heterogeneous and holistic approach to the study of contagion risk in the euro area banking system. We cover 199 consolidated banking groups (of which 90 are from the euro area) as of September 2017 and track their bilateral debt, equity, derivative and off-balance sheet exposures that are in total larger than 10% of a bank’s eligible capital. We then incorporate bank heterogeneity by calibrating model parameters using exposure-specific information on pledged collateral, credit risk mitigation and maturity structure as well as bank-specific pools of HQLA and non-HQLA assets and capital requirements. Overall, the large exposures dataset covers in aggregate 90% of euro area banks’ RWAs vis-à-vis credit institutions, for a total amount of EUR 1.4 trillion in gross terms and EUR 680 billion in RWA terms. However, it is important to note that the global network we construct takes an EA-centric perspective, consistent with the underlying data on bilateral linkages based solely on EA banks’ reports. While we mitigate this limitation by incorporating data on the largest funding sources on banks’ liabilities, outward spillovers from EA banks to non-EA banks might still be substantially underestimated.

Furthermore, we calculate the contribution of amplification effects (beyond the initial loss) to the overall losses induced by a bank’s default or distress (amplification ratio), and we derive a sacrifice ratio indicator assessing the cost-benefit trade-off of a bank-bailout. Finally, we illustrate how our framework can be used to run counterfactual simulations showing how contagion risk can be reduced by fine-tuning prudential capital and liquidity measures.

Key findings highlight that the degree of bank-specific contagion and vulnerability depends on network-specific tipping points that directly affect the magnitude of amplification effects. It follows that the identification of such tipping points and their determinants is the essence of effective micro- and macroprudential supervision. In this respect, shifts in the network structure are associated with non-linear changes in contagion, even more so when these shifts take place in tandem with variations in banks’ solvency or liquidity characteristics. In a variety of tests, heterogeneity in the magnitude of bilateral exposures and of bank-specific parameters, which can only be captured with granular supervisory data, is detected as a key driver of the total number of defaults in the system. We also show that international spillovers coming from non-euro area banks are an important channel of contagion for the euro area financial system.

Overall, this paper aims at overcoming some of the data and modelling gaps in the interconnectedness literature by studying the degree of contagion and vulnerability of euro area SIs within the global banking system. In overcoming this challenge, we exploit the actual topology of the euro area interbank network of large exposures and account for the heterogeneous characteristics of individual banks via a set of bank and exposure specific parameters retrieved and calibrated on ECB proprietary supervisory data. This comprehensive data infrastructure allows us to build a detailed modelling framework capturing the specificities of prudential regulations such as minimum capital requirements, macroprudential capital buffers, the liquidity coverage ratio and large exposure limits and their interplay with credit, funding and fire-sale risks.

The remainder of the paper is organized as follows. Chapter 2 provides a review of how the literature has evolved. Chapter 3 presents the data infrastructure and details the topology of the euro area interbank network of large exposures. Chapter 4 presents the Contagion Mapping (CoMap) methodology and provides insights on the calibration of the model parameters. Chapter 5 discusses the results based on the contagion and vulnerability scores. Chapter 6 provides a multitude of sensitivity tests to determine non-linearities and to check the robustness of the results. Chapter 7 takes a policy perspective, showing how a range of macroprudential tools can be fine-tuned to reduce contagion, and derives policy implications. The last chapter concludes.

2. A Review of Network Analysis Literature

In the last decade, significant progress has been made in studying the growing interconnectedness of the global financial system and how shocks are amplified or mitigated depending on the network topology and the heterogeneity of the agents. However, up to now, uncertainty surrounding the network due to the lack of available information still represents the major challenge policy-makers and researchers face in assessing the potential cascading effects of an institution’s failure. This is the fundamental question, as highlighted in the opening quote of this paper, that policy-makers and regulators must answer and be prepared for in case such an adverse event takes place.

Motivated by this question, the systemic risk literature has evolved along two tracks since the seminal work of Allen and Gale (2000) on financial contagion. The first group of studies tries to get around the problem of limited information by relying on market data, such as Acharya et al. (2012, 2017), Billio et al. (2012), Diebold and Yilmaz (2014) and Cortes et al. (2018), among others. These market data-based studies allow for capturing financial institutions’ interconnectedness and for building systemic risk indices in real time by exploiting high-frequency information on co-movements of stock prices or CDS spreads. Nevertheless, the interpretation and identification of the underlying mechanism generating the co-movements may be difficult (Glasserman and Young, 2016). Moreover, the VAR approaches used to estimate variance decompositions of the forecast errors suffer from high-dimensionality problems, limiting the analysis to small samples of banks (Alter and Beyer, 2013; Diebold and Yilmaz, 2009, 2012). Only recently have Demirer et al. (2017), Basu et al. (2017) and Moratis and Sakellaris (2017) managed to estimate a high-dimensional network using LASSO methods or Bayesian VARX models. Although recent innovations in estimation techniques have allowed for larger sample sizes, these approaches still cover only a fraction of the banking system, as information on CDS and stock prices is limited to listed companies and the quality of the analysis is highly reliant on the depth of market trading. Furthermore, this branch of the literature does not allow for directly modelling the interplay of prudential regulations and systemic risk, since the former are only implicitly captured in the degree of co-movements of bank market prices.

Another stream of the interconnectedness literature hence exploits bilateral exposures and uses bank balance-sheet based methodologies.3 This approach allows for studying the underlying mechanism of systemic risk formation and contagion stemming from structural features of the network, the heterogeneity of the agents, the sources of risk, and their interplay. In general, balance sheet based studies have tended to focus on a few specific features so as to better disentangle the path of contagion and amplification effects due to credit risk (Eisenberg and Noe, 2001; Rogers and Veraart, 2013), funding risk (Gai and Kapadia, 2010; Gai et al. 2011), cross-holdings of assets and fire sales (Caballero and Simsek, 2013; Caccioli et al. 2014; Cont and Schaanning, 2017), as well as from multilayer networks (Bargigli et al., 2015; Kok and Montagna, 2016). Overall, these approaches are more theory-based than empirical, since they aim to provide insights on the properties of the network and their implications for financial stability rather than to construct contagion and vulnerability indicators for a systemic risk assessment as in the market-based approaches.

This is due, among other things, to the lack of availability of a complete set of bilateral exposures, which undermines the accuracy of such systemic risk indicators. In this respect, most of the empirical literature tends to focus on specific market segments (e.g., overnight or repo markets) or is country-specific, such as studies on the Austrian, German, Dutch and Italian interbank markets (Puhr et al. 2012; Craig and von Peter, 2014; Craig et al. 2014; Veld and van Lelyveld, 2014; Bargigli et al., 2015). Other studies try to compensate for the lack of network data by imputing missing bilateral linkages based on a maximum entropy technique, originally proposed by Sheldon and Maurer (1998). Their work has inspired many others to develop even more sophisticated versions, such as minimum entropy (Degryse and Nguyen, 2007; Elsinger et al., 2006; Upper, 2011), relative entropy (Van Lelyveld and Liedorp, 2006), or random networks consistent with partial information (Halaj and Kok, 2013; Anand et al., 2014). Overall, as emphasized by Glasserman and Young (2016), empirical work in this field was limited by the confidentiality of interbank transactions and the incomplete set of information on bilateral exposures. Moreover, these studies focused on rather standard network measures such as degree centrality, eigenvector centrality and PageRank algorithms to assess financial system vulnerabilities and the systemic importance of banks.

Additionally, as access to confidential supervisory data is granted at the national level, most empirical analyses tend to be country-specific. This has resulted in the lack of a comprehensive analysis of cross-border financial exposures, thereby missing bi-directional linkages with institutions outside a country’s jurisdiction. To a certain extent, Garratt et al. (2011) and Espinosa-Vega and Sole (2010) overcame this challenge by using the aggregate-level International Consolidated Banking Statistics database from the BIS to assess the cross-border credit and funding risks of a banking system’s default on another country’s banking system. In a recent assessment, the IMF’s 2018 Euro Area FSAP (IMF, 2018) appraises contagion risks for the euro area banking system based on the large exposures reporting of the 25 largest euro area banks with their intra- and extra-EA counterparts. However, none of these studies account for heterogeneity across banks, thereby ignoring the added value that a specific distribution of exposures and bank-specific characteristics may bring to the overall stability of the system.

Against this background, we contribute to the literature mainly in six directions. First, we construct a selection of contagion and vulnerability indicators assessing the systemic footprints of banks. These model-based estimates allow us to conduct welfare analysis trading off systemic losses due to bank failures and the cost of policy interventions. Second, we provide a calibration benchmark for parameters capturing three types of risks: credit, liquidity and fire sales. Third, we uncover evidence that liquidity risk is the major source of default in the interbank network of large exposures. Fourth, we perform stress test scenarios to assess the resilience of the network structure to large macro shocks. Fifth, we perform sensitivity analyses to changes in model parameters to assess the non-linear effects derived from the interplay of network structure and banks’ characteristics. Finally, we provide counterfactual exercises of prudential measures and their possible usages in reducing the vulnerability of the network.

3. A Novel Database on Bank Interlinkages

The Great Financial Crisis (GFC) of 2008 has led to a rethinking and strengthening of banking and financial regulation worldwide. These efforts culminated in the Basel III standards, the new legal framework aimed at shaping a safer financial system. This process led to the development of macro and micro prudential regulatory requirements enhancing banks’ capital and liquidity standards as well as defining leverage and large exposure limits.4 In this regard, the large exposures regulation was introduced as a tool to limit the maximum loss a bank could potentially incur in the event of a sudden counterparty failure so as to complement the existing risk-based capital framework (pillar 2), and to better deal with micro-prudential and concentration risks (BIS, 2014).5 Accordingly, banks are required to report to prudential authorities detailed information about their largest exposures.

The focus and novelty of this paper revolve around exploiting large exposures reporting in constructing a euro area interbank network, which can be used to map contagion and track systemic risk embedded in the network. The large exposure reporting represents, to our knowledge, the most comprehensive and frequently updated (on a quarterly basis) dataset capturing granular bank and exposure-level information of the euro area banking system vis-à-vis entities located worldwide, covering all economic sectors: credit institutions (CIs), financial corporations (FCs), non-financial corporations (NFCs), general governments (GGs), central banks (CBs) and households (HHs). In this paper, however, we focus primarily on the exposures vis-à-vis other credit institutions, i.e. the interbank network of large exposures.

Nevertheless, there exist several barriers to utilizing this supervisory data in network analysis. Because of the confidential nature of the data, access is generally restricted to banking supervisors and central banks. However, even for those with access to these reports, transforming raw data into a suitable format for network analysis is a laborious task with many challenges. The ECB is in a unique position where the supervisory data from member states are centrally accessible for monitoring purposes. While this wealth of information promises high potential, it is a colossal undertaking to reconcile this data across many jurisdictions and set up a euro area banking network of large exposures for a comprehensive systemic risk assessment.

The primary sources of data underlying our analysis are the statistics reported by euro area banks under the common reporting (COREP) and financial reporting (FINREP) frameworks.6 Counterparty-level data as well as other bank-level variables are obtained from a wide selection of templates under these reporting frameworks. The analysis focuses on a snapshot as of September 2017, consistent with the latest available supervisory data at the time of this exercise. The secondary sources of data are Bankscope and Bloomberg, which were used, respectively, to obtain bank-level variables of non-EA banks and to fill information gaps in reconciling the entities in the bank network.

3.1 Large Exposures

An exposure is considered a “large exposure” if, before applying credit risk mitigations and exemptions, it is at least 10% of an institution’s eligible capital vis-à-vis a single client or a group of connected clients (CRR, art. 392).7 Moreover, institutions that report FINREP supervisory data are also required to report large exposures information with a value above or equal to EUR 300 million. This results in an extensive sample coverage capturing almost EUR 13.5 trillion of gross exposures as of September 2017 (our reference date), more than 50% of euro area credit institutions’ total assets. In risk-weighted terms the coverage is smaller but still comprehensive, capturing almost 40% of the total RWAs of euro area banks. However, in terms of studying the euro area interbank network, which is the subject of our paper, the large exposures sample captures 90% of euro area banks’ RWAs vis-à-vis credit institutions. This extensive coverage gives us a high level of confidence that we can reliably model euro area banks’ degree of interconnectedness and their contribution to cross-sectional systemic risk.

It is also notable that the large exposures data go well beyond the standard unsecured interbank transactions typically covered in many interbank network studies. In fact, in the supervisory reporting, a large exposure is defined as any direct and indirect debt, derivative, equity, and off-balance sheet exposure that complies with the reporting threshold.8 In this regard, a key feature of the regulation is that the counterparty may be identified not only as an individual client, but also as a group of connected clients (CRR, art. 4:1:39).9 The latter refers to the fact that the reporting institution needs to assess and take into account not only isolated risks but also possible domino effects and negative externalities from funding shortfalls due to control relationships and economic interdependencies (EBA/GL/2017/15). This is a highly relevant feature and a value-added of the data because large exposures allow us to capture the consolidated exposures at risk. Moreover, in achieving this, a standardized evaluation method is applied so that the resulting final exposure amount is reliably comparable across countries and reporting institutions.10

3.2 Dataset

This subset of large exposures data captures almost completely the credit and funding risks of euro area SIs among themselves, and the credit risks of EA SIs vis-à-vis non-euro area banks. However, the large exposures dataset does not capture euro area SIs’ funding risks from non-euro area banks. In order to address this information gap, we retrieve data on the 10 largest funding sources of euro area SIs by using another COREP supervisory template defined as concentration of funding by counterparty (C.67).11 This template is encumbered by the same mapping problem as the large exposures template and therefore we also apply here the Stata Mapping Code. These 10 largest funding sources may come from central banks, governments, credit institutions, and corporates. Regarding our sample of counterparties, we find 41 funding exposures from non-euro area banks towards euro area SIs. We incorporate them into the large exposures dataset, matching them with the gross exposures before exemptions and credit risk mitigations. Next, we consolidate the large exposures data from euro area SIs that are euro area-based subsidiaries of non-euro area banks with these 41 funding sources from non-euro area banks.12 We then drop intra-group exposures and set a EUR 50 million threshold for exposures before credit risk mitigations (but after exemptions) to filter out small exposures.

On top of this, for modelling purposes, we retrieve granular information from other COREP and FINREP supervisory templates regarding euro area SIs’ various financial indicators and ratios.13 For international banks, we match our dataset to Bankscope for bank characteristics and, when missing, we use the most recent annual consolidated financial report.14 In the end, we limit our dataset to banks that have a complete set of information on the metrics described above. This yields our main dataset of 199 consolidated banking groups or nodes, whose total assets amount to EUR 74 trillion, approximately 6.6 times the GDP of the euro area.

Overall, this brings us to a total of 1,734 large exposures, for a total gross amount of EUR 1.38 trillion, or EUR 675 billion in risk-weighted assets. This subset of SIs’ large exposures to credit institutions covers almost 80% of the total gross amount. Furthermore, the selected sample of counterparties guarantees a complete coverage of LEI codes, country of domicile and sector of belonging.

Table 1 presents the summary statistics of the interbank network of large exposures in Q3 2017. It consists of 179 counterparties and 101 reporting institutions, for a total of 1,264 exposures (or edges), of which almost 90% (1,185) are reported by euro area-based banking groups, and the remaining 10% (79) by international banks. The latter provide a partial picture of the euro area banks’ funding risk related to non-euro area creditors. By contrast, euro area banks’ credit risk is captured in its entirety, and it is distributed almost equally between euro area and extra-euro area counterparties, amounting to respectively 613 and 651 exposures (edges), or EUR 431 billion and EUR 432 billion (gross amount minus exemptions). In comparison, gross exposures before exemptions and CRM (gross amount) amount to EUR 1.13 trillion, while after exemptions and CRM, exposures amount to EUR 623 billion (net amount). No bilateral linkages among extra-euro area banks are captured in this study. To the best of our knowledge, in terms of coverage this dataset represents the most comprehensive attempt to study euro area systemic risks by means of granular bank and exposure-level information.

Table 1.

Interbank Network of Large Exposures

Note: Amounts are expressed in billions of euros; outstanding amounts as of Q3 2017. Gross amount minus exemptions is the reference metric of this study. A EUR 50 million threshold on exposures before credit risk mitigation was applied. Exemptions are those amounts which are exempted from the large exposure calculation, whereas credit risk mitigations refer to the amounts adjusted for risk weights.

3.3 Network Topology

We are now in the position to plot the euro area interbank network of gross large exposures. For a graphical interpretation of the exposures’ directionality, we drop from the interbank network the 79 funding linkages from international banks. This allows us to split each chart into two concentric circles: an inner-circle comprising euro area banks’ credit exposures among themselves (EA), and an outer-circle depicting the international dimension of euro area banks’ credit exposures vis-a-vis non-euro area banks (INT). Therefore, all lines linking the inner circle to the outer circle are unidirectional.

Figure 1 presents two complementary visualizations of this network: one scaled by absolute amounts in euro terms (left) and the other in normalized terms with respect to banks’ capital (right). Each line takes the color of the borrowing entity, reflecting the source of credit exposures. The node size is proportional to the weighted in-degree metric and thus signifies the systemicity of each bank from a credit perspective.15

Figure 1.

Euro Area Interbank Network of Gross Large Exposures (September 2017)


Source: COREP C.27-C.28. Note: The node size captures the weighted in-degree centrality associated with each entity. The colors of nodes are clustered by country of origin; the thickness of the flows summarizes the value of the exposures, respectively in EUR billions and as a percentage of eligible capital. The color of the lines refers to the source of exposures, matching the borrowing entity’s color.

On the one hand, panel (a) identifies which international banking system is the most interconnected with the euro area banking system based on absolute value of exposures. For instance, euro area banks appear to have a few but sizeable (thick) exposures to Chinese banks, many but relatively small exposures to Swiss banks, and many and sizeable exposures vis-a-vis the US and UK banks. The US banks seem to be the most systemic based on credit exposures of EA banks. Overall, international spillovers seem to be an important channel of contagion to the euro area banking system.

On the other hand, panel (b), where credit exposures are normalized with respect to lenders’ capital, highlights the relationships that have greater materiality. A visual inspection reveals that some international exposures with large absolute amounts, indicated by thick lines in panel (a), fade when normalized in panel (b), for example those vis-à-vis Chinese banks, while some within-EA exposures (in the inner circle) become more prominent. This is driven by the large capital base of the exposed EA entities with the highest number of links to non-EA banks, while smaller EA banks are mostly interconnected with other EA banks. Accordingly, the node size of international banks becomes slightly smaller, with a few EA banks overtaking US banks in node size. In this respect, contagion from non-EA banks is likely to pass through large EA banks with potential second-round effects on small-to-medium sized EA banks. Overall, the euro area interbank network of large exposures can thus be characterized by a core-periphery network structure with a relatively sparse density. In fact, only 6.3 percent of all possible links are present in this interbank network.

4. Contagion Mapping (CoMap) Methodology

This paper relies primarily on a balance sheet simulation approach to map contagion. In addition to demonstrating the architecture of banking networks through bilateral linkages, such an approach also allows us to quantify systemic losses and determine channels of contagion by assuming hypothetical failures in the network. The emphasis on granularity in establishing bilateral connections applies equally in modeling contagion. Incorporating model parameters calibrated on bank-specific and, to the extent possible, exposure-specific data allows us to design a contagion model that improves the precision of the overall assessment as well as of the bank-specific results.

4.1 Modelling Framework

Our Contagion Mapping model (CoMap) is essentially a variant of the Eisenberg and Noe (2001) framework. However, instead of using their fictitious default algorithm, which effectively treats subsequent bank defaults as simultaneous events, we opt for a sequential default algorithm developed in Furfine (2003). This framework has been at the center of many applied studies in the financial networks literature. Our starting point is a simple interbank exposure model with both credit and funding shocks.16 Credit shocks capture the impact of a bank defaulting on its liabilities to other banks. Funding shocks, on the other hand, represent how a bank’s withdrawal of funding from other banks forces them to deleverage by selling assets at a discount (fire sale). Triggering a distress event (single or multiple bank failures) reveals the cascade effects and propagation channels transmitted through these solvency and liquidity channels. In order to achieve a more realistic setting, we enrich this simple framework with a series of new features that reflect heterogeneity across banks, one of the novelties of this paper. Specifically, we model the effects of: (i) bank-specific default thresholds, such as minimum capital requirements and capital buffers; (ii) changes to the network structure via large exposure limits; (iii) variations in exposures at risk (loss-given-default); (iv) maturity structure of bank funding; (v) market risk linked to a bank’s business model captured by the amount of financial and HQLA assets on a bank’s balance sheet; (vi) changes in bank-specific LCR ratio due to adjustments in the liquidity buffer and/or the net liquidity outflows. As a result, this comprehensive modelling framework is able to capture the risk-return trade-off a bank faces between holding HQLA and non-HQLA financial assets and allows for assessing both solvency and liquidity risk while accounting for bank-specific parameters. Hence, it incorporates (vii) liquidity constraint on the amount of assets available for sale allowing a bank to default because of being illiquid. These seven distinctive features are jointly modelled in our framework.

The initial set-up of our model, while closely following Espinosa-Vega and Sole (2010), expands the scope beyond interbank loans to capture all interbank claims.17 This is reflected in the stylized balance sheet identity of bank i as follows:

$$\sum_{j}\sum_{k} x_{ij}^{k} + a_i = c_i + d_i + b_i + \sum_{j} x_{ji} \qquad (1)$$

where xijk stands for bank i’s claims of type k on bank j, ai stands for other assets, ci stands for capital, di stands for deposits, bi are other debt obligations (including wholesale funding but excluding interbank transactions), and xji stands for bank i’s total obligations vis-à-vis bank j, or conversely, bank j’s claims on bank i. Z is the complete set of all banks in the network, with a total of N banks.
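
To make the notation concrete, the minimal Python sketch below encodes the stylized balance sheet identity in equation (1). It is purely illustrative: the class and field names are ours, not the paper’s implementation, and the claim types k are left generic.

```python
# Illustrative only: a minimal representation of the stylized balance sheet in
# equation (1). All names (Bank, claims, other_assets, ...) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Bank:
    name: str
    other_assets: float          # a_i
    capital: float               # c_i
    deposits: float              # d_i
    other_debt: float            # b_i (incl. wholesale funding, excl. interbank)
    # claims[j][k] = x_ij^k : bank i's claim of type k (debt, equity, ...) on bank j
    claims: dict = field(default_factory=dict)
    # obligations[j] = x_ji : bank i's total obligations vis-a-vis bank j
    obligations: dict = field(default_factory=dict)

    def total_claims(self) -> float:
        return sum(sum(types.values()) for types in self.claims.values())

    def balance_sheet_identity_holds(self, tol: float = 1e-6) -> bool:
        # equation (1): interbank claims + other assets = capital + deposits
        # + other debt + interbank obligations
        assets = self.total_claims() + self.other_assets
        liabilities = (self.capital + self.deposits + self.other_debt
                       + sum(self.obligations.values()))
        return abs(assets - liabilities) < tol
```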

Next, we introduce the key elements of our baseline model that will be used as a reference framework in the remainder of this paper.

Credit Shock

Credit shock captures the impact of a bank or a group of banks defaulting on their obligations to other banks. As a result, a bank incurs losses on a share of its claims depending on the nature and counterparty of its exposures. Other studies have assumed uniform loss-given-default rates, be it at entity level or for the entire network.18 In practice, different claims may have different recovery rates. For example, the recovery rates from equity stakes and debt claims can vary. We introduce exposure-specific loss-given-default rates to reflect the precise risk mitigation and collateralization a bank has accounted for its claims vis-à-vis each counterparty. In response to a subset of banks, y ⊂ Z, defaulting on their obligations, bank i’s losses are summed across all banks j ∈ y and claim types k using exposure-specific loss-given-default rates λijk corresponding to its claims of type k on bank j, xijk:

$$\sum_{j \in y}\sum_{k} \lambda_{ij}^{k}\, x_{ij}^{k}, \quad \text{where } \lambda_{ij}^{k} \in [0,1] \text{ and } i \notin y \qquad (2)$$

The total losses are absorbed by bank i’s capital while the size of its assets is reduced by the same amount.

$$\sum_{j \in Z \setminus y}\sum_{k} x_{ij}^{k} + \Big[a_i + \sum_{j \in y}\sum_{k} (1-\lambda_{ij}^{k})\, x_{ij}^{k}\Big] = \Big[c_i - \sum_{j \in y}\sum_{k} \lambda_{ij}^{k}\, x_{ij}^{k}\Big] + d_i + b_i + \sum_{j} x_{ji} \qquad (3)$$

As a result, bank i’s balance sheet shrinks, with lower capital, ci′, reflecting the losses. The recouped portion of its claims is commingled with other assets, ai′:

$$\sum_{j \in Z \setminus y}\sum_{k} x_{ij}^{k} + a_i' = c_i' + d_i + b_i + \sum_{j} x_{ji} \qquad (4)$$
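
As a rough illustration of these bookkeeping steps, the sketch below applies equations (2)-(4) to the hypothetical Bank object introduced earlier; the lgd dictionary holding the λijk parameters is an assumed container, not the paper’s own data structure.

```python
# Illustrative sketch of the credit shock in equations (2)-(4), assuming the
# hypothetical Bank class defined above. lgd[(j, k)] holds the exposure-specific
# loss-given-default rates lambda_ij^k (an assumed data structure).
def apply_credit_shock(bank: Bank, defaulted: set, lgd: dict) -> float:
    credit_loss = 0.0
    for j in defaulted:
        for k, exposure in bank.claims.pop(j, {}).items():
            loss = lgd.get((j, k), 1.0) * exposure     # lambda_ij^k * x_ij^k, eq. (2)
            credit_loss += loss
            bank.other_assets += exposure - loss       # recouped share commingled with a_i', eq. (4)
    bank.capital -= credit_loss                        # losses absorbed by capital, eq. (3)
    return credit_loss
```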

Figure 2 illustrates the transmission of credit shock via bilateral linkages on bank i’s balance sheet.

Figure 2.

Impact of Credit Shock on Bank i’s Balance Sheet


Funding Shock

Funding shock represents how a bank’s withdrawal of funding from other banks forces them to deleverage by selling assets at a discount (fire sale). Typically, an assumption is made about the share of short-term funding that cannot be rolled over and the haircut rate that must be applied to the fire sale of assets to meet the immediate liquidity needs. This would result in losses on the trading book, which would then be absorbed by the capital base. We introduce a bank-specific funding shortfall rate, ρi, reflecting precisely the maturity structure of bank i’s wholesale funding. In response to a subset of banks, y ⊂ Z, defaulting (getting into distress) and thereby withdrawing funding from other counterparties, bank i faces a funding shortfall summed across all banks j ∈ y using its specific funding shortfall rate, ρi:

$$\sum_{j \in y} \rho_i\, x_{ji}, \quad \text{where } \rho_i \in [0,1] \qquad (5)$$

One important piece missing in previous simulation studies is the liquidity buffer that banks might hold, which can help absorb funding shocks. We introduce to the model banks’ ability to hold a liquidity surplus, which can be used to absorb these shortfalls, at least partially. In order to mitigate banks’ short-term funding risk, regulators have imposed liquidity coverage ratios (LCR) to ensure that banks have sufficient high-quality liquid assets (HQLA) to cover liquidity shortages. In practice, for immediate liquidity needs, banks can pledge HQLA as collateral to the central bank for overnight borrowing. From a modeling perspective, this implies that bank i can offset the funding shortfall with a new credit line up to a limited amount, that is, the surplus HQLA in excess of the minimum regulatory requirement of 100 percent LCR. We call the surplus HQLA of a bank its liquidity surplus, γi:

$$\min\Big(\gamma_i,\; \sum_{j \in y} \rho_i\, x_{ji}\Big) \qquad (6)$$

with the remaining liquidity shortage computed as:

$$\max\Big(0,\; \sum_{j \in y} \rho_i\, x_{ji} - \gamma_i\Big) \qquad (7)$$

In our model, a bank is pushed toward a fire sale when it has exhausted emergency credit lines from the central bank, that is, if the remaining liquidity shortage (7) emanating from the funding shock is strictly positive.19 Previous studies that modeled funding shock can generate unlimited fire-sale losses without considering that banks have only a limited pool of assets that can be sold. At this point, we introduce a constraint, θi, on the amount of remaining assets available to the bank to sell. We refer to this parameter as pool of assets and calibrate it based on unencumbered non-HQLA assets (see section 4.2 for more details). This constraint sets an upper threshold to how much of the remaining liquidity shortage can be matched with the fire-sale proceeds after accounting for haircuts proportional to a discount rate, δi. As a result, the deleveraging amounts to the sale of assets equivalent to:

$$\min\Big(\frac{1}{1-\delta_i}\max\Big(0,\; \sum_{j \in y} \rho_i\, x_{ji} - \gamma_i\Big),\; \theta_i\Big), \quad \text{where } \delta_i \in [0,1] \qquad (8)$$

As with the credit shock, the losses due to the fire sale are fully absorbed by bank i’s capital. The other liabilities of the bank decline by the amount of the funding shortfall that could not be replenished by central bank loans. The sum of the two declines is matched by the contraction of the bank’s assets due to fire sales.

$$\sum_{j}\sum_{k} x_{ij}^{k} + a_i - \min\Big(\frac{1}{1-\delta_i}\max\Big(0, \sum_{j \in y}\rho_i x_{ji} - \gamma_i\Big), \theta_i\Big) = c_i - \delta_i \min\Big(\frac{1}{1-\delta_i}\max\Big(0, \sum_{j \in y}\rho_i x_{ji} - \gamma_i\Big), \theta_i\Big) + d_i + b_i + \min\Big(\gamma_i, \sum_{j \in y}\rho_i x_{ji}\Big) + \sum_{j \in Z \setminus y} x_{ji} + \sum_{j \in y}(1-\rho_i)\, x_{ji} \qquad (9)$$

Overall, in contrast with the credit shock, the balance sheet of the bank can potentially shrink by a larger factor than the associated capital losses. On the liabilities side, there is a shift in wholesale funding from other banks to the central bank to the extent that surplus HQLA can be used as collateral to borrow from the central bank. This is reflected as an increase in bi, other debt obligations, as shown in equation (9). The remaining liquidity shortage, if any, is met by fire-sale proceeds and therefore results in a shrinkage of the bank’s balance sheet.

$$\sum_{j}\sum_{k} x_{ij}^{k} + a_i'' = c_i'' + d_i + b_i'' + \sum_{j \in Z \setminus y} x_{ji} + \sum_{j \in y}(1-\rho_i)\, x_{ji} \qquad (10)$$
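
A minimal sketch of these funding-shock mechanics, under the same illustrative Bank representation as above; it assumes δi is strictly below one and treats the funding shock in isolation from the credit shock, which the paper combines in equation (11) below.

```python
# Illustrative sketch of the funding shock in equations (5)-(10), using the
# hypothetical Bank class. rho, gamma, theta and delta mirror the bank-specific
# parameters in the text; delta is assumed to be strictly below 1.
def apply_funding_shock(bank: Bank, withdrawing: set,
                        rho: float, gamma: float, theta: float, delta: float) -> float:
    shortfall = sum(rho * bank.obligations.get(j, 0.0) for j in withdrawing)   # eq. (5)
    covered_by_cb_credit = min(gamma, shortfall)                               # eq. (6)
    remaining = max(0.0, shortfall - gamma)                                    # eq. (7)
    assets_sold = min(remaining / (1.0 - delta), theta)                        # eq. (8)
    fire_sale_loss = delta * assets_sold
    bank.capital -= fire_sale_loss               # haircut losses hit capital, eq. (9)
    bank.other_assets -= assets_sold             # balance sheet shrinks by the assets sold
    bank.other_debt += covered_by_cb_credit      # interbank funding replaced by central bank credit
    for j in withdrawing:                        # the withdrawn share of interbank funding
        bank.obligations[j] = (1.0 - rho) * bank.obligations.get(j, 0.0)
    return fire_sale_loss
```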

Figure 3 illustrates the transmission of a funding shock via bilateral linkages on bank i’s balance sheet when the liquidity surplus is sufficient to meet the funding shortfall. As can be seen on the right-hand-side diagram, bank i’s balance sheet remains intact with no impact on its capital. The only change is a shift from interbank funding to central bank funding. On the assets side of the balance sheet, all HQLA, whether encumbered or unencumbered, is included in ai.

Figure 3.

Impact of funding shock on bank i’s balance sheet with sufficient buffer


Figure 4.

Impact of funding shock on bank i’s balance sheet with insufficient buffer


On the other hand, Figure 4 shows the transmission of the shock when the liquidity surplus is insufficient and hence the remaining liquidity shortage must be matched by fire-sale proceeds. This leads to a shrinking of bank i’s balance sheet due to deleveraging, effectively reducing the bank’s debt liabilities but also resulting in fire-sale losses that reduce its capital.

Simultaneous Credit and Funding Shocks

While it is helpful to consider credit and funding shocks in isolation, when a bank or a group of banks are in distress, they are likely to default on their obligations and shore up liquidity by withdrawing funding simultaneously.20 Therefore, we combine the impact of both shocks on bank i’s balance sheet to capture the full impact of a distress event.

$$a_i + \sum_{j \in y}\sum_{k}(1-\lambda_{ij}^{k})\, x_{ij}^{k} - \min\Big(\frac{1}{1-\delta_i}\max\Big(0, \sum_{j \in y}\rho_i x_{ji} - \gamma_i\Big), \theta_i\Big) + \sum_{j \in Z \setminus y}\sum_{k} x_{ij}^{k} = c_i - \sum_{j \in y}\sum_{k} \lambda_{ij}^{k}\, x_{ij}^{k} - \delta_i \min\Big(\frac{1}{1-\delta_i}\max\Big(0, \sum_{j \in y}\rho_i x_{ji} - \gamma_i\Big), \theta_i\Big) + d_i + b_i + \min\Big(\gamma_i, \sum_{j \in y}\rho_i x_{ji}\Big) + \sum_{j \in Z \setminus y} x_{ji} + \sum_{j \in y}(1-\rho_i)\, x_{ji} \qquad (11)$$

Default Mechanisms

Up to this point, we focused on how credit and funding shocks are transmitted to a bank’s balance sheet. While credit shocks translate directly to weakening of a bank’s capital, funding shocks lead to depletion of its liquidity and via fire sales to capital losses. Now, we define at what level these losses result in a severe distress for a bank triggering its default.

In a distress event, the capital of exposed counterparties, such as bank i, must absorb the losses on impact. Then, bank i becomes insolvent if its capital falls below a certain threshold cid, which may be defined as the bank’s minimum capital requirements with or without capital buffers. In other words, bank i is said to fail if its capital surplus (ci − cid) is insufficient to fully cover the losses:

$$c_i - c_i^{d} < \sum_{j \in y}\sum_{k} \lambda_{ij}^{k}\, x_{ij}^{k} + \delta_i \min\Big(\frac{1}{1-\delta_i}\max\Big(0, \sum_{j \in y}\rho_i x_{ji} - \gamma_i\Big), \theta_i\Big) \qquad (12)$$

As shown in equation (12), bank i suffers both credit losses and fire-sale losses. However, in our framework, potential fire-sale losses are mitigated by two important factors: first, the liquidity surplus can fully or partially absorb the funding shock; second, there is an upper bound on fire-sale losses defined by the bank’s pool of assets. When these two factors are not included in the model, as in Espinosa-Vega and Sole (2010), contagion losses can be significantly overestimated, which can result in a greater number of defaults.21

In terms of the impact through the liquidity channel, bank i’s liquidity surplus serves as the first line of defense. However, the remaining liquidity shortage might require a large-scale fire-sale operation relative to its financial assets. Having already exhausted its liquidity surplus, bank i becomes illiquid if its remaining assets are insufficient to meet the liquidity shortage:

$$\theta_i < \frac{1}{1-\delta_i}\max\Big(0, \sum_{j \in y}\rho_i\, x_{ji} - \gamma_i\Big) \qquad (13)$$

Notably, in our framework, a bank may default contemporaneously via solvency and liquidity when inequalities (12) and (13) are jointly satisfied. This implies that the funding shortfall is larger than the funds retrieved from the liquidity surplus and the fire sale operations, and, at the same time, the cumulated losses incurred via credit losses and fire sales are larger than the capital surplus.
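
For illustration, the two trigger conditions can be sketched as simple checks; the function and argument names below are ours, with capital_surplus standing for ci − cid and shortfall for the cumulated funding withdrawal in equation (5).

```python
# Illustrative checks of the default conditions in inequalities (12) and (13):
# insolvency when cumulated credit and fire-sale losses exceed the capital surplus,
# illiquidity when the pool of saleable assets cannot cover the remaining shortage.
def is_insolvent(capital_surplus: float, credit_loss: float, fire_sale_loss: float) -> bool:
    return capital_surplus < credit_loss + fire_sale_loss                  # eq. (12)

def is_illiquid(shortfall: float, gamma: float, theta: float, delta: float) -> bool:
    remaining = max(0.0, shortfall - gamma)                                # eq. (7)
    return theta < remaining / (1.0 - delta)                               # eq. (13)
```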

Bringing the full network of banks into the picture, in each simulation the exercise tests the system for a given bank’s default, as depicted in Figure 5. The initial default of bank 1 is triggered by design in order to study the cascade effects and contagion path it causes through the interbank network. In this example, the trigger bank is linked to bank 2 and bank 4 via credit and funding exposures, x12 and x14, and x21 and x41, respectively. The initial shock determines bank 2’s subsequent default, and the exercise moves to the next round since there is at least one additional failure in response to the initial exogenous shock. In this round, banks’ losses are cumulated in the calculation of their distress conditions. Therefore, bank 4’s losses from bank 2’s default (via exposures x24 and x42) in round 2 are summed with the losses induced by bank 1 in round 1 (via exposures x12 and x21). Although the initial default of bank 1 does not directly induce bank 4’s default, due to contagion and amplification effects bank 4 faces default in round 2. In turn, bank 4 triggers the default of bank 3 in round 3. The exercise moves to subsequent rounds if there are additional failures in the system and stops when there are no new failures.
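
The round structure of this example can be sketched as a simple loop that reuses the hypothetical helpers above: trigger one bank, let exposed banks cumulate credit and fire-sale losses, mark new failures, and repeat until a round produces no additional defaults. It is a deliberately simplified illustration of a sequential, Furfine-type cascade (for brevity, γi and θi are held fixed across rounds), not the paper’s actual algorithm.

```python
# Stylized sketch of the sequential cascade: losses are cumulated across rounds
# and checked against each bank's distress conditions. params[name] is an assumed
# dict of bank-specific parameters (lgd, rho, gamma, theta, delta, capital_surplus).
def run_cascade(banks: dict, trigger: str, params: dict) -> set:
    defaulted = {trigger}
    new_defaults = {trigger}
    credit_loss = {name: 0.0 for name in banks}
    shortfall = {name: 0.0 for name in banks}
    while new_defaults:                                    # one iteration = one contagion round
        failures_this_round = set()
        for name, bank in banks.items():
            if name in defaulted:
                continue
            p = params[name]
            credit_loss[name] += apply_credit_shock(bank, new_defaults, p["lgd"])
            shortfall[name] += sum(p["rho"] * bank.obligations.get(j, 0.0) for j in new_defaults)
            remaining = max(0.0, shortfall[name] - p["gamma"])                # eq. (7)
            assets_sold = min(remaining / (1.0 - p["delta"]), p["theta"])     # eq. (8)
            fire_sale_loss = p["delta"] * assets_sold
            if (is_insolvent(p["capital_surplus"], credit_loss[name], fire_sale_loss)
                    or is_illiquid(shortfall[name], p["gamma"], p["theta"], p["delta"])):
                failures_this_round.add(name)
        defaulted |= failures_this_round
        new_defaults = failures_this_round                 # converges when a round adds no failures
    return defaulted
```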

Figure 5.

Contagion Path and Rounds to Defaults


Note: The trigger bank initializes the algorithm; rounds track the path of contagion via internal loops, while final failures define the convergence of the algorithm. Source: Inspired by Espinosa-Vega and Sole (2010).

4.2 Calibration

The typical approach in the application of balance sheet simulation exercises has been to use benchmark parameters based on cross-country studies or sectoral averages. Few studies introduced some improvement by using random drawings from a distribution of observed values. One of the main contributions of this paper is to model bank-level heterogeneity with granular exposure and other balance sheet information. In the following, we describe in detail how we calibrate the bank-specific parameters for the set-up of our benchmark model.

Loss Given Default

The loss given default (LGD) parameter is calibrated for each bank at exposure level by calculating the ratio of net exposures to gross exposures. Gross exposures (GE) are defined as those after deducting defaulted amounts and exemptions from original gross exposures. Net exposures (NE) refer to the remaining exposures after adjusting gross exposures for credit risk mitigation measures. In other words, if bank i is lending to counterparty j, the exposure-specific LGD is defined as in equation (14). Non-reporting banks in the sample are assumed to have a uniform LGD equal to the average λ¯ across all reporting banks.

$$\lambda_{i,j} = \frac{NE_{j}}{GE_{j}} = LGD_{i,j} \qquad (14)$$

Panel (a) of Figure 6 presents the distribution of the exposure-specific loss given default parameters (λi,j). The red line shows the sample average (λ¯), on which the calibration for the non-reporting banks is based. The average net exposure amount is 80% of the gross amount after deducting exemptions. Panel (b) reports the distribution of exemptions across exposures. Both samples are concentrated respectively on the right and left side of the distribution, though cross-exposure heterogeneity is visible.

Figure 6.

Exposure-Specific Loss Given Default Parameter


Note: The LGD parameter is calculated on an exposure basis as the share of the net exposure (after CRM and exemptions) over the gross exposure amount before taking into account CRM, but after exemptions. The exemption rate shows the share of the exempted amount over the original gross amount before deducting exemptions and CRM. Source: COREP Supervisory Data, Template C.28.00.

Funding Shortfall

Funding shortfall refers to the portion of withdrawn funding that is assumed not to be rolled over when the bank providing the funding defaults (or gets into distress). It is calibrated at bank level using the share of liabilities with maturities shorter than 30 days. This maturity threshold is chosen for the baseline calculation so that the funding shortfall is consistent with the Liquidity Coverage Ratio (LCR), which assumes a 30-day liquidity distress scenario. However, this assumption may be relaxed and ρi can be calibrated on a shorter or longer period depending on the scenario we want to test.

For each bank, we use exposure level information retrieved from the concentration of funding template (C.67.00.a) and the large exposure maturity breakdown template (C.30). The former template allows us to retrieve information on the exposures’ amount and maturity breakdown on international banks lending to euro area banks. Therefore, as reported in equation (15), the funding shortfall is calibrated based on the share of exposures in buckets with maturities of less than 30 days over the total amount of funding, aggregated across all reporting banks for whom bank i is a large exposure counterpart (Fi).22

$$\rho_i = \frac{\sum_{j \in F_i} x_{i,j}^{<30\,\text{days}}}{\sum_{j \in F_i} x_{i,j}} = \frac{\text{Short-Term Funding}}{\text{Total Funding}} \qquad (15)$$

When no maturity information is available, we use the average maturity at which the reporting banks with an exposure to bank i lend to other banks. We thereby assume that the maturity information of the reporting banks is more accurate than setting ρi equal to the sample average. This approach allows us to increase heterogeneity in the distribution of the funding shortfall parameter.
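
A minimal sketch of this calibration and its fallback; the tuple layout of funding_rows is a hypothetical stand-in for the C.67/C.30 records, not the actual template schema.

```python
# Illustrative calibration of the funding shortfall rate rho_i, eq. (15):
# the share of funding with residual maturity below 30 days, with a fallback
# value (e.g. the lenders' average maturity profile) when no breakdown exists.
def calibrate_rho(funding_rows: list, fallback: float) -> float:
    # funding_rows: hypothetical list of (lender, amount, amount_below_30d) tuples
    total = sum(amount for _, amount, _ in funding_rows)
    below_30d = sum(short for _, _, short in funding_rows)
    if total <= 0:
        return fallback
    return below_30d / total
```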

As we see in Figure 7 (Panel a), banks’ short-term funding as a share of total funding is distributed across the whole range of the maturity breakdown, with short-term funding averaging 35% of total funding.

Figure 7.

Bank-specific Funding and Liquidity Parameters


Note: The funding shortfall is constructed as short-term funding divided by total funding. The liquidity surplus (γ) is constructed as the difference between the numerator and the denominator of the liquidity coverage ratio (LCR), i.e. the difference between the stock of HQLAs (LB) and the net funding outflows (NLO). Source: COREP Supervisory Data, Template C.30, Template C.67.00.a, Template C.72.00.a and Bankscope.

Liquidity Surplus

The liquidity surplus is directly derived from the liquidity coverage ratio template C.72.00a. It consists of the difference between the LCR’s numerator and denominator since the former, as of 2018, needs to be larger than 100% of the latter (equation 16). Hence, the liquidity surplus (γi) refers to the stock of HQLAs (LBi) above the net funding outflows (NLOi) over a 30-day liquidity distress scenario. Figure 7 (Panel b) reports the surplus as a share of banks’ total assets. The average of the sample is close to 5.8%, which is used for approximating the missing (γi) for some international banks. Furthermore, if a bank is currently facing a transition period to achieve the 100% LCR ratio, we set γi = 0 whenever NLOi > LBi.

$$LCR: \frac{LB_i}{NLO_i} > 1 \;\Rightarrow\; LB_i > NLO_i \;\Rightarrow\; \gamma_i \equiv LB_i - NLO_i > 0 \qquad (16)$$

Fire-sale Discount Rate and Pool of Assets

The additional parameters required to simulate the contagion impact of a funding shock are the rate at which banks are forced to discount their assets as they react to a funding shortfall by deleveraging and the pool of assets available for sale. Since, as described in the previous section, we assume that HQLA assets are used to cover the liquidity shortfall and the fire-sale stage is triggered only when they are exhausted, the set of assets available for sale is defined as the amount of unencumbered non-HQLA assets. This category of assets is retrieved from the asset encumbrance template F.32.01, which is further broken down into different asset classes. In this respect, equation (17) approximates the discount rate (δi) as the ratio between the discounted amount of unencumbered non-central bank eligible assets (D_UNCBEAi) over the total amount of unencumbered non-central bank eligible assets (UNCBEAi), which captures the pool of assets available for sale (θi). Therefore, the δi coefficient for euro area banks is derived as the weighted average of haircuts applicable to each asset class reflecting each bank’s portfolio: covered bonds (δ¯CB), asset-backed securities (δ¯ABS), debt securities issued by general governments (δ¯GG), debt securities issued by financial corporations (δ¯FC), debt securities issued by non-financial corporations (δ¯NFC), and equity instruments (δ¯E). The various discount rates by asset class are based on the latest ECB guidelines on haircuts.23 Moreover, in order to take into account that the instruments we are dealing with are non-central bank eligible, we assume that the bottom threshold for haircuts is the highest haircut for central bank eligible instruments, i.e. 38%.

$$\delta_i = \frac{\bar{\delta}_{CB}\, CB_i + \bar{\delta}_{ABS}\, ABS_i + \bar{\delta}_{GG}\, GG_i + \bar{\delta}_{FC}\, FC_i + \bar{\delta}_{NFC}\, NFC_i + \bar{\delta}_{E}\, E_i}{UNCBEA_i} \qquad (17)$$
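
A sketch of equation (17) under our reading of the text: asset-class haircuts are weighted by the bank’s portfolio of unencumbered non-central-bank-eligible assets, with the 38% bottom threshold applied per asset class. The dictionary keys and the exact placement of the floor are assumptions.

```python
# Illustrative calibration of the fire-sale discount rate delta_i, eq. (17).
# holdings and haircuts are hypothetical dicts keyed by asset class
# ("CB", "ABS", "GG", "FC", "NFC", "E"); the 38% floor reflects the assumption
# stated in the text for non-central-bank-eligible instruments.
def calibrate_delta(holdings: dict, haircuts: dict, floor: float = 0.38) -> float:
    pool = sum(holdings.values())          # theta_i: UNCBEA_i, assets available for sale
    if pool <= 0:
        return floor
    return sum(max(haircuts[k], floor) * holdings[k] for k in holdings) / pool
```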

For international banks for which we lack the FINREP template F.32.01, we derive the discount rate (δi) and the pool of assets available for sale (θi) with a two-step procedure. First, we regress the numerator and denominator of equation (17) for the euro area bank sample on three balance sheet categories: (i) financial assets available for sale, (ii) financial assets held for trading and (iii) HQLA assets, as reported in equation (18).

$$\sum_{j \in N} \bar{\delta}_{j}\, A_{i,j} = a_1\, FAAS_i + a_2\, FAHT_i + a_3\, HQLA_i + e_i \qquad (18)$$

In this way we obtain three coefficients, a1, a2 and a3, explaining the contribution of each asset class for both dependent variables. As we can see from Table 2, the first two coefficients are statistically significant at 1% and the model shows a reliable goodness of fit, respectively 89% and 86% for the numerator and denominator of equation (17). Next, we retrieve from Bankscope the very same balance sheet categories for which we have a statistically significant coefficient, i.e., financial assets available for sale and financial assets held for trading. The second step then consists in multiplying each balance sheet category by the corresponding estimated coefficient to derive the numerator and denominator of equation (17) for the sample of international banks, and so obtain the discount rate (δi) and the pool of assets (θi).

Table 2.

Step 1-Regression Results for Euro Area Banks Sample

(In Percent)

Note: Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.

Source: Authors’ calculations.

Figure 8 depicts, respectively, the bank-specific discount rate (δi) and the pool of assets available for sale (θi), the latter as a share of total assets. As can be noticed, the bank-specific discount rate (δi) is centered around 57.5% and resembles a normal distribution, whereas the pool of non-central bank eligible assets is skewed, with most of its mass at low values, a mean around 4% of total assets, and outliers exceeding 20%.

Figure 8.

Bank-specific Fire-sale Parameters


Note: The δ coefficient reflects a weighted average haircut of the portfolio θ for non-central bank eligible instruments. Source: COREP and FINREP Supervisory Data, Template F32.01 and Bankscope.

Overall, the nested set of liquidity and fire-sale parameters (γi, θi, δi), depicted in Figure 9, captures the degree of heterogeneity characterizing the liquidity strategies of banks in our sample. For instance, a bank may choose to hold a larger amount of HQLAs as a share of total assets (γi) – the area below the 45-degree line – than the pool of unencumbered non-HQLA financial assets (θi) – the area above the 45-degree line. Banks belonging to area (A) are those most likely to suffer capital losses from liquidity shocks, since the liquidity surplus may easily become binding and in turn trigger fire sales. On the contrary, banks belonging to area (B) are those most likely to experience a liquidity default once the liquidity surplus (γi) is depleted; in this case, the pool of assets θi is likely to be insufficient to cover the remaining liquidity needs. Finally, quadrant (C) captures those banks that are short of both buffers and are clear candidates for a liquidity default. Furthermore, the above-mentioned effects are far more pronounced when the bubble size is large (red bubbles), since it implies that such banks will face a harsher discount rate via fire sales. The realization of these dynamics (A, B, C) is conditional on the amount of short-term bilateral exposures ρi xji, which, in the end, determines the spread of contagion within the interbank market.

Figure 9.

Liquidity Default Dynamics


Note: Bubble size is proportional to a bank’s discount rate (δ). Red bubbles highlight the 90th percentile of the discount factor. The vertical line of bubbles close to 5.5% liquidity surplus is due to missing values for extra-EA small-to-medium sized banks, which were approximated by using the mean of the sample. Source: Authors’ calculations.

Distress-Default Threshold

A key assumption of the model is to define when counterparty bank i is not able to meet its payment obligations, i.e. a default or distress threshold (cid). Accordingly, a bank can be considered in default/distress when the surplus of capital above the capital requirements a bank needs to meet at any time is depleted.

For our simulations we distinguish between two types of capital requirements: (i) minimum capital requirements and (ii) capital buffers. The former requires banks to hold 4.5% RWAs of minimum capital (MC).24 This minimum requirement might be higher depending on the bank-specific Pillar 2 requirement (P2R) set by the supervisor.25 In addition to this, a bank is required to keep a capital conservation buffer (CCoB) of between 1.875% and 2.5% CET1 capital as of 2018 (depending on the extent to which the jurisdiction where the bank is located has fully or only partially phased in the end-2019 requirement), and a bank-specific buffer, which is the higher among the Systemic Risk Buffer (SRB), GSII and OSII buffers.26 Furthermore, some jurisdictions also apply a positive counter-cyclical capital buffer requirement (CCyB). In this regard, we retrieved bank-specific information on minimum capital requirements (CET1, TIER1, Own Funds) and capital buffers from COREP supervisory templates C.01, C.03 and C.06.01 and the bank-specific risk weighted assets (RWAs) from C.02. For international banks, our data source is Bankscope.

Therefore, the capital surplus (ki) can be defined in two ways: a capital surplus (kiDF) above the minimum capital requirements, defined by a default threshold (ciDF) as reported in equation (19a), or a capital surplus (kiDS) above the sum of the minimum capital requirements and the capital buffers, defined by a distress threshold (ciDS) as presented in equation (19b). When the bank breaches the minimum capital requirement (ciDF), it is assumed that the supervisor would declare the bank “failing or likely to fail” (which is the official trigger for putting the bank into resolution).27 When the bank breaches the buffer requirement (ciDS) while not yet breaching the minimum capital requirement, it is assumed that it will not be declared failing but that it would rather be constrained in its ability to pay out dividends. This, in itself, could be a trigger for bank distress and is thus considered as an alternative trigger threshold.

$k_i^{DF} = c_i - c_i^{DF} = c_i - (MC_i + CCoB_i + P2R_i)$ (19a)
$k_i^{DS} = c_i - c_i^{DS} = c_i - \left[(MC_i + CCoB_i + P2R_i) + \max(SRB_i, GSII_i, OSII_i) + CCyB_i\right]$ (19b)

Hence, this calibration method allows for some flexibility in determining a bank's default, depending on the purpose of the exercise. While from a resolution authority and supervisory perspective the capital surplus (kiDF) based on the default threshold (ciDF) may be the more relevant reference point, the distress threshold (ciDS) may be of greater interest to macroprudential supervisors. As this paper has a systemic risk focus, we provide results based on the latter approach. This is further motivated by the fact that including the macroprudential buffers (SRBi, GSIIi, OSIIi, CCyBi) allows us to take into account the impact of macroprudential policy actions.28 Nevertheless, we discuss the differences between the two approaches in the sensitivity analysis section.

Finally, an additional feature that needs to be taken into consideration in order to accurately handle heterogeneity in the bank-specific capital surplus concerns the type of capital used in the calculation. In fact, both the capital base (ci) and the minimum capital (MCi) and Pillar 2 requirements (P2Ri) vary depending on whether the capital considered is CET1, TIER1, or own funds, calculated as the sum of TIER1 and TIER2 instruments. For instance, MCi is respectively 4.5% of RWAs for CET1 capital, 6% of RWAs for TIER1 capital, and 8% of RWAs for own funds. In turn, these differences are also reflected in the capital base (ci). Hence, the very same bank may face a larger or smaller capital surplus (kiCET1, kiTIER1, kiOF) depending on the definition of capital considered. In this study, we use the CET1 ratio as benchmark, although we provide evidence in the results section for the robustness of our findings to this calibration feature.
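As a worked illustration of equations (19a)-(19b), the short sketch below computes the two surpluses for a single hypothetical bank; all ratios are invented and the function name is ours.

def capital_surpluses(c, mc, p2r, ccob, srb, gsii, osii, ccyb):
    """Compute the default- and distress-threshold surpluses of equations (19a)-(19b).
    All inputs are ratios of RWAs for one bank and one capital definition (e.g. CET1);
    the numbers used below are hypothetical."""
    c_df = mc + ccob + p2r                          # default threshold, eq. (19a)
    c_ds = c_df + max(srb, gsii, osii) + ccyb       # distress threshold, eq. (19b)
    return c - c_df, c - c_ds                       # (k_i^DF, k_i^DS)

# hypothetical bank: 13% CET1 ratio, 4.5% MC, 1.5% P2R, 2.5% CCoB, 1% O-SII buffer, 0.25% CCyB
k_df, k_ds = capital_surpluses(0.13, 0.045, 0.015, 0.025, 0.0, 0.0, 0.01, 0.0025)
print(round(k_df, 4), round(k_ds, 4))   # -> 0.045 0.0325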

Panel (a) of Figure 10 depicts the distribution of the CET1 capital surplus based on the distress threshold, while panel (b) presents the contributions of the capital surplus (distress threshold) and of the distress threshold itself to the capital base.29

Figure 10.

Bank-Specific Distress Threshold


Note: The sum of minimum capital and capital surplus gives total capital (c). The decreasing ordering is based on total capital.
Source: COREP Supervisory Data Templates C.01-C.03, and Bankscope.

Overall, the advantage of implementing bank-specific thresholds is twofold: it allows us to tailor a realistic distress-default threshold, which determines the capital surplus of each bank, and it allows us to perform scenario and counterfactual analyses by imposing higher bank-specific capital requirements or by reducing the capital surplus under an adverse scenario.

4.3 Model Outputs

This exercise is tailored to rank banks by their contribution to systemic risk in terms of degree of vulnerability and contagion potential with respect to the euro area banking system. From a policy maker's perspective, each bank is evaluated using four main model-based outputs, as follows:

Contagion index (CI)

CI captures each bank's contagion potential (i.e., systemic impact) by taking the weighted average of losses of all other banks in percent of their full capital base.30 The bank-specific contagion index is calculated by dividing the system-wide losses induced by bank i by the total capital in the system (excluding bank i):

$CI_i = 100 \, \frac{\sum_{j \neq i} L_{j,i}}{\sum_{j \neq i} k_j}$ (20)

where Lj,i is the loss experienced by bank j due to the triggered default of bank i. This indicator can then be used to compare banks in the network in terms of how much contagion each bank would cause to the system if it were to experience severe distress, a tail event.

Vulnerability index (VI)

VI gauges each bank's degree of vulnerability averaged across all individual default events, each treated with identical probability. The bank-specific vulnerability index is calculated by taking the average of the losses experienced by bank i across the N-1 simulations, in percent of its own capital:

$VI_i = 100 \, \frac{\sum_{j \neq i} L_{i,j}}{(N-1)\, k_i}$ (21)

where Li,j is the loss experienced by bank i due to the triggered default of bank j. This indicator can be used to compare the fragility of banks to systemic events. Banks that on average incur greater losses due to their exposures are deemed more vulnerable. The average losses take into account both the magnitude of a bank's losses in response to each default event and the frequency with which the bank experiences losses, by treating each default with equal probability.
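For concreteness, the following sketch computes both the contagion index in equation (20) and the vulnerability index in equation (21) from a simulated loss matrix; the matrix, the capital figures and the variable names are purely illustrative.

import numpy as np

rng = np.random.default_rng(42)
N = 6                                        # toy network size
loss = rng.uniform(0.0, 5.0, size=(N, N))    # loss[a, b]: EUR bn lost by bank a when bank b is the trigger
np.fill_diagonal(loss, 0.0)                  # own-trigger losses are excluded from both indices
k = rng.uniform(20.0, 80.0, size=N)          # capital base of each bank, EUR bn (hypothetical)

CI = 100 * loss.sum(axis=0) / (k.sum() - k)  # eq. (20): losses induced by each trigger / others' capital
VI = 100 * loss.sum(axis=1) / ((N - 1) * k)  # eq. (21): average losses suffered / own capital

print("CI:", np.round(CI, 2))
print("VI:", np.round(VI, 2))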

Contagion default (CD)

CD tracks the number of banks that experience severe distress associated with the triggered default of bank i.31 Whereas the contagion index measures the degree of losses within a continuous range associated with a default event, contagion default is a discrete indicator based on a binary "pass or fail" outcome. It gauges how many other banks in the network fall below the capital distress threshold.

Default frequency (DF)

This indicator tallies the total number of simulations in which bank i falls below the capital distress threshold. Similarly, whereas the vulnerability index measures the degree of losses within a continuum, default frequency is a discrete indicator gauging binary outcomes.
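A minimal sketch of the two discrete counterparts, assuming a boolean matrix of simulated distress outcomes (the data below are invented): contagion defaults are column counts, default frequencies are row counts.

import numpy as np

# distress[a, b] is True if bank a falls below the distress threshold when bank b is the trigger (toy data)
distress = np.array([[False, True,  False, False],
                     [False, False, False, True ],
                     [True,  True,  False, False],
                     [False, False, False, False]])
np.fill_diagonal(distress, False)          # the triggered bank itself is not counted

CD = distress.sum(axis=0)                  # contagion defaults caused by each trigger bank
DF = distress.sum(axis=1)                  # number of simulations in which each bank defaults
print("CD:", CD, "DF:", DF)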

Essentially, the losses experienced by each bank (Lj,i) are the sum of losses associated with credit risk (LCrj,i) and losses associated with funding risk (LFuj,i). Hence, each indicator can be broken down into the respective contributions of credit risk (CI_Cr and VI_Cr) and funding risk (CI_Fu and VI_Fu), providing insights into the nature of contagion.

$CI\_Cr_i = 100 \, \frac{\sum_{j \neq i} LCr_{j,i}}{\sum_{j \neq i} k_j}, \quad CI\_Fu_i = 100 \, \frac{\sum_{j \neq i} LFu_{j,i}}{\sum_{j \neq i} k_j}$ (22)
$VI\_Cr_i = 100 \, \frac{\sum_{j \neq i} LCr_{i,j}}{(N-1)\, k_i}, \quad VI\_Fu_i = 100 \, \frac{\sum_{j \neq i} LFu_{i,j}}{(N-1)\, k_i}$ (23)

These indicators can be further decomposed based on banks’ geographical origins. For example, CI_EAi is a subindex based on the total induced losses by bank i to the subset of banks that are in the euro area. Similarly, VI_EAi is a subindex based on the average losses experienced by bank i across the subset of simulations where the triggered banks are in the euro area. Essentially, these two indices capture a given bank’s contagion and vulnerability vis-à-vis euro area banks.

$CI\_EA_i = 100 \, \frac{\sum_{j \neq i,\, j \in \mathbb{S}_{EA}} L_{j,i}}{\sum_{j \neq i,\, j \in \mathbb{S}_{EA}} k_j}, \quad VI\_EA_i = 100 \, \frac{\sum_{j \neq i,\, j \in \mathbb{S}_{EA}} L_{i,j}}{\sum_{j \neq i,\, j \in \mathbb{S}_{EA}} k_i}$ (24)

where 𝕊EA is the subsample of banks in the euro area. The geographical focus can distinguish between euro area and non-euro area banks as well as between the individual countries where banks are domiciled. Moreover, based on these outputs, we develop two additional indicators to deepen both the analytical assessment and the policy implications of the contagion analysis and, in turn, facilitate the impact assessment of regulatory actions.
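As an illustration of the geographical decomposition in equation (24), the sketch below restricts the loss matrix to euro area banks with a boolean mask; the euro-area flag and all numbers are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
N = 6
loss = rng.uniform(0.0, 5.0, size=(N, N))           # loss[a, b]: loss of bank a when bank b triggers (toy)
np.fill_diagonal(loss, 0.0)
k = rng.uniform(20.0, 80.0, size=N)                  # capital bases (toy)
is_ea = np.array([True, True, True, False, False, True])   # hypothetical euro-area membership flag

# eq. (24), contagion side: losses induced by each trigger to euro-area banks only,
# scaled by the capital of euro-area banks other than the trigger itself
ea_capital_ex_i = np.array([k[is_ea & (np.arange(N) != i)].sum() for i in range(N)])
CI_EA = 100 * loss[is_ea, :].sum(axis=0) / ea_capital_ex_i
print(np.round(CI_EA, 2))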

Amplification ratio (cascade effect)

This metric compares the losses induced by a bank's simulated default in the initial round with those occurring in all successive rounds. From the perspective of bank i triggering system-wide contagion:

$AMP(C)_i = \frac{\sum_{j \neq i} L_{j,i}^{r1+}}{\sum_{j \neq i} L_{j,i}^{r1}}$ (25)

where r1 represents the initial round and r1+ all successive rounds in a given simulation. This indicator measures how much of the system-wide impact from the failure of bank i is caused by cascading defaults rather than by direct and immediate losses from bank i. Hence, the higher the ratio, the larger the amplification through the network; a ratio greater than 1 indicates that losses due to cascade effects dominate direct losses.

Conversely, banks’ susceptibility to systemic events can be split into two similar components to distinguish how much of the losses experienced by bank i across all simulations were immediate losses as opposed to losses in successive rounds:

$AMP(V)_i = \frac{\sum_{j \neq i} L_{i,j}^{r1+}}{\sum_{j \neq i} L_{i,j}^{r1}}$ (26)

The amplification ratio quantifies the degree to which cascading behavior impacts banks both at the system-wide and at the entity level. These are the losses associated with contagion spreading through indirect linkages, making this indicator an important metric of the financial system architecture: it captures the portion of systemic risk that is not directly observable to banks and, possibly, to regulators in the absence of granular data. Most importantly, the amplification ratio gauges the tipping points – critical points where an abrupt change occurs – in the system. In the network literature, tipping points refer to those nodes which amplify shocks and propagate the damage. It is therefore critical to have a direct measure of such non-linearities in the system.
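A small sketch of how the two amplification ratios in equations (25)-(26) can be computed once losses are recorded separately for the initial round and for all later rounds; both loss matrices below are invented.

import numpy as np

rng = np.random.default_rng(7)
N = 6
loss_r1 = rng.uniform(0.1, 4.0, size=(N, N))       # loss[a, b] in the initial round (toy)
loss_later = rng.uniform(0.0, 2.0, size=(N, N))    # loss[a, b] accumulated over all later rounds (toy)
for m in (loss_r1, loss_later):
    np.fill_diagonal(m, 0.0)

AMP_C = loss_later.sum(axis=0) / loss_r1.sum(axis=0)   # eq. (25): cascade vs. direct losses caused by each bank
AMP_V = loss_later.sum(axis=1) / loss_r1.sum(axis=1)   # eq. (26): cascade vs. direct losses suffered by each bank
print(np.round(AMP_C, 2), np.round(AMP_V, 2))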

Sacrifice ratio

One of the most important policy questions revolves around identifying critical financial institutions. Interconnectedness has been recognized as an integral part of an indicator-based measurement approach proposed by the Basel Committee on Banking Supervision (BIS, 2018). It is therefore essential to quantify on a spectrum the concept of “too interconnected to fail”. In this respect, we construct a sacrifice ratio, which measures the ratio of system-wide losses due to the failure of a bank over the cost of recapitalizing the bank by an amount equal to its capital requirements.32 33

$SR_i = \frac{\sum_{j \neq i} L_{j,i}}{c_i^{DS}}$ (27)

Therefore, ratios above 1 are associated with system-wide losses that could be avoided at a relatively smaller recapitalization cost. In contrast, ratios smaller than 1 imply that potential system-wide losses are not sufficiently large to classify the entity as too interconnected to fail. We provide three types of sacrifice ratios, from the perspectives of: (i) a global central planner; (ii) a euro area authority; and (iii) a national authority. These three measures take into account the system-wide losses induced, respectively, to all banks in the system (global central planner) or to those banks belonging to each jurisdiction, whether euro area based or national. It is important to underline that our modelling strategy does not take into account the mitigation and amplification effects induced by a bail-in mechanism, neither for the triggering bank nor for the subsequently failing banks.34 Once the magnitude of the system-wide losses associated with bank failures is understood, a critical policy question is how regulators should treat a bank that is deemed too interconnected to fail. This indicator can contribute directly to the cost-benefit analysis of bank bail-outs when a crisis is imminent. In other words, it gives regulators an additional tool to compare the public cost of making an entity whole again against the potential damage to the system when no action is taken.35
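The sketch below illustrates equation (27) from the three perspectives discussed above, using invented losses, recapitalization costs and jurisdictions.

import numpy as np

rng = np.random.default_rng(3)
N = 6
loss = rng.uniform(0.0, 5.0, size=(N, N))        # loss[a, b]: loss of bank a when bank b triggers (toy)
np.fill_diagonal(loss, 0.0)
c_ds = rng.uniform(5.0, 15.0, size=N)            # recapitalization cost: requirement c_i^DS (toy)
is_ea = np.array([True, True, True, False, False, True])    # hypothetical euro-area flag
country = np.array([0, 0, 1, 2, 2, 1])                      # hypothetical national jurisdictions

SR_global = loss.sum(axis=0) / c_ds                          # eq. (27), global central planner
SR_ea = loss[is_ea, :].sum(axis=0) / c_ds                    # losses to euro-area banks only
SR_national = np.array([loss[country == country[i], i].sum() for i in range(N)]) / c_ds
print(np.round(SR_global, 2), np.round(SR_ea, 2), np.round(SR_national, 2))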

5. Results

This section presents a broad selection of results based on the CoMap exercise. Subsection 5.1 focuses on the main findings on the systemicity of banks (contagion indicators), while subsection 5.2 turns the spotlight on the fragilities in the system (vulnerability indicators). The results of the CoMap exercise are then compared with other reference models with partial or no bank-level calibration in subsection 5.3. Next, subsection 5.4 illustrates the use of a macro scenario instead of an exogenous bank-specific tail event as the driver of the shocks.

5.1 Main Findings on Systemicity

This subsection delves in greater detail into the main findings of the exercise across a broad range of contagion indicators based on our benchmark model with bank-specific calibration. For the sake of clarity, out of 199 worldwide consolidated banking groups, Table 3 reports the top-50 default events ranked in terms of the contagion index (CI_EA) with respect to the euro area banking system.

Table 3.

Contagion Measures

Note: For confidentiality reasons bank names have been anonymized. The results in this table are ranked by CI_EA, where the subindex represents contagion losses to all EA banks in percent of their total capital base. This index is further decomposed into the respective contributions by credit (CI_CR) and funding (CI_FU) shocks. Defaults refer to the number of defaults a bank has induced in the system. Rounds indicate the maximum number of rounds it takes until no additional defaults take place in the system, whereas the amplification ratio is the ratio of losses in subsequent rounds to losses in the initial round. The sacrifice ratio indicates the ratio of the systemic losses caused by a bank to the cost of a rescue package to fully recapitalize the bank. Contagion defaults are aggregated by sum for groupings, whereas all other indicators are aggregated by average.

Contagion index

The most systemic bank, as measured by CI_EA, causes EA-wide losses of 3.8 percent of the capital base, around 12 times the full-sample average of 0.3 percent. The top 10 banks induce on average 2.5 percent capital losses (around EUR 25 billion) to the euro area banking system.36 Statistical tests show that a power-law distribution fits the observed data on contagion losses to the system.37 While the top 10 banks are equally split between extra-EA (XEA) and EA banks, the composition of the top 50 is slightly tilted towards XEA, implying that international spillovers play a key role in how contagion spreads in the euro area. Therefore, cross-border risks may propagate quickly via bilateral exposures to the euro area banking network, as was evident during the Great Financial Crisis of 2008. In terms of the channels underlying contagion, losses due to credit risk dominate those due to funding risk. Having ample liquidity buffers to meet unexpected funding shortfalls allows EA banks to mitigate potential solvency distress transmitted through fire-sale losses.

Contagion defaults

When it comes to the nature of contagion defaults, illiquidity-driven defaults overall far outweigh those triggered by insolvency, matching the historical experience as emphasized by Aikman et al. (2018). Notably, solvency defaults are mostly triggered by the top 10 banks, underlining how sources of contagion due to solvency risks may be highly concentrated in a few players owing to their central role in the network as borrowers. While funding shocks may be triggered by a greater number of lenders in the network, their culmination in illiquidity defaults is largely driven by the intrinsic characteristics of the borrowing entities. As discussed in more detail in the next subsection under the Default Frequency heading, this has important policy implications for funding concentration.

Amplification effects

One powerful feature of our framework is its ability to capture cascade effects due to an initial distress event. Amplification is measured in two ways: the total number of rounds it takes until contagion subsides and the ratio of system-wide losses in subsequent rounds relative to those that occur in the immediate phase. In most cases, contagion does not spread beyond the direct counterparties in the first round. At most, the contagion cycle ends after a second round without further failures and overall has limited repercussions on the system as a whole. There are only two banks with an amplification ratio larger than 1, whose failures cause losses in subsequent rounds 4.3 and 1.4 times larger than in the initial round.

Sacrifice ratio

In this exercise, 4 failures warrant recapitalization, as indicated by a sacrifice ratio above 1 at the national level. The average sacrifice ratio is around 0.3 for the top 50 banks. However, from the perspective of a euro area authority, the top-50 average increases to around 0.8, with the ratio above 1 in favor of assistance in 5 additional cases. As for the three extra-EA banks, while intervention would be justified even on the basis of potential damage to the EA system, they fall outside the jurisdiction of an EA authority. It might therefore be instructive not only to monitor the evolution of such spillovers but also to cooperate closely with the respective national authorities as needed. From a global perspective, there are two cases in which interventions can be justified only when contagion to the entire network is weighed against injecting capital into these banks: one EA bank and one extra-EA bank. This highlights how important international cooperation is for reducing negative externalities to the global financial system. It is important to note that across all perspectives, the average sacrifice ratios of the top 10 banks are significantly lower than the top-50 or overall averages. This reflects the relative size of the top 10 banks and thus the greater recapitalization needs that make interventions relatively costly, even though these banks can cause the largest contagion losses to the EA system.

5.2 Fragilities in the System

Having investigated the various aspects of sources of contagion, it is important to understand the fragilities in the system from the standpoint of those banks that are the most vulnerable and how contagion affects them. Hence, Table 4 ranks banks according to their overall vulnerability index.

Table 4.

Vulnerability Measures

Note: For confidentiality reasons bank names have been anonymized. The results in this table are ranked by overall VI, which measures average losses across all independent simulations in percent of a bank’s capital base. The index is further decomposed into the respective contributions by credit (VI_CR) and funding (VI_FU) shocks. Share of vulnerability from EA banks (VI_EA) versus extra-EA banks (VI_XEA) is shown in percentage points. Default frequency refers to the number of simulations in which a bank has experienced a default with a breakdown by whether insolvency or illiquidity drives the default. Amplification ratio is the ratio of losses in subsequent rounds to losses in the immediate round. Default frequencies are aggregated by sum for groupings, whereas all other indicators are aggregated by average.

Vulnerability index

Focusing on the top 50 banks with the highest vulnerability index, they are almost entirely from within the euro area. This is because the large exposures dataset, as emphasized in section 2, mostly captures exposures of euro area banks, and for this precise reason we adopt a euro area-centric view. This index similarly follows a power-law distribution, as suggested by statistical tests. While the most vulnerable bank suffers a greater portion of its losses due to funding shocks, which places it in the top position, credit shocks drive most losses overall. In terms of the sources of contagion, extra-EA banks have a slightly larger share for the top 10 vulnerable banks, possibly indicating their international profile. EA banks take over as the larger contributor when the sample is expanded to the top 50 or the entire network.

Default frequency

Out of 15 defaults experienced by the top 50 vulnerable banks, illiquidity defaults account for 10 incidents, while solvency defaults account for only 5. As previously mentioned, the intrinsic characteristics of the exposed entities, rather than those of the banks that create the contagion, seem to primarily determine whether they face illiquidity defaults. This is manifested in the concentration of illiquidity defaults in a few entities: 2 banks become illiquid in 4 different distress events each and another bank faces illiquidity twice. In other words, funding risk seems to be less diversified. This finding suggests that it may be prudent to consider limits on the concentration of funding, akin to large exposure limits on the asset side, since some small and medium-sized banks are exposed on the liability side to a few large banks.

Amplification

When looked at from the vulnerability perspective, cascade effects again seem to play a minor role in terms of the amplification ratio. None of the top 50 banks shows an amplification ratio above 1, and no defaults take place in subsequent rounds.

In Figure 11, a systemic risk map combines information from the contagion and vulnerability perspectives to allow an easy identification of threats to euro area financial stability. In this graph, the total EUR losses induced and experienced by each bank are normalized, respectively, along the y and x axes. The graph is then divided into four quadrants capturing varying degrees of banks' systemic footprints. Banks in the upper-left quadrant (B) are those whose default would induce the greatest amount of losses to the euro area banking system, while those lying in the bottom-right quadrant (C) are the most vulnerable to a default event. Banks located in the upper-right quadrant (D) are both highly contagious and vulnerable. Therefore, banks in quadrant (D) are the most critical banks in the EA system, as they are not only highly susceptible to contagion risks (inward spillovers) but can also amplify these spillovers by creating further contagion in the system. The systemic risk map provides a useful monitoring tool to assess vulnerabilities to euro area financial stability due to interconnectedness by identifying such tipping points in the system.

Figure 11.

Systemic Risk Map


Note: Total contagion (losses a bank induces to EA banks) and vulnerability (losses a bank incurs) in absolute EUR terms (not in relation to capital base) are normalized between 0 and 1.

5.3 Bank-Specific Calibration

Modeling parameter heterogeneity renders a different picture compared with homogeneous parameters {λ, ρ, γ, θ, δ} approximated as sample averages uniformly applied to all banks (Special Case I). This divergence is further exacerbated when the liquidity dynamics are omitted from the transmission of funding shocks by setting the bank-specific liquidity surplus (γ) to zero and the fire-sale constraint (pool of assets) to infinity (Special Case II).38 As shown in Figure 12, both special cases overestimate losses when compared against our benchmark with bank-specific calibration and liquidity dynamics. In Special Case I, using average parameters cancels all the losses coming from funding risk (see Table 5 in Appendix B), because the liquidity buffer (γ) is then large enough to cover all short-term funding needs. In Special Case II, further neglecting the liquidity surplus (γ) and the fire-sale constraint (θ) leads to an over-estimation of funding risk, because each funding shortfall is assumed to be transformed directly via fire sales, with no upper bound, into a solvency risk.
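The two special cases can be generated from the bank-specific parameter vectors as in the hedged sketch below; the helper name and the toy data are ours.

import numpy as np

def homogenize_params(params, case="I"):
    """Build the homogeneous parameter sets used in the comparison above (illustrative).
    params: dict of 1-D float arrays with the bank-specific lambda, rho, gamma, theta, delta."""
    out = {name: np.full_like(vals, vals.mean()) for name, vals in params.items()}
    if case == "II":
        out["gamma"] = np.zeros_like(params["gamma"])            # no liquidity surplus
        out["theta"] = np.full_like(params["theta"], np.inf)     # unconstrained fire-sale pool
    return out

rng = np.random.default_rng(2)
params = {name: rng.uniform(0.1, 0.6, 5) for name in ("lambda", "rho", "gamma", "theta", "delta")}
print(homogenize_params(params, case="II")["gamma"])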

Figure 12.

Bank-Specific Calibration


Note: Benchmark refers to results presented in section 4.1. Special case I sets (λ, ρ, γ, θ, δ) equal to the average of the sample, while special case II sets (λ, ρ, δ) equal to the average of the sample and γ = 0 and θ = ∞. We use the same distress threshold as in the benchmark case.

These findings highlight that bank heterogeneity is an essential determinant of liquidity contagion, since weak nodes are the channels that amplify the initial shock and create cascade effects. Moreover, applying average LGD parameters to banks overestimates credit losses. The intuition behind this effect is that, in reality, banks with larger and riskier exposures tend to demand a higher collateral amount, which results in counterbalancing-risk behavior and a lower exposure-specific LGD vis-à-vis riskier counterparties.

Overall, the results from this exercise support the importance of calibrating the model with bank- and exposure-level specificities and show how the inclusion of prudential regulations, such as the HQLA surplus, in the model parameters is essential for a more precise estimation of liquidity risk. Modeling parameter heterogeneity leads to significant corrections and overall changes in the systemic risk patterns as well as in the bank-specific CI and VI indicators. In addition, this exercise clearly highlights the role played by weak nodes in amplifying contagion, and how neglecting them may lead to estimation bias.

5.4 Macro Stress Test Scenarios

The framework has so far been applied as an exercise simulating the hypothetical failure of each bank separately. Bank defaults may occur for purely institution-specific reasons. However, bank defaults (or distress) often happen against the background of widespread stress in the financial system. Under such stressed circumstances the contagion potential from one bank defaulting might be more pronounced, as the banks' counterparties are also in a weakened position. To explore contagion risk under a generalized financial distress situation, we incorporate the EBA 2016 stress test adverse scenario in our framework.39

In this exercise, as shown in equation (28), we recalibrate the capital surplus (kiDS_ST16) of each bank in line with its capital depletion (ciST16) that resulted from the 2016 EBA Stress Test of EU banks under an adverse scenario (Figure 13).40

Figure 13.

Distress Threshold Calibrated to EBA 2016 Stress Test Capital Depletion


Source: COREP Supervisory Data Templates C.01 – C.03, and Bankscope.
$k_i^{DS\_ST16} = k_i^{DS} - c_i^{ST16}$ (28)

Overall, as shown in Figure 14, the weakened solvency position of euro area banks results in a disproportionate increase across individual contagion indices, pointing to the non-linearity of the transmission mechanisms. The average losses caused in aggregate by the top 10 systemic banks increase by a factor of 1.25. In addition, accounting for an adverse macro scenario reshuffles the ranking of the most systemic banks. In contrast, the vulnerability ranking is more affected at the center of the distribution, with the most vulnerable banks preserving their position. For this indicator too, the effects point to non-linearities, as shown by uneven changes across banks. Overall, the number of contagion defaults more than triples, increasing from 20 to 67. The average amplification and sacrifice ratios for the top 50 banks increase, respectively, from 0.05 to 0.25 and from 0.52 to 0.61 (for the euro area authority), with 2 additional cases of a greater-than-1 sacrifice ratio, where recapitalizing the banks would cost less than the contagion losses to the system.

Figure 14.

Non-linear Effects of a Stress Test Scenario


Note: Benchmark refers to results presented in subsections 5.1 and 5.2.

6. Sensitivity Analysis

In this section we test the sensitivity of the results to a range of parameter assumptions to disentangle the key determinants of contagion and vulnerability of the euro area banking system.

6.1 Default threshold

The first sensitivity analysis tests a default threshold assumption against the more conservative distress threshold assumption used in the benchmark model. The distress threshold takes a more conservative position by imposing a higher threshold that incorporates all relevant systemic risk and countercyclical buffers, as described in more detail in equations (19a) and (19b). As can be observed in Figure 15 (Panel a), on average the affected banks operate with a capital surplus based on the default threshold which is 1% higher (in RWA terms) than the capital surplus based on the distress threshold, with some outliers close to 3%. Nonetheless, this difference does not produce any material variation in the contagion and vulnerability indices. The explanation for this finding is that the most systemic banks are also the ones facing the highest macroprudential buffer requirements (SRBi, GSIIi, OSIIi) and generally did not fail in the benchmark exercise with a distress threshold, which by construction affords them a smaller surplus to absorb losses than the default threshold. Moreover, the CCyB is currently of small magnitude, thereby reducing the capital surplus under the distress threshold only by a small amount.

Figure 15.

Sensitivity to Capital Measures

Panel (a). Distress vs. default threshold for hurdle rate


Note: Delta charts are ordered according to CET1 distress threshold.

6.2 Capital base

A second robustness check tests whether changing the capital base from CET1 to own funds (total capital) affects the capital surplus based on the distress threshold in a sizeable manner. As reported in Figure 15 (Panel b), banks may face either a decrease or an increase in the capital surplus, depending on whether the increased amount of minimum capital requirements (from 4.5% CET1 to 8% own funds) outweighs the additional capital included in own funds, i.e. additional tier 1 and tier 2 instruments. Overall, the results are almost invariant to the selection of the capital base. In both exercises, the findings remain unchanged relative to the benchmark case.

6.3 Liquidity dynamics

The third exercise aims to test the sensitivity of the results to a deterioration of banks' liquidity positions. In this respect, the key liquidity parameters, respectively the funding shortfall (ρi), the net liquidity position (γi) and the pool of assets available for sale (θi), are adjusted to make the funding shock more severe conditional on a default event. For the funding shortfall, two alternatives are tested: (i) raising the maturity assumption underlying the calculation of short-term funding from one month, as in the benchmark case, to 3 months, resulting in sample averages of 35 percent and 40 percent, respectively; and (ii) raising the short-term funding rate to 50 percent uniformly. As Figure 16 (Panel a) shows, these changes produce a negligible effect on the contagion index, with only two banks on the contagion side facing a relevant positive adjustment.41 This might be attributed to the observation that the current liquidity surplus (γi) is large enough to cover the additional short-term liquidity needs. Next, we suppose a reduction of 20% and 40% in the net liquidity position (γi), for instance due to a sudden depletion of the buffer of HQLAs or a higher run-off rate of deposits (Figure 16, Panel b). In this case, results are almost unaffected for both indicators, implying that the bank-specific liquidity buffer is well above the threshold needed to cover short-term liquidity needs. Finally, we assume that the pool of assets available for sale (θi) faces a reduction of 20% and 40%, respectively (Figure 16, Panel c). Results are unchanged because no banks other than those previously triggering the fire-sale stage are liquidity constrained, and therefore no additional losses or failures are accounted for in the system under this assumption.42
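The sketch below shows one way the parameter shocks used in this exercise could be applied to bank-level vectors; the helper and all values are illustrative, not the paper's code.

import numpy as np

def shock_liquidity(gamma, theta, rho, d_gamma=0.0, d_theta=0.0, rho_override=None):
    """Apply the parameter shocks discussed above to bank-level vectors (toy values).
    d_gamma / d_theta are relative changes (e.g. -0.20, -0.40); rho_override imposes a
    uniform short-term funding rate (e.g. 0.50) instead of the benchmark one-month rate."""
    gamma_s = gamma * (1.0 + d_gamma)
    theta_s = theta * (1.0 + d_theta)
    rho_s = np.full_like(rho, rho_override) if rho_override is not None else rho
    return gamma_s, theta_s, rho_s

gamma = np.array([0.06, 0.03])   # hypothetical liquidity surpluses
theta = np.array([0.10, 0.02])   # hypothetical pools of saleable assets
rho = np.array([0.30, 0.35])     # hypothetical short-term funding rates
print(shock_liquidity(gamma, theta, rho, d_gamma=-0.20, rho_override=0.50))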

Figure 16.

Sensitivity to Funding and Liquidity Measures


Note: Benchmark model refers to fully calibrated model presented in subsections 5.1 and 5.2.

6.4 Network structure

The fourth sensitivity analysis aims at capturing the role played by the network structure. To do so, we exploit additional granular exposure-level information from COREP template C.28 regarding bilateral linkages. Hence, we test the sensitivity of the contagion and vulnerability scores to a network structure based on total gross amounts (including exemptions), an overall increase of about 30 percent. As shown in Figure 17 (Panel a), when we consider the network structure based on gross amounts (€ 1120 billion), both contagion and vulnerability indices increase, as expected. However, these effects are strong and distributed unevenly, pointing to non-linearities. In fact, the average amplification ratio for the top 50 banks increases to 0.5 (from 0.1 in the benchmark case), raising the number and profile of tipping points in the system. Several banks rise from the lower half of the contagion index ranking to the top of the list, depleting almost 3-4% of the capital of the system upon their default, while average contagion losses in the system go up by 40 percent, resulting in a total of 37 contagion defaults (almost twice as many as in the benchmark).

Figure 17.

Sensitivity to Network Structure and Parameter Interaction, and Comparison with Market-based Indicators


Note 1: The network structure is based on the statistics presented in Table 10 regarding gross exposure amounts before deducting exemptions. Benchmark refers to results presented in subsections 5.1 and 5.2.
Note 2: The counterfactual scenarios have been modelled as follows: the liquidity scenario with Δρi = 40%, ΔNLPi = -20%, and Δθi = -20%; the capital scenario with a 20% decrease in the capital surplus; and the network scenario (Exp) with a 30% increase (€ 1120 billion) in exposure amounts.
Note 3: Regarding the SRISK ranking, bank number 1 is the one with the highest SRISK estimate, i.e. the one in the top-right corner of the SRISK index. MES-based SRISK estimates for European banks have been retrieved from vlab.stern.nyu.edu.

6.5 Parameter interaction

The fifth exercise tests the sensitivity of the results to the interaction of different parameters, as in Kok and Montagna (2016): liquidity, solvency and network topology. In this respect, we first assume a full liquidity shock affecting all parameters as in the third sensitivity analysis, but contemporaneously (Δρi = 40%; Δγi = -20%; Δθi = -20%). Then we decrease the capital surplus by 20%, and we proportionally increase the exposure amounts by 30%, corresponding to the gross amounts presented in the fourth sensitivity analysis (€ 1120 billion). As we can see from Figure 17 (Panel b), the interplay of liquidity parameters pushes up the contagion and vulnerability indices of some banks, but does not result in a substantial system-wide increase. When this effect is combined with lower levels of capital surplus, more banks register higher indices, albeit without a significant system-wide change. When, in addition, the size of all bilateral exposures is increased, more non-linear effects are produced by the interaction with the liquidity and solvency parameters. This exercise once more highlights the role played by tipping points within a network and how bank-specific and exposure-specific characteristics may determine non-linear amplification effects resulting in a higher degree of systemic risk.

6.6 Market-based measures

Finally, we compare our model-based estimates of the contagion and vulnerability indices to market data-based measures of individual banks' systemic footprint. We use the SRISK index based on the Global Dynamic Marginal Expected Shortfall (MES) retrieved from V-Lab and based on Acharya et al. (2012).43 For this purpose, we download and adjust the SRISK index for the European Union such that it overlaps with our sample of euro area banks, ending up with a common sample of 40 banks.44 We then construct a SRISK index based on our model-based estimates by dividing the numerator of the vulnerability index (total losses experienced by each bank) by the total losses experienced by the system. By doing this, we derive the proportional contribution of each firm's SRISK to the total positive SRISK of the euro area banking system. Figure 17 (Panel c) depicts both SRISK indices and the rankings based on the SRISK index. The two measures display a high correlation: 0.85 for the SRISK index and 0.74 for the SRISK ranking. As can be seen in the bottom left of the SRISK ranking panel, for the top-10 banks our balance sheet-based approach captures the very same banks as the MES-based SRISK approach. Nevertheless, we find divergences between market-based measures such as SRISK and balance sheet-based measures such as ours. While the overall correspondence (high correlation) between our VI measure and SRISK is reassuring, we identify notable differences for individual banks in the middle of the distribution. This is potentially due to our modelling strategy, which takes into account bank-specific characteristics that are only implicitly reflected in market-based measures such as SRISK.
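A minimal sketch of the comparison, assuming invented inputs: the model-based SRISK share is the VI numerator divided by total system losses, and it is then correlated (in levels and in ranks) with a stand-in for the V-lab SRISK series.

import numpy as np

rng = np.random.default_rng(5)
n = 40                                                 # overlapping sample size mentioned in the text
total_losses = rng.uniform(1.0, 30.0, size=n)          # toy: VI numerator, total losses experienced per bank
model_srisk_share = total_losses / total_losses.sum()  # model-based SRISK share of each bank
market_srisk = model_srisk_share + rng.normal(0, 0.005, n)   # toy stand-in for V-lab SRISK shares

level_corr = np.corrcoef(model_srisk_share, market_srisk)[0, 1]
rank_corr = np.corrcoef(model_srisk_share.argsort().argsort(),
                        market_srisk.argsort().argsort())[0, 1]   # Spearman-type rank correlation
print(round(level_corr, 2), round(rank_corr, 2))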

7. Macroprudential Policy Calibration

7.1 Fine-tuning prudential measures

The modelling framework can also be used to conduct ex-ante impact analysis of prudential policy measures based on counterfactual analyses. By exploiting the breakdown of the vulnerability index into credit and funding risks, we are able to target banks with specific liquidity and capital vulnerabilities that may give rise to contagion.

For illustrative purposes, our counterfactual (macro)prudential simulations consist of (i) increasing the buffer of HQLA assets (γi), (ii) increasing the pool of available-for-sale assets (θi), and/or (iii) increasing the CET1 capital surplus (kiDF). The three distinct policy measures aim at assessing the effectiveness of a single and equal increase in each of the parameters in reducing the number of banks experiencing a liquidity or solvency default in Table 4.

Specifically, we consider a mix of the three policy measures, based on a simple optimization problem in which we minimize the default frequency (DF) – the number of defaults experienced – so as to reduce the losses incurred due to second-round effects (equation 21). This optimization problem is subject to an inequality constraint establishing that the sum of the parameters may not be larger than a certain buffer (F), and each parameter should be at least equal to or higher than its starting value (γ̄, θ̄, k̄DF) derived from the bank-specific calibration.

$\min_{\{\gamma_i, \theta_i, k_i^{DF}\}} DF(\gamma_i, \theta_i, k_i^{DF})$ (29)
s.t. $\vartheta_1 k_i^{DF} + \vartheta_2 \gamma_i + \vartheta_3 \theta_i \leq F; \quad k_i^{DF} \geq \bar{k}^{DF}, \; \gamma_i \geq \bar{\gamma}, \; \theta_i \geq \bar{\theta}$.

For simplicity, we set ϑ1 = ϑ2 = ϑ3 = 1. Figure 18 (Panels a and b) presents the default frequency and contagion defaults for the top-80 and top-40 banks with the highest number of defaults experienced and induced, respectively. For the presentation of the results, the banks are grouped into four equal buckets of twenty (ten) banks each, in descending order from most vulnerable (contagious) to least.45 The benchmark case reports the benchmark results for comparative purposes.
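Because the default frequency is a discrete object, equation (29) does not lend itself to standard smooth optimizers; one simple way to approximate it is a greedy allocation of the buffer F, as in the illustrative sketch below. The simulator default_frequency is a placeholder, and this is not the authors' solution method.

import numpy as np

def greedy_policy_mix(k, gamma, theta, budget, step, default_frequency):
    """Greedy approximation to the constrained minimization in equation (29).
    At each step, the increment goes to whichever instrument (capital surplus k,
    HQLA buffer gamma, or saleable-asset pool theta) lowers the simulated default
    frequency the most. default_frequency(k, gamma, theta) is a placeholder for
    the contagion simulation; the routine is an illustrative sketch only."""
    buffers = {"k": k.astype(float).copy(),
               "gamma": gamma.astype(float).copy(),
               "theta": theta.astype(float).copy()}
    spent = 0.0
    while spent + step <= budget:                              # budget constraint with theta_1 = theta_2 = theta_3 = 1
        trials = {}
        for name in buffers:
            candidate = {n: v.copy() for n, v in buffers.items()}
            candidate[name] += step / len(candidate[name])     # spread the increment across banks
            trials[name] = (default_frequency(**candidate), candidate)
        best = min(trials, key=lambda n: trials[n][0])
        buffers = trials[best][1]
        spent += step
    return buffers

# toy usage with a made-up simulator: defaults fall as total buffers rise
toy_df = lambda k, gamma, theta: max(0.0, 10.0 - (k.sum() + gamma.sum() + theta.sum()))
result = greedy_policy_mix(np.full(4, 0.5), np.full(4, 0.5), np.full(4, 0.5),
                           budget=3.0, step=1.0, default_frequency=toy_df)
print({n: np.round(v, 2) for n, v in result.items()})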

Figure 18.

Comparative Statistics of Policy Options


Note: Benchmark model refers to fully calibrated model presented in subsections 5.1 and 5.2.

The first exercise (LCR adj.) tests the effectiveness of an increase in the liquidity buffer (γi) of EUR 25 billion so as to increase the LCR ratio and better absorb the funding shortfall induced by a default event.46 Since we are able to disentangle which banks suffer liquidity and which suffer solvency defaults, we apply this policy experiment to 8 banks experiencing a default event. The experiment decreases the number of defaults by 16 units (red bars) out of the 20 reported initially in the benchmark case (black bars). The effectiveness of this treatment is close to optimal, since the 4 remaining failures are all solvency driven.

The second exercise (Pool adj.) resembles the first and tests the effectiveness of an increase of EUR 25 billion in the pool of assets available for sale (θi) in curbing liquidity-driven defaults. As captured by the blue bars, this policy rule is less effective than an equal increase in the HQLA buffer; nevertheless, it still reduces the number of liquidity-driven defaults by 11 units.

The third exercise (CET1 adj.) increases the average capital surplus by EUR 25 billion. Applying this policy measure reduces the number of defaults by only 6 units, although the number of solvency-driven defaults declines to 0.

The optimal policy mix set up by minimizing the default frequency is able to bring the total number of defaults among the top-80 most vulnerable banks to 0. Given the set-up of the model, as described in equation (21), the most effective allocation is divided between an increase in the liquidity surplus (γi) for those banks facing liquidity-driven defaults and an increase in the CET1 capital surplus for those banks facing capital shortfalls. No funds are allocated to the pool of assets, because any amount used to absorb the funding shortfall would be discounted by a bank-specific discount rate, leading to a less effective outcome than an equal increase in the buffer of HQLA assets, to which no discount rate is applied by assumption.47

Overall, the policy mix is able to bring the contagion defaults induced by the top-40 most systemic banks to zero. However, as shown in Figure 18 (Panels c and d), which report the vulnerability and contagion indices, the scale of losses induced and experienced even after the policy-mix treatment still presents a fat-tailed distribution. This emphasizes that decreasing the number of cascade defaults reduces the contagion and vulnerability indices only by the contribution of amplification effects, which in this exercise is limited. In this regard, first-round effects dominate the losses incurred due to cascade defaults.48 This finding has important policy implications, which are addressed in the next section.

A final remark on the counterfactual policy simulations is that they focus solely on the benefits of curbing contagion risk in the system by making vulnerable banks more resilient. What is not considered, however, are the potential costs of imposing more stringent requirements on banks (e.g. requiring higher liquidity and/or capital buffers). In practice, policymakers need to conduct a cost-benefit analysis that also considers the costs of increased regulation, for instance in terms of lower intermediation activity in the interbank market and other market segments where banks interact.

7.2 Policy discussion

This analysis is based on a comprehensive information set of euro area banks' large exposures vis-à-vis the global banking system. Although this is arguably the most extensive dataset in the bank network literature, policy implications based on the contagion and vulnerability indicators should be interpreted carefully. In fact, both our dataset and our modelling strategy are developed to study systemic risk emanating from a bank's default with a euro area-centric perspective. This is important to underline because our results are based solely on contagion within the banking system. The inclusion of the rest of the euro area banking system, or of other sectors such as non-bank financials and non-financial corporations, would change the amount of estimated losses and likely the relative contagiousness and vulnerability rankings. In this regard, our estimated losses should be considered at the lower end of the range and thus appropriate as a baseline.

The exercise in this paper is based on a deterministic model, where the initial shock captures the hypothetical failure of a given bank in the network. The exercise then measures potential systemic losses and compares banks on a systemic risk map (Figure 11). This analysis allows policy-makers to identify banks that can generate large system-wide losses within this framework and to consider supplementing bank-specific systemic risk buffers, providing incentives for these banks to reduce their systemic importance.

Our findings suggest that regulators should look at the interplay of network topology and bank-specific characteristics, in addition to setting prudential requirements based on individual supervisory assessments of each bank in isolation. Tipping points shifting the financial system from a less vulnerable state to a highly vulnerable state are a non-linear function of the combination of network structures and bank-specific characteristics.49 While it is critical to identify systemic banks, policies aiming at reducing systemic risk should also focus on increasing the resilience of weak nodes in the system by curbing potential amplification effects due to contagion defaults. Such measures, when combined, could provide incentives to reduce the concentration of asset or liability exposures, reshaping the network structure so that the system is less prone to shocks.

The last exercise shows that prudential buffers targeting both solvency and liquidity defaults could be calibrated at the bank level in order to minimize amplification effects due to contagion. Second-round default events are the key determinant of non-linear effects through loss amplification, making them a natural candidate as an intermediate policy target for macroprudential supervisors. In other words, the macroprudential regulator could aim at reducing the role played by the network structure in spreading contagion and exposing the vulnerability of banks to shocks hitting the network. This paper proposes a methodology to capture such amplification effects as reflected in the contagion and vulnerability indices (Figure 11) and their determinants (Table 3 and Table 4). Moreover, as illustrated in subsection 7.1, the CoMap methodology can be used to run counterfactual simulations to study the effectiveness of different prudential actions in reducing contagion potential in the network.

Finally, with regard to fostering international cooperation in times of globalization, we have shown how regional incentives may not lead to an optimal outcome in terms of reducing systemic risk externalities. Due to increased cross-border exposures, a bank failure may induce more losses outside than within its own jurisdiction, thereby not justifying a full recapitalization at the national level. From the point of view of a global or euro area central planner, the cost-benefit trade-off would weigh more heavily on the side of intervention. This emphasizes that the current architecture of the financial system could also be enhanced by improving ex-post coordination, striving for a Pareto-efficient allocation of resources. In this regard, the Global Financial Crisis has been a testament to the relevance of this system failure, especially in times when the functioning of the financial sector is already constrained (Acharya et al., 2017).

8. Conclusion

Our euro-centric systemic risk assessment, based on the network of euro area banks' large exposures within the global banking system, highlights that the degree of bank-specific contagion and vulnerability depends on network-specific tipping points that directly affect the magnitude of amplification effects. This leads to the conclusion that the identification of such tipping points and their determinants is the essence of effective micro- and macroprudential supervision. Current financial regulations seek to limit each institution's risk in isolation, underestimating the contribution of systemic risk to overall fragility.

In this paper, we argue that, in isolation, variations in bank-specific characteristics seem to play a lesser role than the network structure in changing the degree of amplification effects (non-linearities). Large and uneven shifts across banks in the contagion and vulnerability indicators, observed when changes in bank-specific characteristics are combined with changes in the network structure, point to the importance of non-linearities arising from their interactions and their heterogeneous impact on banks. In a variety of tests, heterogeneity in the magnitude of bilateral exposures and of bank-specific parameters is detected as a key driver of the total number of defaults in the system. Unless systemic risk externalities are internalized by each bank in the network, bank recapitalizations may still be acceptable from a cost-benefit trade-off angle for a global or European central planner. It follows that international cooperation is essential to limit ex-ante uncertainty and reduce ex-post system-wide losses, striving for a Pareto-efficient outcome.

Several extensions of our work should be explored in the future. The results are network- and model-dependent, based on an incomplete set of bilateral exposures. Therefore, both dimensions need to be extended to include additional channels of contagion and, in turn, improve the loss estimates of an extreme event. As a natural next step, work should be done to incorporate: (i) euro area less significant institutions, to complete the euro area banking system; (ii) financial corporations, to model the complex interactions within the financial system; and (iii) exposures to the real economy, to capture feedback loops. Moreover, the missing interlinkages among extra-euro area entities could be imputed by generating random networks consistent with partial information, as in Halaj and Kok (2013). Without changing the model assumptions, enlarging the dataset dimension could improve contagion mapping and, hence, the estimation of amplification effects.

In terms of modelling strategy, the analysis would benefit from including a confidence channel to capture liquidity-hoarding behavior. This feature would bring funding risk to the forefront; its contribution to systemic losses is potentially underestimated under the current approach, which is mainly credit-risk driven. While we estimate fire-sale losses under a static approach, another way is to model them dynamically by exploiting information on cross-holdings of assets and deriving the discount rate endogenously à la Cont and Schaanning (2017). It is also important to investigate the role of additional prudential requirements currently missing from our framework, such as binding leverage and net stable funding ratios, which are being implemented according to the internationally agreed Basel standards. Finally, given the significance of the network structure in determining contagion risks, it would be important to study how the network structure changes over time and in response to systemic shocks, as well as how different network structures affect model results (see Acemoglu et al., 2015; Elliott et al., 2014). This would enrich our understanding of financial networks, an area explored mostly from a theoretical perspective so far.

In conclusion, uncertainty surrounding the global financial network requires regulators to handle an ever more complex set of information so as to be prepared in case such an adverse event takes place. Moreover, networks are adaptive, and so the policy mix needs to be. We have provided a framework to capture some features of such complexity; many more need to be modelled to clear the fog of uncertainty and to take decisions when dangers to financial stability suddenly emerge.

References

  • Acemoglu, D., A. Ozdaglar, and A. Tahbaz-Salehi (2015). "Systemic Risk and Stability in Financial Networks," American Economic Review 105(2): 564–608.

  • Acharya, V.V., R. Engle, and M. Richardson (2012). "Capital Shortfall: A New Approach to Ranking and Regulating Systemic Risk," American Economic Review: Papers & Proceedings 102(3): 59–64.

  • Acharya, V.V., L.H. Pedersen, T. Philippon, and M. Richardson (2017). "Measuring Systemic Risk," The Review of Financial Studies 30(1): 2–47.

  • Aikman D., Haldane A., Hinterschweiger M., and S. Kapadia (2018), “Rethinking Financial Stability”, Staff Working Paper No. 712, Bank of England

  • Allen, F., and E. Carletti (2006). "Credit Risk Transfer and Contagion," Journal of Monetary Economics 53: 89–111.

  • Allen, F., and D. Gale (2000). "Financial Contagion," Journal of Political Economy 108: 1–33.

  • Alter, A., and A. Beyer (2013). "The Dynamics of Spillover Effects During the European Sovereign Debt Turmoil," Journal of Banking and Finance 42: 134–153.

  • Anand K., B. Craig and G. von Peter (2014), “Filling in the Blanks: Network Structure and Interbank Contagion”, Discussion Paper 02/2014, Deutsche Bundesbank

  • Albert, R., and A.-L. Barabasi (2002). "Statistical Mechanics of Complex Networks," Reviews of Modern Physics 74(1): 47–97.

  • Bargigli, L., F. Lillo, L. Infante, and F. Pierobon (2015). "The Multiplex Structure of Interbank Networks," Quantitative Finance 15(4): 673–691.

  • Basu S., S. Das, G. Michailidis, A. Purnanandam (2017), A system-wide approach to measure connectivity in the financial sector. Unpublished working paper.

  • Battiston, S., J. Lorenz, and F. Schweitzer (2009). "Systemic Risk in a Unifying Framework for Cascading Processes on Networks," The European Physical Journal B 71(4): 441–460.

  • Battiston, S., D. Delli Gatti, M. Gallegati, B. Greenwald, and J.E. Stiglitz (2012). "Default Cascades: When Does Risk Diversification Increase Stability?" Journal of Financial Stability 8: 138–149.

  • Battiston, S., and S. Caldarelli (2012). "Systemic Risk in Financial Networks," Journal of Financial Management, Markets and Institutions 1(2): 129–254.

  • Billio, M., M. Getmansky, A.W. Lo, and L. Pelizzon (2012). "Econometric Measures of Connectedness and Systemic Risk in the Finance and Insurance Sectors," Journal of Financial Economics 104(3): 535–559.

  • BIS – Bank for International Settlements (2014), Supervisory framework for measuring and controlling large exposures. Standards, Basel Committee on Banking Supervision

  • BIS – Bank for International Settlements (2018), Basel Committee on Banking Supervision, Global Systemically Important Banks – Revised Assessment Methodology and the Higher Loss Absorbency Requirement (July 2018)

  • Boss, M., H. Elsinger, M. Summer and S.