Consistent Quantitative Operational Risk Measurement and Regulation: Challenges of Model Specification, Data Collection and Loss Reporting
Author: Andreas Jobst

Contributor Notes

Author’s E-Mail Address: ajobst@imf.org


Abstract

Amid increased size and complexity of the banking industry, operational risk has a greater potential to transpire in more harmful ways than many other sources of risk. This paper provides a succinct overview of the current regulatory framework of operational risk under the New Basel Capital Accord with a view to inform a critical debate about the influence of varying loss profiles and different methods of data collection, loss reporting, and model specification on the reliability of operational risk estimates and the consistency of risk-sensitive capital rules. The presented findings offer guidance on enhanced market practice and more effective prudential standards for operational risk measurement.

I. INTRODUCTION

1. While financial globalization has fostered higher systemic resilience through more efficient financial intermediation and greater asset price competition, it has also complicated banking regulation and risk management in banking groups. Given the increasing sophistication of financial products, the diversity of financial institutions, and the growing interdependence of financial systems, globalization raises the potential for markets and business cycles to become highly correlated in times of stress and makes crisis resolution more intricate while banks are still lead-regulated at the national level. At the same time, the deregulation of financial markets, the growing complexity of the banking industry, large-scale mergers and acquisitions, and the greater use of outsourcing arrangements have raised the susceptibility of banking activities to operational risk.

2. Operational risk has a greater potential to transpire in more harmful ways than many other sources of risk, given the increased size and complexity of the banking industry. It is commonly defined as the risk of some adverse outcome resulting from acts undertaken (or neglected) in carrying out business activities, from inadequate or failed internal processes and information systems, from misconduct by people, or from external events and shocks.1 Although operational risk has always existed as one of the core risks in the financial industry, it is now becoming an ever more salient feature of risk management in the presence of new threats to financial stability from higher geopolitical risk, poor corporate governance, and systemic vulnerabilities from a slew of financial derivatives. Technological advances in particular have spurred rapid financial innovation and the proliferation of financial products, which involve several business lines and entail greater reliance of banks on services and systems susceptible to heightened operational risk, such as e-banking and automated processing.

3. Against this background, concerns about the soundness of traditional operational risk management (ORM) practices and techniques, and the limited capacity of regulators to address these challenges within the scope of existing regulatory provisions, have prompted the Basel Committee on Banking Supervision to introduce capital adequacy guidelines for operational risk in its recent overhaul of the existing capital rules for internationally active banks.2 As the revised banking rules on the International Convergence of Capital Measurement and Capital Standards ("Basel II" for short) move away from rigid controls towards enhancing efficient capital allocation through the disciplining effect of capital markets, improved prudential oversight, and risk-based capital charges, banks now face more rigorous and comprehensive risk measurement requirements (Basel Committee, 2004a, 2005a and 2006b).

4. The new regulatory provisions link minimum capital requirements more closely to the actual riskiness of bank assets in a bid to redress shortcomings of the overly simplistic 1988 Basel Capital Accord. While the old capital standards for calculating bank capital were devoid of any provisions for exposures to operational risk and asset securitization, the new, more risk-sensitive regulatory capital rules include an explicit capital charge for operational risk, which has been defined in a separate section of the new supervisory guidelines based on previous recommendations in the Consultative Document on the Regulatory Treatment of Operational Risk (2001d), the Working Paper on the Regulatory Treatment of Operational Risk (2001c), and the Sound Practices for the Management and Supervision of Operational Risk (2001a, 2002 and 2003b).

5. The implementation of the New Basel Capital Accord in the U.S. underscores the particular role of operational risk as part of the new capital rules. On February 28, 2007, the federal bank and thrift regulatory agencies published the Proposed Supervisory Guidance for Internal Ratings-based Systems for Credit Risk, Advanced Measurement Approaches for Operational Risk, and the Supervisory Review Process (Pillar 2) Related to Basel II Implementation (based on previous advance notices of proposed rulemaking in 2003 and 2006). These supervisory implementation guidelines of the New Basel Capital Accord thus far require some and permit other qualifying banking organizations (mandatory and "opt-in")3 to adopt the Advanced Measurement Approaches (AMA) for operational risk (together, the "advanced approaches") as the only acceptable method of estimating capital charges for operational risk. The proposed guidance also establishes the process for supervisory review and the implementation of the capital adequacy assessment process under Pillar 2 of the new regulatory framework. Other G-7 countries, such as Germany, Japan, and the United Kingdom, have taken similar measures as regards a qualified adoption of capital rules and supervisory standards for operational risk measurement.

6. The following paper first reviews the current regulatory framework of operational risk under the New Basel Capital Accord. Given the inherently elusive nature of operational risk and the considerable cross-sectional diversity of methods to identify operational risk exposure, the paper informs a critical debate about two key challenges in this area: (i) the accurate estimation of the asymptotic tail convergence of extreme operational risk events, and (ii) the consistent definition and implementation of loss reporting and data collection across different areas of banking activity in accordance with the New Basel Capital Accord. The paper explains the shortcomings of existing loss distribution approach (LDA) models and examines the structural and systemic effects of heterogeneous data reporting on loss characteristics, which influence the reliability and comparability of operational risk estimates for regulatory purposes. The findings of this paper offer guidance and instructive recommendations for enhanced market practice and a more effective implementation of capital rules and prudential standards for operational risk measurement.

II. CURRENT PRACTICES OF OPERATIONAL RISK MEASUREMENT AND REGULATORY APPROACHES

7. The measurement and regulation of operational risk is quite distinct from that of other types of banking risks. Operational risk deals mainly with tail events rather than central projections or tendencies, reflecting aberrant rather than normal behavior and situations. Thus, the exposure to operational risk is less predictable and even harder to model, because extreme losses are one-off events of large economic impact without historical precedent. While some operational risk exposure follows very predictable stochastic patterns, whose high frequency caters to quantitative measures, there are many other types of operational risk for which there is not, and may never be, data to support anything but an exercise of subjective judgment and estimation. In addition, the diverse nature of operational risk, arising from internal or external disruptions to business activities, and the unpredictability of their overall financial impact complicate systematic measurement and consistent regulation.

8. The historical experience of operational risk events suggests a heavy-tailed loss distribution, i.e., there is a higher chance of an extreme loss event (with high loss severity) than the shape of the standard limit distributions would suggest. While banks should generate enough expected revenues to support a net margin that absorbs expected losses (EL) from various types of errors and predictable internal failures in all aspects of bank processes, they also need to provision sufficient economic capital as risk reserves to cover unexpected losses (UL) from large, one-off internal and external shocks, or resort to insurance/hedging agreements. If we define the distribution of operational risk losses as an intensity process $P(t)$ of time $t$, the expected conditional probability $EL(T,t)=E\left[P(T)-P(t)\,\middle|\,P(T)-P(t)<0\right]$ specifies EL over time horizon $T$, while $UL(T,t)=P_{\alpha}(T,t)-EL(T,t)$ captures losses larger than EL up to a tail cut-off $E\left[P_{\alpha}(T)-P(t)\right]$, beyond which any residual or extreme loss ("tail risk") occurs with probability $\alpha$ or less. The asymptotic tail behavior of operational risk reflects highly predictable, small loss events left of the mean with a cumulative density of EL. Higher percentiles indicate a lower probability of extreme observations with high loss severity (UL). While EL attracts regulatory capital, the low frequency of UL exposure requires economic capital coverage.
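To make the EL/UL decomposition concrete, the following minimal sketch (not part of the original analysis) simulates a hypothetical aggregate annual loss distribution and splits it into EL (the mean) and UL (the distance between the 99.9th percentile and EL), mirroring the tail cut-off logic described above. The lognormal shape and all parameter values are arbitrary assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical annual aggregate operational losses (heavy-tailed, illustrative only)
annual_losses = rng.lognormal(mean=15.0, sigma=1.2, size=100_000)

alpha = 0.999                                  # soundness standard (99.9th percentile)
el = annual_losses.mean()                      # expected loss (EL)
var_alpha = np.quantile(annual_losses, alpha)  # loss quantile at the tail cut-off
ul = var_alpha - el                            # unexpected loss (UL) beyond EL

print(f"EL = {el:,.0f}")
print(f"VaR(99.9%) = {var_alpha:,.0f}")
print(f"UL = {ul:,.0f}")
```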

9. There are three major concepts of operational risk measurement: (i) the volume-based approach, which assumes that operational risk exposure is a function of the type and complexity of business activity, especially in cases when notoriously low margins (such as in transaction processing and payments-system related activities) have the potential to magnify the impact of operational risk losses; (ii) the comprehensive qualitative self-assessment of operational risk, with a view to evaluating the likelihood and severity of financial losses based on subjective judgment rather than historical precedent; and (iii) quantitative techniques, which have been developed by banks primarily for the purpose of assigning economic capital to operational risk exposures in compliance with regulatory capital requirements (see Box 1).

Operational Risk Management (ORM)

Many banks still rely on internal control processes, audit programs, insurance protection, and other risk management methods to identify, monitor and control operational risk based largely on qualitative assumptions and judgments. In such an environment, operational risk is managed by individual business units with little or no formality, process transparency or standardization (“silo approach”).

Over the recent past, the unprecedented scale of high-profile cases of substantial unexpected operational risk losses has reverberated in mounting unease about the soundness of traditional ORM practices. Amid regulatory efforts to re-examine the industry’s exposure to operational risk and its implications for efficient financial intermediation, some institutions have gone beyond traditional approaches and consolidated ORM in specialized departments or groups dedicated to the identification and control of exposures from particular aspects of operational processes and designated risk types, such as legal compliance, fraud, or vendor management/outsourcing. Notwithstanding the merits of improved overall risk awareness associated with centralized ORM, this approach classifies operational risk along functional lines and negates the comprehensive measurement of operational risk in end-to-end processes.

Modern ORM integrates ad hoc self-assessment of conventional approaches into a formal, enterprise-wide oversight function, which designs and implements the ORM framework as a structure to identify, measure, monitor, and control or mitigate operational risk based on independent evaluation and quantitative methods (Alexander, 2003). An ORM framework defines common operational policies and guidelines on a corporate level concerning roles and responsibilities as well as uniform risk assessment processes, reporting protocols and quantification methodologies within an agreed range of risk tolerance (Basel Committee, 2005b and 2006b).

The formal treatment of operational risk ensures the consistent application of standard risk management practices in end-to-end processes, while the self-assessment of exposures by individual business units reinforces business line risk ownership and eschews functional segmentation of risk awareness. A well-integrated ORM framework helps develop more effective management processes for the detection of potential operational risk exposures and the evaluation of adequate economic capital coverage commensurate with the overall risk profile.

10. The migration of ORM towards a modern framework has invariably touched off efforts to quantify operational risk as an integral element of economic capital models. These models comprise the internal capital measurement and management processes used by banks to allocate capital to different business segments based on their exposure to various risk factors (market, credit, liquidity and operational risk). Despite considerable variation in economic capital measurement techniques, ranging from qualitative managerial judgments to comprehensive statistical analysis, capital allocation for operational risk tends to be driven mainly by the quantification of losses relative to explicit exposure indicators (or volume-based measures) of business activity, such as gross income, which reflect the quality and stability of earnings to support capital provisioning. As modern ORM evolves as a distinct discipline, the push for quantification techniques of operational risk within more advanced economic capital models coincides with a changing regulatory regime, which approaches international adoption in 2007.

11. Regulatory efforts have contributed in large parts to the evolution of quantitative operational risk measurement as a distinct discipline. The Operational Risk Subgroup (AIGOR) of the Basel Committee Accord Implementation Group defines three different quantitative measurement approaches in a continuum of increasing sophistication and risk sensitivity for the estimation of operational risk based on eight business lines (BLs) and seven event types (ETs)4 as units of measure (Basel Committee, 2003a). Risk estimates from different units of measure must be added for purposes of calculating the regulatory minimum capital requirement for operational risk. Although provisions for supervisory review (Pillar 2 of Basel II) allow signatory countries to select approaches to operational risk that may be applied to local financial markets, such national discretion is confined by the tenet of consistent global banking standards. The first two approaches, the Basic Indicator Approach (BIA) and the (traditional) Standardized Approach (TSA),5 define deterministic standards of regulatory capital by assuming a fixed percentage of gross income over a three-year period6 as a volume-based metric of unexpected operational risk exposure (see Table 1). BIA requires banks to provision a fixed percentage (15 percent) of their average gross income over the previous three years for operational risk losses, whereas TSA sets regulatory capital to at least the three-year average of the summation of different regulatory capital charges (as prescribed percentages of gross income that vary by business activity) across BLs in each year (Basel Committee, 2003b). The New Basel Capital Accord also enlists the disciplining effect of capital markets (“market discipline” or Pillar 3) in order to enhance the efficiency of operational risk regulation by encouraging the wider development of adequate management and control systems. In particular, the current regulatory framework allows banks to use their own internal risk measurement models under the standards of the Advanced Measurement Approaches (AMA) as a capital measure that is explicitly and systematically more amenable to the different risk profiles of individual banks in support of more risk-sensitive regulatory capital requirements (see Box 2).

Table 1.

Overview of Operational Risk Measures According to the Basel Committee on Banking Supervision (2003a, 2004a, 2005b and 2006b).


The three main approaches to operational risk measurement, the Basic Indicator Approach (BIA), the Standardized Approach (TSA) and the Advanced Measurement Approaches (AMA) are defined in Basel Committee (2005b).

Gross income (GI) is defined as net interest income plus net non-interest income. This measure should: (i) be gross of any provisions (e.g., for unpaid interest), (ii) be gross of operating expenses, including fees paid to outsourcing service providers, (iii) exclude realized profits/losses from the sale of securities in the banking book, and (iv) exclude extraordinary or irregular items as well as income derived from insurance.
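As an illustration of the volume-based formulas summarized in Table 1, the sketch below computes hypothetical BIA and TSA charges from three years of gross income. The formulas follow the description in the text (15 percent of average gross income for BIA; the three-year average of beta-weighted gross income summed across BLs for TSA), but the beta values, the zero floor on negative annual totals, and the helper functions bia_charge and tsa_charge are indicative assumptions rather than an authoritative implementation of the Accord.

```python
# Illustrative BIA and TSA capital calculations (gross income in millions)
ALPHA_BIA = 0.15   # BIA fixed percentage of gross income

# Indicative beta factors per business line (assumed here for illustration)
BETAS = {
    "corporate_finance": 0.18, "trading_sales": 0.18, "retail_banking": 0.12,
    "commercial_banking": 0.15, "payment_settlement": 0.18, "agency_services": 0.15,
    "asset_management": 0.12, "retail_brokerage": 0.12,
}

def bia_charge(gross_income_3y):
    """15 percent of average positive gross income over the previous three years."""
    positive = [gi for gi in gross_income_3y if gi > 0]
    return ALPHA_BIA * sum(positive) / len(positive) if positive else 0.0

def tsa_charge(gi_by_bl_3y):
    """Three-year average of the yearly sum of beta-weighted gross income across
    business lines, with each year's total floored at zero (assumed treatment)."""
    yearly = [max(sum(BETAS[bl] * gi for bl, gi in year.items()), 0.0)
              for year in gi_by_bl_3y]
    return sum(yearly) / len(yearly)

# Hypothetical three-year gross income history
print(bia_charge([420.0, 385.0, 455.0]))
print(tsa_charge([{"retail_banking": 250.0, "trading_sales": 170.0},
                  {"retail_banking": 240.0, "trading_sales": 145.0},
                  {"retail_banking": 260.0, "trading_sales": 195.0}]))
```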

12. Operational risk measurement via AMA is based on the quantitative self-assessment (through internal measurement models) of the frequency and loss severity of operational risk events and represents the most flexible regulatory approach, subject to several qualitative and quantitative criteria and soundness standards.7 While the qualitative criteria purport to ensure the integrity of a sound internal operational risk measurement system for adequate risk management and oversight, the quantitative aspects of AMA define regulatory capital as protection against both EL and UL from operational risk exposure at a soundness standard consistent with a statistical confidence at the 99.9th percentile8 over a one-year holding period.9 Although the Basel Committee does not mandate the use of a particular quantitative methodology, it defines the use of (i) internal data, (ii) external data, (iii) scenario analysis, and (iv) Business Environment and Internal Control Factors (BEICFs) as quantitative elements of the estimation of operational risk under AMA.10

13. The loss distribution approach (LDA) has emerged as one of the most expedient statistical methods to calculate the risk-based capital charge for operational risk in line with these four quantitative criteria of AMA. LDA defines operational risk as the aggregate loss distribution derived from compounding empirical and/or estimated loss severity with the estimated frequency of operational risk events under different scenarios (see Figure 2). The definition of UL in the context of the LDA of operational risk concurs with the concept of Value-at-Risk (VaR),11 which estimates the maximum loss exposure at a certain probability bound. However, the rare incidence of severe operational risk losses defies statistical inference when measurement methods estimate maximum loss based on all data points of the empirical loss distribution. Conventional VaR is therefore a rather ill-suited metric for operational risk and warrants adjustment so that extremes are explicitly accounted for. For this reason, generalized parametric distributions within the domain of extreme value theory (EVT) complement VaR measures.
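A minimal Monte Carlo sketch of the LDA logic just described: an assumed Poisson event frequency is compounded with an assumed lognormal loss severity to build the aggregate annual loss distribution, from which the 99.9th percentile is read off as the VaR-style capital estimate. Both distributional choices and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

LAMBDA = 25            # assumed mean number of loss events per year (Poisson)
MU, SIGMA = 11.0, 2.0  # assumed lognormal severity parameters
N_YEARS = 100_000      # simulated years

# Compound frequency and severity into an aggregate loss for each simulated year
counts = rng.poisson(LAMBDA, size=N_YEARS)
aggregate = np.array([rng.lognormal(MU, SIGMA, size=n).sum() for n in counts])

el = aggregate.mean()
var_999 = np.quantile(aggregate, 0.999)
print(f"EL = {el:,.0f},  VaR(99.9%) = {var_999:,.0f},  UL = {var_999 - el:,.0f}")
```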

The Evolution of the Advanced Capital Adequacy Framework for Operational Risk

The current regulatory framework (“New Advanced Capital Adequacy Framework”) for operational risk is defined in the revisions on the International Convergence of Capital Measurement and Capital Standards (Basel Committee, 2004a, 2005b and 2006b) and supplementary regulatory guidance contained in the Consultative Document on the Regulatory Treatment of Operational Risk (2001d), the Working Paper on the Regulatory Treatment of Operational Risk (2001c) and the Sound Practices for the Management and Supervision of Operational Risk (2001a, 2002 and 2003b). As opposed to the old Basel Capital Accord, the new capital rules require banks to estimate an explicit capital charge for their operational risk exposure, in keeping with the development of more risk-sensitive capital standards.

The Basel Committee first initiated work on operational risk in September 1998 (Basel Committee, 1998) (see Figure 1), when it published—among other findings—results of an informal industry survey on operational risk exposure in various types of banking activities in A New Capital Adequacy Framework (1999). In January 2001, the Basel Committee (2001d) released its first consultative document on operational risk, followed by the Working Paper on the Regulatory Treatment of Operational Risk (2001c), which was prepared by the Risk Management Group, and the first draft implementation guidelines for Sound Practices for the Management and Supervision of Operational Risk (2001a) after an initial round of industry consultations. These supervisory principles established the first regulatory framework for the evaluation of policies and practices of effective management and supervision of operational risk.

Figure 1. The Evolution of the Regulatory Treatment of Operational Risk.

Figure 2. Loss Distribution Approach (LDA) for AMA of Operational Risk Under the New Basel Capital Accord.

In the next round of consultations on a capital charge for operational risk, the Basel Committee examined individual operational risk loss events, the banks’ quarterly aggregate operational risk loss experience, and a wider range of potential exposure indicators tied to specific BLs in order to calibrate uniform capital charges (Basic Indicator (BIA) and Standardized Approaches).12 Subsequent revisions of the Sound Practices for the Management and Supervision of Operational Risk in July 2002 and February 2003 (Basel Committee, 2002 and 2003b) concluded the second consultative phase.

After the third and final round of consultations on operational risk from October 2002 to May 2003, the Basel Committee presented three methods for calculating operational risk capital charges in a continuum of increasing sophistication and risk sensitivity (Basic Indicator Approach (BIA), (traditional) Standardized Approach (TSA), and Advanced Measurement Approaches (AMA)) to encourage banks to develop more sophisticated operational risk measurement systems and practices based on broad regulatory expectations about the development of comprehensive control processes. Banks were allowed to choose a measurement approach appropriate to the nature of banking activity, organizational structure, and business environment subject to the discretion of national banking supervisors, i.e., supervisory review (Pillar 2 of Basel II).

The Third Consultative Paper (or “CP 3”) in April 2003 (Basel Committee, 2003b) amended these provisions by introducing the Alternative Standardized Approach (ASA), which was based on a measure of lending volume rather than gross income as indicator of operational risk exposure from retail and commercial banking. Additionally, compliance with the roll-out provisions for operational risk was made substantially more difficult by hardened qualifying criteria for the Standardized Approach, which shifted the regulatory cost benefit analysis of banks with less sophisticated ORM systems in favor of BIA.

At the same time, the ability of national regulators to exercise considerable judgment in the way they would accommodate the new capital rules in their local financial system conjured up a delicate trade-off between the flexibility and consistency of capital rules for operational risk across signatory countries. Although national discretion was precluded from overriding the fundamental precepts of the new regulatory framework, the scope of implementation varied significantly by country.13 Some national banking supervisors selected only certain measurement approaches to operational risk for the implementation of revised risk-based capital standards. For example, in the Joint Supervisory Guidance on Operational Risk Advanced Measurement Approaches for Regulatory Capital (2003), U.S. banking and thrift regulatory agencies14 set forth that AMA would be the only permitted quantification approach for U.S.-supervised institutions to derive risk-weighted assets under the proposed revisions to the risk-based capital standards (Zamorski, 2003). These provisions were eventually endorsed in the Notice of Proposed Rulemaking (NPR) regarding Risk-Based Capital Guidelines: Internal Ratings-Based Capital Requirement (2006b).15

The pivotal role of supervisory review for the consistent cross-border implementation of prudential oversight also resulted in a “hybrid approach” to how banking organizations that calculate group-wide AMA capital requirements might estimate the operational risk capital requirements of their international subsidiaries. According to the guidelines of the Home-Host Recognition of AMA Operational Risk (Basel Committee, 2004b), a significant internationally active subsidiary of a banking organization that wishes to implement AMA and is able to meet the qualifying quantitative and qualitative criteria would have to calculate its capital charge on a stand-alone basis, whereas other internationally active subsidiaries that are not deemed to be significant in the context of the overall group receive an allocated portion of the group-wide AMA capital requirement.16 Significant subsidiaries would also be allowed to utilize the resources of their parent or other appropriate entities within the banking group to derive their operational risk estimate.17

On February 7, 2007, the Basel Committee augmented the existing guidelines related to the information sharing and capital allocation underpinning the home-host recognition concept. The consultative document Principles for Home-host Supervisory Cooperation and Allocation Mechanisms in the Context of Advanced Measurement Approaches (AMA) (Basel Committee, 2007) set forth principles that (i) establish a regulatory framework for information sharing in the assessment and approval of AMA methodologies and responsibilities of banks in the area of information sharing (including the factors influencing information sharing, as well as its scope, frequency and mechanics) and (ii) promote the development and assessment of allocation mechanisms incorporated as part of a hybrid AMA in terms of risk sensitivity, capital adequacy, subsidiary level management support, integration into Pillar 1, stability, implementation, documentation, internal review and validation, and supervisory assessment.

In June 2004, the Basel Committee released the first definitive rules on the regulatory treatment of operational risk as an integral part of its revised framework for the International Convergence of Capital Measurement and Capital Standards (2004a). In keeping with provisions published earlier in the Third Consultative Paper (Basel Committee, 2003b), the Committee stressed the importance of scenario analysis of internal loss data, business environment and exogenous control factors for operational risk exposure, as well as the construction of internal measurement models to estimate unexpected operational risk losses at the critical 99.9th percentile. The first comprehensive version of the New Basel Capital Accord prompted several banking regulators to conduct further national impact studies or field tests independent of the Basel Committee on Banking Supervision.

In the U.S., the Federal Financial Institutions Examination Council (FFIEC),18 the umbrella organization of U.S. bank and thrift regulatory agencies, jointly initiated the Loss Data Collection Exercise (LDCE)19 from June to November 2004 in a repeat of earlier surveys in 2001 and 2002. The LDCE was conducted as a voluntary survey20 that asked respondents to provide internal operational risk loss data over a long time horizon (through September 30, 2004), which would allow banking regulators to examine the degree to which different operational risk exposures (and their variation across banks) reported in earlier surveys were influenced by the characteristics of internal data or exogenous factors that institutions consider in their quantitative methods of modeling operational risk or their qualitative risk assessments. The general objective of the LDCE was to examine both (i) the overall impact of the new regulatory framework on U.S. banking organizations, and (ii) the cross-sectional sensitivity of capital charges to the characteristics of internal loss data and different ORM systems.

After a further round of consultations between September 2005 and June 2006, the Basel Committee defined The Treatment of Expected Losses by Banks Using the AMA Under the Basel II Framework (Basel Committee, 2005b), before it eventually released the implementation drafting guidelines for Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework (Basel Committee, 2005a). These guidelines were issued again in Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework - Comprehensive Version (Basel Committee, 2006b).21 In its latest publication on Observed Range of Practice in Key Elements of Advanced Measurement Approaches (AMA) (Basel Committee, 2006a), the Basel Committee describes emerging industry practices in relation to some of the key challenges in areas of internal governance, data modeling, and benchmarking exercises banks face in their efforts to adopt AMA standards.

III. THE MAIN MEASUREMENT CHALLENGES OF LOSS DISTRIBUTION APPROACH (LDA)

14. Since the incidence and relative magnitude of operational risk varies considerably across banks depending on the nature of business activities and the sophistication of internal risk measurement, systems and controls, efficient ORM hinges on several issues: (i) the judicious combination of qualitative and quantitative methods of risk estimation, (ii) the robustness of these models, given the rare incidence of high-impact operational risk events without historical precedent, (iii) the sensitivity of regulatory capital charges to the varied nature of operational risk and reporting standards across different business activities, as well as (iv) the risk of conflicting regulatory measures across countries as national supervisors follow different paths of supervisory review in implementing the New Basel Capital Accord.

15. Amid a more risk-sensitive regulatory framework with broadly defined expectations of ORM and greater discretion of banks to tailor prescribed measurement standards to their specific organizational structure, business activity, and economic capital models under the new capital rules, two main challenges emerge for the implementation of LDA and the calculation of UL risk estimates under AMA:

  1. the accurate estimation of asymptotic tail convergence of extreme losses at the 99.9th percentile level as defined by the quantitative AMA criteria and soundness standards (i.e., shortcoming of quantitative assessments), and

  2. the consistent and coherent implementation of data collection and loss reporting across different banks and areas of banking activity (i.e., units of measure), as risk and control self-assessments (RCSAs) vary in response to different business environment and internal control factors (BEICFs) (i.e., shortcomings caused by data collection).

16. In their effort to adapt to the reality of an explicit capital charge for operational risk under the New Basel Capital Accord, banks have traditionally accorded more attention to the optimal specification of tail dependence and the reliable calculation of risk estimates. Most research on operational risk in the recent past has focused either on the quality of quantitative measurement methods of operational risk exposure (Makarov, 2006; Degen et al., 2006; Mignola and Ugoccioni, 2006 and 2005; Nešlehová et al., 2006; Grody et al., 2005; de Fontnouvelle et al., 2004; Moscadelli, 2004; Alexander, 2003; Coleman and Cruz, 1999; Cruz et al., 1998) or on theoretical models of economic incentives for the management and insurance of operational risk (Leippold and Vanini, 2003; Crouhy et al., 2004; Banerjee and Banipal, 2005). Banking regulators are equally concerned with an evenhanded implementation of operational risk measurement standards within the banking system in accordance with the precepts of the supervisory review process. However, little attention has been devoted to the modeling constraints and statistical issues of operational risk reporting and measurement, which threaten to undermine a coherent and consistent regulatory framework (Dutta and Perry, 2006; Currie, 2004 and 2005).

A. Shortcomings of Quantitative Estimation Methodologies for LDA

17. Although significant progress has been made in the quantification of operational risk, ongoing supervisory review and several industry studies, such as the recent publication by the Basel Committee (2006a) on the Observed Range of Practice in Key Elements of Advanced Measurement Approaches (AMA), flag significant challenges in the way banks derive risk estimates under the provisions of the New Basel Capital Accord. Quantitative risk measurement almost always involves considerable parameter uncertainty, arising from the application of estimation methods at high levels of statistical significance amid poor data availability and few historical benchmarks to go by. Operational risk is no exception.

The effect of loss timing

18. All risk measures of extremes are inherently prone to yield unstable results, mainly because point estimates at high percentile levels hinge on only a small number of observations, far removed from the average projection. Therefore, a close examination of how the magnitude and the timing of losses qualify the classification and selection of a few extremes is crucial to reliable quantitative analysis. Loss timing matters when the relation between the average and the maximum loss severity of operational risk events exhibits significant cyclical variation or erratic structural change over prolonged periods of time. If loss timing is treated indiscriminately, periodic shifts of EL coupled with changes of periodic loss frequency would encroach upon a consistent definition of what constitutes an extreme observation and cause estimation bias of UL.

19. Extreme outcomes from historical loss data can be selected either by absolute measure, if loss severity exceeds a certain time-invariant threshold value at any point in time, or by relative measure, if loss severity represents the maximum exposure within a certain time period. An absolute measure does not discriminate against changes in EL due to the time-varying economic impact of loss severity. In contrast, a relative measure recognizes that operational risk events whose absolute loss severity is less extreme by historical standards could in fact have a greater adverse impact on the performance of a bank than “larger extremes” when some exposure factor of operational risk, like reported gross income, temporarily falls below some long-run average. A relative selection of extremes identifies a certain number of periodic maxima in non-overlapping time intervals. These time intervals should be large enough to ensure that observed extremes are independent of each other but small enough so that transient extremes are not overwhelmed by a cluster of larger extremes or gradually declining EL.22
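The two selection rules can be contrasted in a few lines of code: an absolute measure keeps every loss above a fixed, time-invariant threshold, while a relative measure keeps the maximum loss in each non-overlapping block (here, roughly quarterly). The simulated loss history, the threshold quantile, and the block length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical daily loss severities over ten years (heavy-tailed, illustrative only)
losses = rng.pareto(a=2.5, size=10 * 250) * 1_000

# Absolute measure: exceedances over a fixed, time-invariant threshold
threshold = np.quantile(losses, 0.95)
absolute_extremes = losses[losses > threshold]

# Relative measure: block maxima over non-overlapping quarterly blocks (~62 trading days)
block = 62
n_blocks = losses.size // block
relative_extremes = losses[: n_blocks * block].reshape(n_blocks, block).max(axis=1)

print(len(absolute_extremes), "exceedances vs.", len(relative_extremes), "block maxima")
```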

20. In addition, the consideration of relative extremes gains in importance when the lower statistical power associated with risk estimates implies a greater potential of loss timing to affect the normative definition of what constitutes an extreme operational risk event—especially when the sample sizes of loss data are small. The quantitative criteria of AMA require risk estimates of UL to be calculated at the 99.9th percentile level, which obviously increases the chances of “smaller extremes” escaping an absolute measure based on a time-invariant loss threshold. At the same time, depending on the sample size, very low statistical power leaves very few non-overlapping intervals of periodic maxima (see above) for the identification of relative extremes.

21. Overall, the concept of a flexible measure of extremes over time also raises the general question of whether individual operational risk losses should be scaled in order to account for intermittent variations in the relative economic significance of operational risk for different banks or types of banking activity. In this regard, losses could be expressed as amounts relative to some average exposure over a specified time period or scaled by some fundamental data.23 If maxima occurred with some degree of regularity and similar loss severity relative to EL and stable fundamental exposure, which empirical evidence negates, absolute selection criteria would yield the most reliable designation of extremes, as loss timing would not influence the decision of sufficient loss severity.

EVT and GHD—the most common approaches for LDA revisited

22. The high sensitivity of UL to higher order effects caused by the asymptotic tail convergence of the empirical loss distribution complicates risk estimation when the level of statistical confidence extends to areas outside the historical loss experience. Given the apparent shortcomings of conventional VaR in modeling fat-tailed distributions under LDA in compliance with the quantitative AMA standards, the development of internal risk measurement models has led to an industry consensus on the application of generalized parametric distributions, such as the g-and-h distribution (GHD) or various limit distributions (the generalized Pareto distribution (GPD) and the generalized extreme value (GEV) distribution) under extreme value theory (EVT)24 (see Appendix). Both EVT and GHD are appealing statistical concepts because they deliver closed-form solutions for “out-of-sample” estimates at very high confidence levels without imposing additional modeling restrictions, provided certain assumptions about the underlying loss data hold. They also specify residual risk through a generalized parametric estimation of order statistics—which makes them particularly useful for studying the tail behavior of heavily skewed loss data.

23. EVT represents an effective method to specify the limit law of extreme operational risk losses at high percentile levels over a given time horizon when the lack of sufficient empirical loss data renders back-testing impossible and consigns the specification of higher moments to simple parametric methods. The GPD of EVT approximates the GEV close to the endpoint of the variable of interest, where only a few or no observations are available (Vandewalle et al., 2004).25 The popular Peak-over-Threshold (POT) estimation method for GPD prescribes upper tail convergence of a locally estimated probability function for exceedances beyond a selected threshold and re-parameterizes the first two raw moments to fit the entire empirical distribution (while the original tail index parameter is kept unchanged).26 In contrast, GHD represents an alternative parametric model to estimate the residual risk of extreme losses based on a strictly monotonically increasing transformation of a standard normal variable.27 The g-and-h family of distributions was first introduced by Tukey (1977) and can approximate probabilistically the shapes of a wide variety of different data and distributions (including GEV and GPD) by the choice of appropriate parameter values of skewness and kurtosis as constants or real-valued (polynomial) functions (Martinez and Iglewicz, 1984).
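The following sketch illustrates both approaches on simulated data: a GPD is fitted to exceedances over an assumed threshold (the POT method) and the standard EVT quantile formula recovers a 99.9th percentile estimate, while g-and-h quantiles are generated by Tukey's monotone transform of a standard normal variable. The loss sample, threshold choice, and g-and-h parameters are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import genpareto, norm

rng = np.random.default_rng(seed=3)

# --- POT/GPD: fit the tail of a hypothetical loss sample beyond a chosen threshold ---
losses = rng.lognormal(mean=10.0, sigma=1.5, size=5_000)   # illustrative loss data
u = np.quantile(losses, 0.90)                              # threshold choice (assumed)
excess = losses[losses > u] - u

xi, _, beta = genpareto.fit(excess, floc=0)                # shape (xi) and scale (beta)

# POT quantile estimator at the 99.9th percentile (standard EVT formula)
q = 0.999
n, n_u = losses.size, excess.size
var_q = u + beta / xi * (((1 - q) * n / n_u) ** (-xi) - 1)
print(f"GPD shape={xi:.2f}, scale={beta:,.0f}, VaR(99.9%)={var_q:,.0f}")

# --- g-and-h: quantiles as a monotone transform of a standard normal variable ---
g, h, A, B = 2.0, 0.2, 0.0, 1.0        # assumed skewness/kurtosis/location/scale
z = norm.ppf(np.linspace(0.001, 0.999, 999))
gh_quantiles = A + B * (np.exp(g * z) - 1) / g * np.exp(h * z**2 / 2)
print(f"g-and-h 99.9% quantile (unit scale): {gh_quantiles[-1]:,.2f}")
```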

24. The estimation of UL beyond verifiable historical prediction entails model risk that varies with the parameter sensitivity to the identification of extreme observations and the speed of asymptotic tail decay. Amid the notorious scarcity of actual loss data on extreme operational risk events, however, analytical specifications of asymptotic tail behavior serve only as a rough guide to potential model risk within a restricted empirical spectrum of available loss profiles. Despite the merits of assessing competing quantitative approaches under different estimation methods and percentile ranges, even the extensive Loss Data Collection Exercise (LDCE) data gathered by U.S. banking regulators prove insufficient to substantiate the comparability of point estimates across different loss distributions at very high percentile levels.

25. In general, the specification of residual risk under EVT is prone to suffer from greater parameter uncertainty than under GHD, whose higher moments are not directly affected by the classification of extremes (i.e., the threshold choice) and possible contamination from the timing of losses (see above). The optimization of the threshold choice for GPD is contingent on the contemporaneous effect of the estimation method and the desired level of statistical confidence. While point estimates at percentiles below a designated loss threshold are more reliable across different estimation methods and over different time horizons, they understate residual risk. Conversely, higher statistical confidence incurs higher parameter uncertainty by either (i) removing the desired percentile level of point estimates farther from a pre-specified loss threshold (which increases the chances of out-of-sample estimation) or (ii) raising the loss threshold to a higher quantile (which limits the number of “eligible” extremes for the estimation of the asymptotic tail shape). Thus, exceedance functions of conditional mean excess (such as GPD) under EVT warrant a more careful assessment of estimation risk from different loss profiles and estimation methods at variable levels of statistical confidence (Embrechts, 2000).
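The threshold trade-off described above can be made visible by re-fitting the GPD over a grid of candidate thresholds and tracking how the estimated shape parameter and the implied 99.9th percentile move; a region where both remain stable suggests a defensible threshold choice. The sketch below repeats the illustrative loss sample and POT quantile formula from the previous example; it is a diagnostic device, not a prescribed calibration procedure.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(seed=3)
losses = rng.lognormal(mean=10.0, sigma=1.5, size=5_000)   # same illustrative sample

# Re-fit the GPD over a grid of candidate thresholds and track the 99.9% quantile
for p in (0.85, 0.90, 0.95, 0.975):
    u = np.quantile(losses, p)
    excess = losses[losses > u] - u
    xi, _, beta = genpareto.fit(excess, floc=0)
    var_q = u + beta / xi * (((1 - 0.999) * losses.size / excess.size) ** (-xi) - 1)
    print(f"threshold at {p:.1%}: n_u={excess.size:4d}, xi={xi:+.2f}, VaR(99.9%)={var_q:,.0f}")
```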

26. Recent studies indicate that EVT might not be the ultimate panacea of operational risk measurement from a comparative point of view. In their effort to derive a consistent measure of operational risk across several U.S. banks, Dutta and Perry (2006) find that GPD tends to overestimate UL in small samples, calling into question its adequacy as a general benchmark model.28 Their results concur with Mignola and Ugoccioni (2005), who also show that the rate of upper tail convergence to empirical quantiles can be poor, even for reasonably large samples.29

27. Nonetheless, in a recent simulation study of generic operational risk based on the aggregate statistics of the operational risk exposure of U.S. banks, both GPD and GHD generate reliable and realistic AMA-compliant risk estimates of UL (Jobst, 2007a). Degen et al. (2006) also caution against the unreserved use of alternative modeling by means of GHD, whose calibration entails considerable parameter risk arising from the quantile-based estimation of higher moments. The quality of the fitted GHD hinges on the specification of the selected number of percentiles and their spacing in the upper tail (contingent on $\log_2 N$ of sample size $N$ and the correlation between the order statistics of extreme observations and their corresponding quantile values). Although GHD (and its power law variant)30 outperforms both GEV and GPD in terms of goodness and consistency of upper tail fit, at a low average deviation of less than 25 percent, it underestimates actual losses in all but the most extreme quantiles of 99.95 percent and higher, when GPD estimates overstate the excess elongation of asymptotic tail decay,31 suggesting a symbiotic relation between both methods contingent on the percentile level and the incidence of extreme events.

Operational risk as a dynamic process and the role of qualitative overlays

28. Considerable parameter uncertainty and estimation risk of quantitative models arises in situations when the historical loss profile is a poor predictor of future exposure. LDA is static and does not capture the incidence of extremes as a dynamic process. Fluctuations of operational risk over time might defy steady state approximation based on the central projection from historical exposure. Similar to project management, where the critical path changes in response to management action, the pattern of future losses—in particular extreme losses—might diverge from historical priors. Thus, the possibility of a dynamic transmission process of operational risk exposure curtails the validity of LDA (and related concepts) and necessitates a comparative assessment of the time-varying impact of different loss profiles under different measurement approaches. After all, EVT and GHD are only two of several concepts to measure operational risk.

29. The innate elusiveness of certain sources of operational risk imposes practical limitations on LDA measurability—even if operational risk exposure is examined at every level and in every nook and cranny of a bank. Since extreme losses result from one-off risk events that elude purely quantitative measurement models, qualitative self-assessment can help identify the possibility and the severity of extreme operational risk events in areas where empirical observations are hard to come by—but only in general ways. This disqualifies existing measurement approaches that ascertain the impact of operational risk events on banking activity based on historical reference without paying heed to the causality of operational risk events and the sensitivity of their financial impact across banks and over time. That said, subjective judgments are, in turn, prone to historical bias and rely on rough approximation for lack of precise estimates of probability and loss severity. The prominence of qualitative overlays, however, needs to be carefully balanced with a considerable degree of judgment and a mindful interpretation of historical precedent.32

30. Clearly, structural models based on macroeconomic factors and key risk indicators (KRIs), augmented by risk and control self-assessments (RCSAs), would help inform a better forecast of future losses from operational risk and foster a more accurate allocation of regulatory capital. In predictive factor models, macroeconomic variables can help estimate different kinds of operational risk, such as internal and external fraud, which might be more likely at times of high unemployment or organizational restructuring. Nonetheless, exogenous shocks to banking activity, such as natural disasters, continue to escape quantification and might be best addressed by ongoing monitoring of threats and qualitative assessments of the scale and scope of extreme scenarios associated with high-impact operational risk events.
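As a stylized example of such a predictive factor model, the sketch below regresses simulated quarterly fraud-event counts on an unemployment rate and a restructuring indicator with a Poisson GLM; all data, covariates, and coefficient values are hypothetical stand-ins for the KRIs discussed above, not an estimate drawn from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=5)

# Simulated quarterly data: fraud-event counts driven by assumed macro factors
n = 60
unemployment = rng.normal(6.0, 1.5, size=n)     # percent, illustrative
restructuring = rng.integers(0, 2, size=n)      # 1 in quarters with reorganizations
true_rate = np.exp(0.2 + 0.15 * unemployment + 0.5 * restructuring)
fraud_events = rng.poisson(true_rate)

# Poisson regression of event frequency on the candidate key risk indicators
X = sm.add_constant(np.column_stack([unemployment, restructuring]))
model = sm.GLM(fraud_events, X, family=sm.families.Poisson()).fit()
print(model.params)   # fitted sensitivities of loss frequency to the KRIs
```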

B. Shortcomings of LDA Caused by Data Collection: ORM Systems and Data Characteristics

31. The comparative analysis of operational risk exposure reveals startling insights about the shortcomings of LDA in the presence of diverse loss data whose quantitative implications have thus far been frequently ignored by conventional measurement approaches and regulatory incentives. A wide range of available (quantitative and qualitative) measurement methods and different levels of sophistication of ORM induce heterogeneous risk estimates for similar exposures, which debilitate the reliable and consistent implementation of regulatory standards subject to a coherent supervisory review process.

32. While the current regulatory framework provides some degree of standardization of different banking activities and types of operational risk events, the efficacy of risk estimates still varies largely with the characteristics of internal loss data, which are influenced by (i) the diverse scale, scope, and complexity of different banking activities that escape uniform accountability, and (ii) idiosyncratic policies and procedures of ORM systems to authenticate, identify, monitor, report and control all aspects of operational risk (business environment and internal control factors (BEICFs)). In particular, the different exposures associated with different sources of operational risk, the diversity of banks, which differ in the size and sophistication of their activities (“exogenous variation”), dissimilar policies and procedures to identify, process and monitor operational risk events as part of the ORM process33 (“endogenous variation”), as well as considerable diversity in loss data collection (subject to different loss thresholds and interpretations of what constitutes a material operational risk event) conspire to defy a consistent measure and obscure the comparability of cross-sectional risk estimation. These methodological difficulties are often magnified by (i) varying loss frequency and sample sizes of historical loss data, as well as (ii) data pooling as a remedy for notorious data limitations (see next section below), which introduce further comparative bias into risk estimates. O’Dell (2005) reports that operational risk estimates submitted by U.S. banks as part of the LDCE in 2004 showed little convergence to common units of measure and data collection requirements, owing to the different granularity of risk quantification.34 Recent efforts by the biggest U.S. financial institutions35 to seek simplified capital rules not only underscore the importance of consistent regulatory standards, but also reveal that current implementation guidelines are still found wanting.

Data sources and pooling of internal and external loss data

33. The historical loss experience serves as a prime indicator of the amount of reserves banks need to hold to cover the financial impact of operational risk events. Since meaningful results from quantitative self-assessment of operational risk exposure—especially at very high levels of statistical confidence—require a large enough sample of observations under AMA soundness standards, certain BLs and/or ETs with insufficient empirical loss data might confine operational risk estimation to a certain set of “well-populated” units of measure. However, the paucity of actual loss data, the heterogeneous recording of operational risk events, and the intricate empirical characteristics of operational risk complicate consistent and reliable measurement. Even at more granular units of measure, most banks lack the critical mass of loss data to effectively analyze, calculate, and report capital charges for operational risk.

34. Banks require high-quality loss event information to enhance the predictive capabilities of their quantitative operational risk models in response to new regulatory guidelines under the New Basel Capital Accord. In order to address prevailing empirical constraints on a reliable measurement of operational risk exposure, several private sector initiatives of banks and other financial institutions have investigated the merits of data collection from internal and external sources (“consortium data” and external data of publicly reported events). Some of the most prominent examples of proprietary external data sources of operational risk loss events are the Global Operational Loss Database (GOLD) by the British Bankers’ Association (BBA), the Operational Risk Insurance Consortium (ORIC) by the Association of British Insurers (ABI), OpBase by Aon Corporation, and the operational risk database maintained by the Operational Riskdata Exchange Association (ORX). In several instances, financial services supervisors themselves have facilitated greater transparency about the historical loss experience of banks, such as the Loss Data Collection Exercise (LDCE) of U.S. commercial banks.

35. AMA criteria permit banks to use external data to supplement insufficient internal historical records, but the indiscriminate consolidation of loss data from different sources in proprietary databases or data consortia is fraught with difficulties. Although external loss data in self-assessment approaches help banks overcome the scarcity of internal loss data, the pooling of loss data entails potential pitfalls from survivorship bias, the commingling of different sources of risk, and the mean convergence of aggregate historical losses. While internal data (if available) serve as a valid empirical basis for the quantification of individual bank exposure, the analysis of system-wide pooled data could deliver misleading results, mainly because it aggregates individual loss profiles into a composite loss exposure, which impedes risk estimates for very granular units of measure.

36. The natural heterogeneity of banking activity due to different organizational structures, types of activities, and risk management capabilities belies the efficacy of aggregation. As the historical loss experience is typically germane to one bank and might not be applicable to another, pooled data hides the cross-sectional diversity of individual risk profiles and frequently obscures estimates of actual risk exposure. In particular, divergent definitions of operational risk and control mechanisms, variable collection methods of loss data, and inconsistent data availability for different BLs and/or ETs, contingent on the scale and scope of individual banks’ main business activities, are critical impediments to data pooling.

37. The use of pooled loss data without suitable adjustment of external data for key risk indicators and internal control factors is questionable; external loss data should be incorporated in a fashion that is statistically meaningful. Cross-sectional bias would be mitigated only if the different internal control systems of various-sized banks were taken into account (Matz, 2005) or if loss data exhibited some regularity across institutions so that a viable benchmark model could be developed (Dutta and Perry, 2006). Similar to the potential aggregation bias caused by data pooling, the blurred distinction between operational risk and other sources of risk (such as market and credit risk) hampers accurate empirical loss specification. Contingencies of data collection arise from the commingling of risk types in the process of loss identification, which might understate actual operational risk exposure.

Effects of loss frequency

38. Reliable quantitative risk analysis hinges on the comparability of loss profiles across different banking activities and the capacity of ORM systems to identify, report and monitor operational risk exposures in a consistent fashion. However, different ORM processes and diverse procedures of loss data collection and reporting affect the availability and the diversity of loss data. The heterogeneity of loss frequency within and across banks as well as over time is probably the single most important but often overlooked impediment to the dependable quantification of operational risk for comparative purposes. Variations of reported event frequency can indirectly affect the volatility of losses and the estimation of EL and UL.

The effect of loss frequency on expected loss (EL)

39. The recent clarification on The Treatment of Expected Losses by Banks Using the AMA Under the Basel II Framework (Basel Committee, 2005b) by the Operational Risk Subgroup (AIGOR) of the Basel Committee Accord Implementation Group acknowledges in particular the possibility of biased estimation of EL depending on the manner in which operational risk events are recorded over time. Loss frequency directly affects EL. A higher (lower) loss frequency decreases (increases) EL automatically in the trivial case of unchanged total exposure. The consideration of loss variation is essential to a non-trivial identification of distortions to EL caused by inconsistent loss frequency. A bank that reports a lower (higher) EL due to a higher (lower) incidence of operational risk events should not be treated the same as another bank whose operational risk losses exhibit higher (lower) variation at similar loss exposure (over the same time period). In this case, banks with more granular operational risk events would benefit from lower EL if loss volatility decreases disproportionately with each additional operational risk event. The same intuition applies to the more realistic case of different total exposure between banks. Higher (lower) loss frequency would decrease (increase) EL only if variation declines (rises) with higher (lower) total exposure—a contestable assumption at best. Hence, the adequate estimation of operational risk exposure entails a relative rather than an absolute concept of consistent frequency over one or multiple time periods.

40. Since the capital charge for operational risk under the new regulatory framework is based on the sum of operational risk estimates for different units of measure, inconsistent loss frequencies might substantially distort a true representation of EL within and across reporting banks. A “systemically inconsistent frequency measure” of operational risk for the same unit of measure (defined by either BL, ET or both) of different banks arises if lower (higher) EL and a higher (lower) total loss amount is associated with lower (higher) marginal loss volatility caused by a larger (smaller) number of observations. The same concept of inconsistent frequency also pertains to different units of measure within the same bank. The case of “idiosyncratically inconsistent frequency” is admittedly harder to argue, given the inherently heterogeneous nature of operational risk exposure of different banking activities. If the loss frequency for one BL or ET of a single bank changes considerably from one time period to another, it might also constitute a “time inconsistent frequency measure”, which amplifies idiosyncratic or systemically inconsistent loss frequency of two or more different BLs or ETs within a single bank or a single BL or ET across different banks respectively.

41. Regulatory guidance on operational risk measurement would need to ensure that risk estimates based on different empirical loss frequency preserve the marginal loss variation in support of a time consistent measurement of EL. A simple detection mechanism for possible estimation bias from idiosyncratically inconsistent loss frequency across two different units of measure in one and the same bank would compare the pairwise coefficient of variation cv = σ/μ and the mean μ (EL) of total operational risk exposure TL = EL + UL = N × μ (i.e., total losses), based on the number N of losses recorded in two different BLs or ETs over time period τ. While

$$\mu_{BL_1,\tau} > \mu_{BL_2,\tau} \;\big|\; \left(cv_{BL_1,\tau} > cv_{BL_2,\tau}\right) \quad \text{if} \quad N_{BL_1,\tau} < N_{BL_2,\tau} \;\wedge\; N_{BL_1,\tau}\,\mu_{BL_1,\tau} \le N_{BL_2,\tau}\,\mu_{BL_2,\tau}, \qquad (1)$$

indicates “insufficient” observations for BL1 relative to BL2,

$$\mu_{BL_1,\tau} < \mu_{BL_2,\tau} \;\big|\; \left(cv_{BL_1,\tau} < cv_{BL_2,\tau}\right) \quad \text{if} \quad N_{BL_1,\tau} > N_{BL_2,\tau} \;\wedge\; N_{BL_1,\tau}\,\mu_{BL_1,\tau} \ge N_{BL_2,\tau}\,\mu_{BL_2,\tau}, \qquad (2)$$

reverses the situation, flagging excessively granular observations of losses in the first BL (BL1) relative to the second (BL2), as loss volatility decreases (increases) with a higher (lower) loss frequency and greater (smaller) TL. In (1) the unqualified treatment of loss frequency NBL1,τ would result in a disproportionately higher EL for BL1, whereas in (2) the bank could reduce EL in BL1 to a level below a fair projection of average losses. The four remaining permutations of loss variation and EL indicate “frequency consistent” reporting across the two BLs under consideration. Both loss distributions would result in a different capital charge under a consistent measure of loss frequency. Extending both equations above to all BLs (BL1-8) or ETs (ET1-7) defined by the Basel Committee (2006b, 2005a and 2004) to

$$\mu_{\{BL_x,ET_x\},\tau} > m\!\left(\mu_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\right) \;\big|\; \left(cv_{\{BL_x,ET_x\},\tau} > m\!\left(cv_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\right)\right) \quad \text{if} \quad N_{\{BL_x,ET_x\},\tau} < m\!\left(N_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\right) \;\wedge\; N_{\{BL_x,ET_x\},\tau}\,\mu_{\{BL_x,ET_x\},\tau} \le m\!\left(N_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\,\mu_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\right) \qquad (3)$$

and

$$\mu_{\{BL_x,ET_x\},\tau} < m\!\left(\mu_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\right) \;\big|\; \left(cv_{\{BL_x,ET_x\},\tau} < m\!\left(cv_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\right)\right) \quad \text{if} \quad N_{\{BL_x,ET_x\},\tau} > m\!\left(N_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\right) \;\wedge\; N_{\{BL_x,ET_x\},\tau}\,\mu_{\{BL_x,ET_x\},\tau} \ge m\!\left(N_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\,\mu_{\{BL_{1\text{-}8},ET_{1\text{-}7}\},\tau}\right) \qquad (4)$$

identifies idiosyncratically inconsistent loss frequency of any individual BL (BLx) or ET (ETx) of the same bank based on the median (m) values of variation, mean and frequency of losses across all BLs or ETs. The same detection mechanism applies to cases of systemically inconsistent loss frequency for the same BL or ET across different banks, or time inconsistent loss frequency over multiple time periods.
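For illustration, the following sketch (in Python) applies the median-based detection mechanism in equations (3) and (4), including the total-loss condition as reconstructed above, to hypothetical per-unit loss data; the function name, input layout, and simulated figures are assumptions of this example rather than part of the regulatory framework.

```python
# Illustrative sketch: flagging idiosyncratically inconsistent loss frequency
# across units of measure via the median comparisons in equations (3) and (4).
import numpy as np

def frequency_consistency_flags(losses_by_unit):
    """losses_by_unit: dict mapping a BL/ET label to a 1-D array of individual
    loss amounts recorded over the same period tau."""
    stats = {}
    for unit, losses in losses_by_unit.items():
        x = np.asarray(losses, dtype=float)
        mu = x.mean()                              # EL proxy (mean loss severity)
        stats[unit] = {"N": x.size, "mu": mu,
                       "cv": x.std(ddof=1) / mu,   # coefficient of variation
                       "TL": x.sum()}              # total losses N * mu

    m = {key: np.median([s[key] for s in stats.values()])
         for key in ("N", "mu", "cv", "TL")}       # cross-unit medians

    flags = {}
    for unit, s in stats.items():
        if s["mu"] > m["mu"] and s["cv"] > m["cv"] and s["N"] < m["N"] and s["TL"] <= m["TL"]:
            flags[unit] = "insufficient observations (eq. 3)"
        elif s["mu"] < m["mu"] and s["cv"] < m["cv"] and s["N"] > m["N"] and s["TL"] >= m["TL"]:
            flags[unit] = "excessively granular observations (eq. 4)"
        else:
            flags[unit] = "frequency consistent"
    return flags

# Hypothetical loss data for three business lines over the same period
rng = np.random.default_rng(0)
print(frequency_consistency_flags({
    "BL1": rng.lognormal(3.0, 1.2, size=40),
    "BL2": rng.lognormal(2.0, 0.8, size=400),
    "BL3": rng.lognormal(2.5, 1.0, size=150),
}))
```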

The effect of loss frequency on unexpected loss (UL)

42. The reported frequency of operational risk events influences not only EL but also the estimation of UL. Irrespective of the stochastic process of extremes, higher (lower) loss frequency attributes lower (higher) probability to extreme events at the margin and increases the estimation risk of UL if loss frequency is inconsistent (see above), i.e., if higher (lower) loss frequency coincides with lower (higher) loss variation. Given the high sensitivity of UL to changes in the probability of extreme events, an inconsistent frequency measure could interfere with the reliable estimation of UL from both a systemic and an idiosyncratic point of view at one or multiple periods of time.

43. Banks could also employ loss frequency as a vehicle to diffuse the impact of extreme loss severity across different BLs and/or ETs (“organizational diversification”), if they define risk ownership and units of measure for risk estimates in a way that relegates the incidence of extreme events to even higher percentiles. Similar to the implicit truncation effect of a minimum loss threshold on the availability of loss data, loss fragmentation might arise if banks choose to either split losses between various BLs affected by the same operational risk event or spread operational risk losses among other sources of risk, such as market or credit risk.

44. Since the new capital rules prescribe the estimation of UL at a level of granularity that implies a loss frequency beyond actual data availability even for the largest banks, the best interest of banks lies in greater sample sizes, especially in cases of sparse internal loss data and less granular units of measure. Banks would naturally prefer higher (and inconsistent) loss frequency to substantiate regulatory capital for very predictable EL while reducing economic capital for UL. The larger the benefit of a marginal reduction of loss volatility from higher loss frequency, the greater the incentive of banks to arbitrage existing regulatory provisions and temper the probability of extreme events by means of higher reporting frequency and granularity of risk estimates.

45. The elusive nature of operational risk belies the general assumption of uniform frequency. While most operational risk losses comply with a static concept of loss frequency, different types of operational risk cause distinct stochastic properties of loss events, which also influence the relative incidence of extreme observations. One expedient solution to this problem is the aggregation of loss events in each unit of measure over a designated time period (weeks, months, quarters) in order to ensure the consistent econometric specification of operational risk exposure with different underlying loss frequency. Loss aggregation helps curb estimation bias from distinctive patterns of loss frequencies associated with different loss severity and units of measure of operational risk within banks. An aggregate loss measure inhibits incentives to suppress EL through many, very small losses and increases the relative incidence of extreme events without distorting the loss severity of UL. Two different series of observations with either high frequency and low average loss severity or low frequency and high average loss severity would both converge to the same aggregate expected operational risk exposure over an infinite time period (assuming the same total loss amount).
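As an illustration of such loss aggregation, the sketch below (in Python, using pandas) aggregates individual loss events into quarterly totals per unit of measure and reports the mean and median event count per period; the column names and the quarterly frequency are assumptions of this example.

```python
# Minimal sketch: aggregating individual operational risk losses into quarterly
# totals per unit of measure to impose a common reporting frequency.
import pandas as pd

def aggregate_losses(events: pd.DataFrame, freq: str = "Q"):
    """events: DataFrame with columns 'date' (datetime), 'unit' (BL/ET label),
    and 'loss' (loss amount)."""
    events = events.assign(period=events["date"].dt.to_period(freq))
    agg = (events.groupby(["unit", "period"])["loss"]
                 .agg(total="sum", count="size"))
    # Large gaps between mean and median event counts per period hint at
    # time-varying loss frequency (see paragraph 46).
    frequency_profile = agg["count"].groupby(level="unit").agg(["mean", "median"])
    return agg, frequency_profile
```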

46. Loss aggregation also reveals time-varying loss frequency based on the relation between the mean and the median number of events in each time period of aggregation and the extent to which large fluctuations warrant adjustments to the assumption of constant loss frequency. Data limitations for robust risk estimation notwithstanding, the aggregation of operational risk losses delivers important insights about how measures of loss frequency are influenced by the type, loss severity, timing of operational risk events as well as the degree of granularity and specificity at which operational risk events are reported.

Regulatory Inconsistencies of the New Basel Capital Accord

The standardized and advanced measurement approaches for operational risk under the new regulatory framework of the New Basel Capital Accord contain several shortcomings concerning analytical rigor and the consistent implementation of the following provisions: (i) capital adjustment of operational risk estimates, (ii) home-host recognition, and (iii) volume-based measures of operational risk (Jobst, 2007b).

Capital Adjustment of Operational Risk Estimates Under AMA

The current quantitative criteria of the AMA soundness standards allow banks to adjust the regulatory capital charge for UL by up to 20 percent of their operational risk exposure (“capital adjustment”) due to (i) diversification benefits from internally determined loss correlations36 between individual operational risk estimates (“units of measure”) and (ii) the risk mitigating impact of operational risk insurance. However, such capital adjustment is meaningful only if dependencies are measured consistently and reliably at the required level of statistical confidence and can be assessed without idiosyncratic bias caused by the limited availability of loss data, heterogeneous loss reporting, and cross-sectional variation of the incidence and magnitude of extreme operational risk losses of the same BL or ET of different banks or across different BLs or ETs within the same bank—especially for a fair comparative assessment of adequate capital adjustment. Furthermore, diversification benefits negate the additive nature of operational risk and challenge the long-standing assumption of independent extremes. Even if the independence condition is relaxed, the estimation of the joint asymptotic tail behavior of extreme marginals at high percentiles is not a straightforward exercise and requires a significant departure from conventional methods.

The traditional Pearson’s correlation coefficient detects only linear dependence between two variables whose fixed marginals are assumed to be distributed normally, indicating an empirical relation (or the lack thereof) based on more central (and more frequent) observations at lower (and not extreme) quantiles. An expedient non-parametric method of investigating the bivariate empirical relation between two i.i.d. random vectors is to ascertain the incidence of shared cases of cross-classified extremes via a refined quantile-based Chi-square statistic of independence (Coles et al., 1999 and Coles, 2001). This measure of joint asymptotic tail dependence of marginal extreme value distributions underlies several methods (Stephenson, 2002; Poon et al., 2003) to model multivariate extreme value distribution (EVD) functions.
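A minimal empirical version of such a quantile-based dependence measure, in the spirit of Coles et al. (1999), can be sketched as follows (in Python); the rank transform and the choice of quantile u are assumptions of this illustration, not the estimator used in any particular AMA model.

```python
# Illustrative sketch: empirical joint tail dependence chi(u) = P(V > u | U > u)
# between two loss series, computed on rank-transformed (pseudo-uniform) marginals.
import numpy as np
from scipy.stats import rankdata

def empirical_chi(x, y, u=0.95):
    n = len(x)
    ux = rankdata(x) / (n + 1.0)      # pseudo-uniform marginal of series x
    uy = rankdata(y) / (n + 1.0)      # pseudo-uniform marginal of series y
    joint = np.mean((ux > u) & (uy > u))
    marginal = np.mean(ux > u)
    return joint / marginal if marginal > 0 else np.nan

# Estimates of chi(u) that decay to zero as u -> 1 point to asymptotic independence;
# values bounded away from zero indicate joint tail dependence of extremes.
```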

However, the implications of these models have not been sufficiently tested with regard to their impact on quantitative assumptions and regulatory incentives underpinning the proposed capital rules for operational risk. The wide dispersion of the magnitude of capital adjustment of U.S. commercial banks in a recent survey (O’Dell, 2005) testifies to the significance of these considerations for sound regulatory standards.

Home-Host Recognition Under AMA

The concept of home-host recognition of operational risk estimates (Basel Committee, 2007 and 2004b) stipulates that banking organizations that calculate group-wide capital requirements under AMA for consolidated banking activities could use stand-alone AMA calculations for significant internationally active banking subsidiaries, whereas other subsidiaries are assigned a relative share of the group-wide AMA capital requirement. The flexibility of this “hybrid approach” extends the opportunity of regulatory arbitrage to significant internationally active subsidiaries that perform banking activities similar to the banking group but realize capital savings due to a favorable historical loss experience and/or a more flexible definition of units of measure.

Volume-Based Measures of Operational Risk in Standardized Approaches

If banks lack the necessary risk management tools or historical information for quantitative self-assessment under AMA, the New Basel Capital Accord stipulates standardized measurement approaches of operational risk, which assume a volume-based dependence of operational risk (see Table 2). However, relating operational risk exposure to business activity fails to acknowledge that banks specialize in a wide-ranging set of activities with a diverse nature of operational risk exposure and maintain very different measurement methods, risk-control procedures, and corporate governance standards, which affect both the incidence and the level of operational risk exposure. Moreover, the tacit risk-return tradeoff of volume dependence challenges empirical evidence of operational risk exposure. In the worst case, a highly profitable bank with ORM practices that are sound but not sophisticated enough to meet AMA soundness standards would need to satisfy higher fixed capital charges than a less profitable peer bank with weaker controls. A fixed capital charge should not be defined as a function of income but as a measure of the debilitating effect of operational risk on income generation, which serves as a scaling factor of loss severity. Thus, an adequate volume-based measure would need to be constructed in a way that supports an inverse relation between gross income and the capital charge for operational risk and upholds a lower marginal rate of increase of operational risk exposure as banks generate more income.

Table 2.

Hypothetical Loss Exposure of the Largest U.S. Commercial Banks on September 11, 2001.

[Table not reproduced in this version.]

Notes: Excluded is Fleet NA Bank (FleetBoston Financial Group), which merged with Bank of America NA in 2003. The “regulatory top holder” is listed in parentheses. Total assets, gross income, and Tier 1 capital represent the sum of assets, gross income, and core capital of individual banking institutions under each top holder, ignoring adjustments in consolidation at the holding company level. After the merger with Wachovia, First Union did not publish financial accounts for end-2001; fundamental information is taken from the 2000 financial statement.

Source: Federal Reserve Board (2002), Federal Reserve Statistical Release—Large Commercial Banks (http://www.federalreserve.gov/releases/lbr/).

IV. CONCLUSION

47. Regulatory efforts have favored the development of quantitative models of operational risk as a distinct discipline that appeals to the use of economic capital as a determinant of risk-based regulatory standards. However, the one-off nature of extreme operational risk events without historical precedent defies purely quantitative approaches and necessitates a qualitative overlay in many instances. Reliable operational risk measurement is afflicted by considerable challenges to (i) the accurate estimation of the asymptotic tail convergence of extreme losses, and (ii) the consistent definition and implementation of loss reporting across different areas of banking activity. We explained the shortcomings of existing LDA models and examined the structural and systemic effects of heterogeneous data reporting on loss characteristics, which influence the reliability of operational risk estimates for regulatory purposes.

48. We found that inherent parameter uncertainty of different risk models as well as cross-sectional variation of the timing and frequency of reported loss events can adversely affect the generation of consistent risk estimates. These results offer insights for enhanced market practice and a more effective implementation of capital rules and prudential standards for operational risk measurement.

49. Although standardized approaches under the New Basel Capital Accord recognize considerable variation of relative loss severity of operational risk events within and across banks, the economic logic of volume-based measures collapses in cases when banks incur small (large) operational risk losses in BLs where an aggregate volume-based measure would indicate high (low) operational risk exposure.

50. Aside from the diverse characteristics of different sources of operational risk, cross-sectional variation of loss profiles (due to variable loss frequency, different minimum thresholds for recognized operational risk losses, and loss fragmentation between various BLs37), and the idiosyncratic organization of risk ownership encumber the consistent application of AMA. Amid scarce historical loss data, stringent regulatory standards amplify the considerable model risk of quantitative approaches and parameter instability at out-of-sample percentile levels unless available samples of loss data are sufficiently large. Normative assumptions, such as gross income as a volume-based metric, eschew such measurement bias of operational risk in favor of greater reliability—but they do so at the expense of less discriminatory power and higher capital charges.

51. The high percentile level of AMA appears to have been deliberately chosen to encourage better monitoring, measuring and managing of operational risk in return for capital savings vis-à-vis simple volume-based capital charges. Evidence from U.S. commercial banks suggests significant benefits from AMA over a standardized measure of 15 percent of gross income, which would grossly overstate the economic impact of even the most extreme operational risk events.38 The top 15 U.S. banks would have lost barely five percent of gross income on average if they had experienced an operational risk event comparable to the physical damage to assets suffered by the Bank of New York in the wake of the September 11 terrorist attacks (see Table 2).39 Similar analysis of LDCE data from 1999 to 2004 suggests that the worst performing bank in terms of operational risk exposure would have needed to provide capital coverage just shy of one percent of annual gross income to cover UL over a five-year time horizon (Jobst, 2007a).

52. The general purpose of safeguarding banking system stability is not to raise capital requirements to a point where they encumber financial activities but to encourage operations within boundaries of common regulatory standards and sound market practices, which mutualize risk and limit externalities from individual bank failures. In this regard, effective regulation endorses policies that mitigate the loss impact on the quality and stability of earnings from the profitable execution of business activities, while ensuring adequate capitalization in order to enhance strategic decision-making and reduce the probability of bankruptcy. In the context of operational risk, however, the concept of capital adequacy appears incidental to the importance of corporate governance as well as the perpetual re-assessment of risk models, which qualify income generation as a gauge of banking soundness. In general, the role of capital can be appreciated from the consideration of gambler’s ruin. Too little capital puts banks at risk, while too much capital prevents banks from achieving the required rate of return on capital. Although higher capital increases the general survival rate of banks, it does little to avoid bank failure unless it is paired with ex ante risk management and control procedures that limit the chances of fatal operational risk events whose loss severity would cease banking activities altogether regardless of (economically sustainable) capitalization. Thus, monitoring risk aversion towards activities with non-current exposure plays a critical role in guiding the effectiveness of marginal income generation, especially when financial innovation and new risk management processes upset the established relation between risk burden and safe capital levels in the determination of overall performance. Measures to reduce risk (and systemic vulnerabilities) are only effective if banks do not respond to a perceived new-found safety by engaging in activities that might engender complacency and carry even higher (but poorly understood types of) operational risk exposure.40

53. Although many banks will not be subject to a more rigorous regulatory regime of ORM under the New Basel Capital Accord, rating agencies uphold these risk management practices in their external assessment of credit quality and operational soundness of banks, and, thus, motivate indirect regulatory compliance via the economic incentive of lower capital costs.

54. Given the elusive nature of operational risk and the absence of risk-return trade-off (unlike market and credit risk), it is incumbent on banking supervisors to institute regulatory incentives that acknowledge the effect of diverse loss profiles and data collection methods on sound internal risk measurement methods. Amid increasing sophistication of financial products and the diversity of financial institutions, any capital rules would need to be cast in such a way that banks may determine the approach most appropriate for their exposures and, crucially, so as to allow for further methodological developments aimed at the expedient resolution of challenges arising from the intricate causality of operational risk. In particular, such efforts would be geared towards exploring options for (and greater flexibility in the administration of) measures that strike a balance between prescriptive and principle-based guidelines, which better reflects the economic reality of operational risk and preserves the accuracy, relevance, and comprehensiveness of regulatory provisions.

REFERENCES

  • Alexander, Carol (ed.), (2003), Operational Risk: Regulation, Analysis and Management, Financial Times Prentice Hall, London.
  • Anonymous, (2007), “Capital Standards: Proposed Interagency Supervisory Guidance for Banks That Would Operate Under Proposed New Basel II Framework,” US Fed News, (February 28).
  • Balkema, August A., and Laurens de Haan, (1974), “Residual Life Time at Great Age,” Annals of Probability, Vol. 2, 792–804.
  • Banerjee, Suman, and Kulwinder Banipal, (2005), “Managing Operational Risk: Framework for Financial Institutions,” Working Paper, A.B. Freeman School of Business, Tulane University, (November).
  • Basel Committee on Banking Supervision, (2007), Principles for Home-Host Supervisory Cooperation and Allocation Mechanisms in the Context of Advanced Measurement Approaches (AMA)—Consultative Document, Bank for International Settlements, (February).
  • Basel Committee on Banking Supervision, (2006a), Observed Range of Practice in Key Elements of Advanced Measurement Approaches (AMA), BCBS Publications No. 131, Bank for International Settlements, (October), (http://www.bis.org/publ/bcbs131.htm).
  • Basel Committee on Banking Supervision, (2006b), Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework—Comprehensive Version, BCBS Publications No. 128, Bank for International Settlements, (June), (http://www.bis.org/publ/bcbs128.htm).
  • Basel Committee on Banking Supervision, (2005a), Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework, BCBS Publications No. 118, Bank for International Settlements, (November), (http://www.bis.org/publ/bcbs118.htm).
  • Basel Committee on Banking Supervision, (2005b), The Treatment of Expected Losses by Banks Using the AMA Under the Basel II Framework, Basel Committee Newsletter No. 7, Bank for International Settlements, (November).
  • Basel Committee on Banking Supervision, (2004a), International Convergence of Capital Measurement and Capital Standards: A Revised Framework, BCBS Publications No. 107, Bank for International Settlements, (June), (http://www.bis.org/publ/bcbs107.htm).
  • Basel Committee on Banking Supervision, (2004b), Principles for the Home-Host Recognition of AMA Operational Risk Capital, BCBS Publications No. 106, Bank for International Settlements, (January), (http://www.bis.org/publ/bcbs106.htm).
  • Basel Committee on Banking Supervision, (2003a), Operational Risk Transfer Across Financial Sectors, Joint Forum Paper, Bank for International Settlements, (August), (http://www.bis.org/publ/joint06.htm).
  • Basel Committee on Banking Supervision, (2003b), Sound Practices for the Management and Supervision of Operational Risk, BCBS Publications No. 96, Bank for International Settlements, (February), (http://www.bis.org/publ/bcbs96.htm).
  • Basel Committee on Banking Supervision, (2002), Sound Practices for the Management and Supervision of Operational Risk, BCBS Publications No. 91, Bank for International Settlements, (July), (http://www.bis.org/publ/bcbs91.htm).
  • Basel Committee on Banking Supervision, (2001a), Sound Practices for the Management and Supervision of Operational Risk, BCBS Publications No. 86, Bank for International Settlements, (December), (http://www.bis.org/publ/bcbs86.htm).
  • Basel Committee on Banking Supervision, (2001b), Working Paper on the Regulatory Treatment of Operational Risk, BCBS Publications No. 8, Bank for International Settlements, (September), (http://www.bis.org/publ/bcbs_wp8.pdf).
  • Basel Committee on Banking Supervision, (2001c), Consultative Document—Operational Risk (Supporting Document to the New Basel Capital Accord), BCBS Publications (Consultative Document) No. 7, Bank for International Settlements, (January), (http://www.bis.org/publ/bcbsca07.pdf).
  • Basel Committee on Banking Supervision, (2001d), Consultative Document—Operational Risk (Supporting Document to the New Basel Capital Accord), BCBS Publications (Consultative Document) No. 7, Bank for International Settlements, (January), (http://www.bis.org/publ/bcbsca07.pdf).
  • Basel Committee on Banking Supervision, (1999), A New Capital Adequacy Framework, BCBS Publications No. 50, Bank for International Settlements, (June), (http://www.bis.org/publ/bcbs50.htm).
  • Basel Committee on Banking Supervision, (1998), Operational Risk Management, BCBS Publications No. 42, Bank for International Settlements, (September), (http://www.bis.org/publ/bcbs42.htm).
  • Castillo, Enrique, and Ali S. Hadi, (1997), “Fitting the Generalized Pareto Distribution to Data,” Journal of the American Statistical Association, Vol. 92, No. 440, 1609–20.
  • Coleman, Rodney, and Marcelo Cruz, (1999), “Operational Risk Measurement and Pricing,” Derivatives Week, Vol. 8, No. 30, (July 26), 5–6.
  • Coles, Stuart G., (2001), An Introduction to Statistical Modeling of Extreme Values, Springer Verlag, London.
  • Coles, Stuart G., Heffernan, Janet, and Tawn, Jonathan A., (1999), “Dependence Measures for Extreme Value Analyses,” Extremes, Vol. 2, 339–65.
  • Crouhy, Michel, Galai, Dan, and Mark, Robert M., (2004), “Insuring versus Self-insuring Operational Risk: Viewpoints of Depositors and Shareholders,” Journal of Derivatives, Vol. 12, No. 2, (Winter), 515.
  • Cruz, Marcelo, Coleman, Rodney, and Salkin, Gerry, (1998), “Modeling and Measuring Operational Risk,” Journal of Risk, Vol. 1, No. 1, 63–72.
  • Currie, Carolyn V., (2005), “A Test of the Strategic Effect of Basel II Operational Risk Requirements on Banks,” School of Finance and Economics, Working Paper No. 143, (September), University of Technology, Sydney.
  • Currie, Carolyn V., (2004), “Basel II and Operational Risk—Overview of Key Concerns,” School of Finance and Economics, Working Paper No. 134, (March), University of Technology, Sydney.
  • Degen, Matthias, Embrechts, Paul, and Lambrigger, Dominik D., (2006), “The Quantitative Modeling of Operational Risk: Between g-and-h and EVT,” Working Paper, ETH Preprint, Zurich, (December 19).
  • Dekkers, Arnold L. M., Einmahl, John H. J., and de Haan, Laurens, (1989), “A Moment Estimator for the Index of an Extreme-Value Distribution,” Annals of Statistics, Vol. 17, 1833–55.
  • Drees, Holger, (1995), “Refined Pickands Estimators of the Extreme Value Index,” Annals of Statistics, Vol. 23, 2059–80.
  • Drees, Holger, de Haan, Laurens, and Resnick, Sidney, (1998), “How to Make a Hill Plot,” Discussion Paper, Tinbergen Institute, Erasmus University, Rotterdam.
  • Dutta, Kabir K., and Perry, Jason, (2006), “A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital,” Working Paper No. 06–13, Federal Reserve Bank of Boston, (July).
  • Embrechts, Paul, (2000), “Extreme Value Theory: Potential and Limitations as an Integrated Risk Management Tool,” Derivatives Use, Trading & Regulation, Vol. 6, 449–56.
  • Embrechts, Paul, Klüppelberg, Claudia, and Mikosch, Thomas, (1997), Modelling Extremal Events for Insurance and Finance, Springer-Verlag, Heidelberg.
  • Falk, Michael, Hüsler, Jürg, and Rolf-Dieter Reiss, (1994), Laws of Small Numbers: Extremes and Rare Events, DMV-Seminar, Birkhäuser, Basel.
  • Federal Reserve Board, (2006), Fourth Quantitative Impact Study 2006, Washington, D.C., (http://www.federalreserve.gov/boarddocs/bcreg/2006/20060224/).
  • Federal Reserve Board, (2004), Federal Reserve Statistical Release—Aggregate Reserves of Depository Institutions and the Monetary Base, Washington, D.C., (http://www.federalreserve.gov/releases/h3/20050120).
  • Fisher, Ronald Aylmer, and Tippett, Leonard Henry Caleb, (1928), “Limiting Forms of the Frequency Distribution of the Largest or Smallest Member of a Sample,” Proceedings of the Cambridge Philosophical Society, Vol. 24, 180–90.
  • de Fontnouvelle, Patrick, (2005), “The 2004 Loss Data Collection Exercise,” presentation at the Implementing an AMA for Operational Risk conference of the Federal Reserve Bank of Boston, (May 19), (http://www.bos.frb.org/bankinfo/conevent/oprisk2005/defontnouvelle.pdf).
  • de Fontnouvelle, Patrick, Rosengren, Eric S., and Jordan, John S., (2004), “Implications of Alternative Operational Risk Modeling Techniques,” SSRN Working Paper, (June), (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=556823).
  • Grody, Allan D., Harmantzis, Fotios C., and Kaple, Gregory J., (2005), “Operational Risk and Reference Data: Exploring Costs, Capital Requirements and Risk Mitigation,” Working Paper, (November), Stevens Institute of Technology, Hoboken, New Jersey.
  • Hill, Bruce M., (1975), “A Simple General Approach to Inference about the Tail of a Distribution,” Annals of Statistics, Vol. 3, 1163–74.
  • Hoaglin, David C., (1985), “Summarizing Shape Numerically: The g-and-h Distributions,” in: Hoaglin, D. C., Mosteller, F., and J. W. Tukey (eds.), Exploring Data Tables, Trends, and Shapes, John Wiley & Sons, New York, New York, 417–513.
  • Jenkinson, Arthur F., (1955), “The Frequency Distribution of the Annual Maximum (or Minimum) Values of Meteorological Elements,” Quarterly Journal of the Royal Meteorological Society, No. 87, 145–58.
  • Jobst, Andreas A., (2007a), “Constraints of Consistent Operational Risk Measurement: Data Collection and Loss Reporting,” Journal of Financial Regulation and Compliance, (forthcoming).
  • Jobst, Andreas A., (2007b), “The Regulation of Operational Risk under the New Basel Capital Accord—Critical Issues,” International Journal of Banking Law and Regulation, Vol. 21, No. 5, 249–73.
  • Kotz, Samuel, and Nadarajah, Saralees, (2000), Extreme Value Distributions, Imperial College Press, London.
  • Larsen, Peter T., and Guha, Krishna, (2006), “US Banks Seek Looser Basel II Rules,” Financial Times, (August 3).
  • Leippold, Markus, and Vanini, Paolo, (2003), “The Quantification of Operational Risk,” SSRN Working Paper, (November).
  • Makarov, Mikhail, (2006), “Extreme Value Theory and High Quantile Convergence,” Journal of Operational Risk, Vol. 1, No. 2, 5–17.
  • Martinez, Jorge, and Boris Iglewicz, (1984), “Some Properties of the Tukey g and h Family of Distributions,” Communications in Statistics—Theory and Methods, Vol. 13, No. 3, 353–69.
  • Matz, Leonard, (2005), “Measuring Operational Risk: Are We Taxiing Down the Wrong Runways?” Bank Accounting and Finance, Vol. 18, No. 2, 3–6 and 47.
  • McCulloch, J. Huston, (1996), “Simple Consistent Estimators of Stable Distribution Parameters,” Communications in Statistics—Simulation and Computation, Vol. 15, 1109–36.
  • McNeil, Alexander J., and Saladin, Thomas, (1997), “The Peak Over Thresholds Method for Estimating High Quantiles of Loss Distributions,” ETH Preprint, Zurich.
  • Mignola, Giulio, and Ugoccioni, Roberto, (2006), “Sources of Uncertainty in Modeling Operational Risk Losses,” Journal of Operational Risk, Vol. 1, No. 2, (Summer), 33–50.
  • Mignola, Giulio, and Ugoccioni, Roberto, (2005), “Tests of Extreme Value Theory,” Operational Risk & Compliance, Vol. 6, Issue 10, (October), 325.
  • Mittnik, Stefan, and Rachev, Svetlozar T., (1996), “Tail Estimation of the Stable Index,” Applied Mathematics Letters, Vol. 9, No. 3, 53–56.
  • Moscadelli, Marco, (2004), “The Modelling of Operational Risk: Experience with the Data Collected by the Basel Committee,” in: E. L. Davis (ed.), Operational Risk: Practical Approaches to Implementation, Risk Books, Incisive Media Ltd., London, 39–105.
  • Nešlehová, Johanna, Embrechts, Paul, and Chavez-Demoulin, Valérie, (2006), “Infinite Mean Models and the LDA for Operational Risk,” Journal of Operational Risk, Vol. 1, No. 1, 3–25.
  • O’Dell, Mark, (2005), “Quantitative Impact Study 4: Preliminary Results—AMA Framework,” presentation at the Implementing an AMA for Operational Risk conference of the Federal Reserve Bank of Boston, (May 19), (http://www.bos.frb.org/bankinfo/conevent/oprisk2005/odell.pdf).
  • Pickands, James, (1981), Multivariate Extreme Value Distributions, Imperial College Press, London.
  • Pickands, James, (1975), “Statistical Inference Using Extreme Order Statistics,” Annals of Statistics, Vol. 3, 119–31.
  • Poon, Ser-Huang, Rockinger, Michael, and Tawn, Jonathan, (2003), “Extreme Value Dependence in Financial Markets: Diagnostics, Models, and Financial Implications,” Review of Financial Studies, Vol. 17, No. 2, 581–610.
  • Reiss, Rolf-Dieter, and Thomas, Michael, (1997), Statistical Analysis of Extreme Values, Birkhäuser, Basel.
  • Resnick, Sidney I., and Starica, Catalin, (1997a), “Asymptotic Behavior of Hill’s Estimator for Autoregressive Data,” Stochastic Models, Vol. 13, 703–23.
  • Resnick, Sidney I., and Starica, Catalin, (1997b), “Smoothing the Hill Estimator,” Advances in Applied Probability, Vol. 29, 271–93.
  • Rootzén, Holger, and Tajvidi, Nader, (1997), “Extreme Value Statistics and Wind Storm Losses: A Case Study,” Scandinavian Actuarial Journal, Vol. 1, 70–94.
  • Seivold, Alfred, Leifer, Scott, and Ulman, Scott, (2006), “Operational Risk Management: An Evolving Discipline,” Supervisory Insights, Federal Deposit Insurance Corporation (FDIC), (http://www.fdic.gov/regulations/examinations/supervisory/insights/sisum06/article01_risk.html).
  • Stephenson, Alec G., (2002), “evd: Extreme Value Distributions,” R News, Vol. 2, No. 2, 31–32, (http://CRAN.R-project.org/doc/Rnews/).
  • The Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of Thrift Supervision (OTS), (2007), Proposed Supervisory Guidance for Internal Ratings-based Systems for Credit Risk, Advanced Measurement Approaches for Operational Risk, and the Supervisory Review Process (Pillar 2) Related to Basel II Implementation—Federal Register Extract, Federal Information & News Dispatch, Inc., (February 28), (http://www.fdic.gov/regulations/laws/publiccomments/basel/oprisk.pdf).
  • The Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of Thrift Supervision (OTS), (2006b), Risk-Based Capital Guidelines: Internal Ratings-Based Capital Requirement, Notice of Proposed Rulemaking, (September 25).
  • The Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of Thrift Supervision (OTS), (2006a), Risk-Based Capital Guidelines: Internal Ratings-Based Capital Requirement, Advanced Notice of Proposed Rulemaking, (August 4), (http://www.federalreserve.gov/BoardDocs/Press/bcreg/2006/20060206/attachment.pdf and http://www.fdic.gov/regulations/laws/publiccomments/basel/anprriskbasedcap.pdf).
  • The Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of Thrift Supervision (OTS), (2005), Results of the 2004 Loss Data Collection Exercise for Operational Risk, (May).
  • The Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of Thrift Supervision (OTS), (2003), Operational Risk Advanced Measurement Approaches for Regulatory Capital, Joint Supervisory Guidance, (July 2).
  • Tukey, John W., (1977), Exploratory Data Analysis, Addison-Wesley, Reading.
  • Vandewalle, Bjorn, Beirlant, Jan, and Hubert, Mia, (2004), “A Robust Estimator of the Tail Index Based on an Exponential Regression Model,” in: Hubert, M., Pison, G., Struyf, A., and S. Van Aelst (eds.), Theory and Applications of Recent Robust Methods (Series: Statistics for Industry and Technology), Vol. 10, Birkhäuser, Basel, 367–76, (http://www.wis.kuleuven.ac.be/stat/Papers/tailindexICORS2003.pdf).
  • Zamorski, Michael J., (2003), “Joint Supervisory Guidance on Operational Risk Advanced Measurement Approaches for Regulatory Capital—Board Memorandum,” Federal Deposit Insurance Corporation (FDIC), Division of Supervision and Consumer Protection, (July), (http://www.fdic.gov/regulations/laws/publiccomments/basel/boardmem-oprisk.pdf).

FORMULAE

Definition and Parametric Specification of the Generalized Pareto Distribution (GPD)

55. The parametric estimation of extreme value type tail behavior under GPD requires a threshold selection that guarantees asymptotic convergence to GEV. GPD approximates GEV according to the Pickands-Balkema-de Haan limit theorem only if the sample mean excess is positive and linear and satisfies the Fisher-Tippett (1928) theorem. It is commonly specified as the conditional mean excess distribution \(F^{[t]}(x) = \Pr(X - t \le x \mid X > t)\) of an ordered sequence of exceedance values \(Y = \max(X_1,\ldots,X_n)\) from i.i.d. random variables, which measures the residual risk beyond threshold t ≥ 0 (Reiss and Thomas, 1997). GPD with threshold t → ∞ represents the (only) continuous approximation of GEV (Castillo and Hadi, 1997)

$$F^{[a_n t + b_n]}\big(a_n(t+s)+b_n\big) \;\approx\; W_{\xi}^{[t]}(s) = 1 + \log G_{\xi}\!\left(\frac{s}{1+\xi t}\right), \qquad \text{(A.1)}$$

where x > t ≥ 0 and \(\frac{1}{n}\sum_{i=1}^{n} I_{\{X_i > t\}} = k/n\) denotes the empirical share of exceedances under the assumption of stationarity and ergodicity (Falk et al., 1994), so that

$$W_{\xi,\beta}^{[t]}(x) = \begin{cases} 1-\left(1+\xi x/\beta\right)^{-1/\xi} & \xi \neq 0 \\ 1-\exp\left(-x/\beta\right) & \xi = 0 \end{cases} \qquad \text{(A.2)}$$

unifies the exponential (GP0), Pareto (GP1) and beta (GP2) distributions, with shape parameter ξ = 0 defined by continuity (Jenkinson, 1955). The support of x is x ≥ 0 when ξ ≥ 0 and 0 ≤ x≤− β/ξ when ξ < 0.

56. It is commonplace to use the so-called Peak-over-Threshold (POT) method (Embrechts et al., 1997; McNeil and Saladin, 1997; Kotz and Nadarajah, 2000) for the GPD fit to the order statistics of fat-tailed empirical data. POT estimates the asymptotic tail behavior of the upper order statistics \(x_{n-k+1:n},\ldots,x_{n:n}\) of extreme values as i.i.d. random variables beyond threshold value t ≥ 0, whose parametric specification \(W_{\xi,\mu,\sigma}^{[t]}(x) = W_{\xi,t,\sigma+\xi(t-\mu)}(x)\) is extrapolated to a region of interest for which no (i.e., out-of-sample) or only a few observations (i.e., in-sample) are available.

57. The threshold choice of POT involves a delicate trade-off between model accuracy and estimation bias contingent on the absolute order of magnitude of extremes. The threshold quantile must be sufficiently high to support the parametric estimation of residual risk while leaving a sufficient number of extreme observations to maintain linear mean excess without inducing parameter uncertainty. Although a low threshold would allow a greater number of exceedances to inform a more robust parameter estimation of asymptotic tail behavior, the declaration of more extremes implies a higher chance of dependent extremes in violation of the convergence property of GPD as limit distribution under GEV. By the same token, an excessively restrictive threshold choice might leave too few maxima for a reliable parametric fit without increasing estimation risk.
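For illustration, a minimal POT fit can be sketched as follows (in Python, using scipy's generalized Pareto distribution); the quantile used as threshold is an assumption of this example and would be varied in practice.

```python
# Minimal sketch: peak-over-threshold (POT) fit of a GPD to the exceedances of a
# loss sample over a threshold set at a chosen empirical quantile.
import numpy as np
from scipy.stats import genpareto

def fit_pot(losses, threshold_quantile=0.90):
    x = np.asarray(losses, dtype=float)
    t = np.quantile(x, threshold_quantile)         # threshold t
    excesses = x[x > t] - t                        # exceedances over t
    xi, _, beta = genpareto.fit(excesses, floc=0)  # shape xi, scale beta (loc fixed at 0)
    return {"threshold": t, "xi": xi, "beta": beta,
            "k": excesses.size, "n": x.size}
```

Re-estimating the fit over a range of threshold quantiles and inspecting the stability of the shape parameter and the linearity of the sample mean excess is one common way to navigate the trade-off described above.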

58. Alternatively, a suitable threshold can also be selected by the timing of occurrence of extremes. In order to identify extremes through a measure of relative magnitude based on a time-varying threshold, we divide the original loss data series into equally sized, non-overlapping blocks and select the maximum value from each block in order to obtain a series of maxima consistent with the assumption of i.i.d. extreme observations. Block sizes need to be chosen so as to mitigate bias caused by clustered extremes during times of high volatility.41
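A corresponding sketch of this block-maxima selection (in Python; the block size is an assumption to be tuned against volatility clustering):

```python
# Minimal sketch: select extremes via a time-varying threshold by taking the
# maximum of equally sized, non-overlapping blocks of the chronologically
# ordered loss series.
import numpy as np

def block_maxima(losses, block_size=20):
    x = np.asarray(losses, dtype=float)
    n_blocks = x.size // block_size                 # incomplete trailing block is dropped
    return x[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
```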

59. The locally estimated GPD function \(W_{\tilde{\xi},t,\tilde{\sigma}}(x)\) for the \(k=\sum_{i=1}^{n} I_{\{x_i > t\}}\) exceedances over the selected threshold \(t = x_{n-k+1:n}\) is then fitted to the entire empirical distribution \(W_{\tilde{\xi},\tilde{\mu},\tilde{\sigma}}(t) = (n-k)/n\) over sample size n by selecting location and scale parameters \(\hat{\mu}\) and \(\hat{\sigma}\) such that

$$W_{\hat{\xi},\hat{\mu},\hat{\sigma}}^{[t]}(x) = W_{\tilde{\xi},\tilde{\mu},\tilde{\sigma}}(x). \qquad \text{(A.3)}$$

60. By keeping the shape parameter \(\hat{\xi} = \tilde{\xi}\) constant, the first two moments are reparameterized to \(\hat{\sigma} = \tilde{\sigma}\,(k/n)^{\tilde{\xi}}\) and \(\hat{\mu} = t - (\tilde{\sigma} - \hat{\sigma})/\tilde{\xi}\). Therefore, the estimated GPD quantile function is

$$\hat{x}_p = t + \frac{\tilde{\sigma}}{\tilde{\xi}}\left(\left(\frac{n}{k}(1-p)\right)^{-\tilde{\xi}} - 1\right) \approx W_{\xi,\beta}^{[t]\,-1}(p). \qquad \text{(A.4)}$$
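A minimal sketch of the resulting quantile estimator (in Python), using the threshold t, the number of exceedances k, the sample size n, and the GPD shape and scale fitted to the exceedances over t (e.g., from the POT sketch above); the numerical inputs in the usage comment are purely hypothetical.

```python
# Minimal sketch of the GPD tail quantile estimator in (A.4); requires xi != 0.
def gpd_quantile(p, t, xi, sigma, n, k):
    """Estimated p-th quantile of the loss distribution from a POT fit:
    t: threshold, xi: GPD shape, sigma: GPD scale fitted to the exceedances,
    n: sample size, k: number of exceedances over t."""
    return t + (sigma / xi) * (((n / k) * (1.0 - p)) ** (-xi) - 1.0)

# e.g., a 99.9 percent quantile from a hypothetical fit:
# gpd_quantile(0.999, t=2.5e6, xi=0.7, sigma=1.8e6, n=5000, k=250)
```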

61. We qualify the suitability of a certain type of GPD estimation method on its ability to align sample mean excess values to the analytical mean excess values of GPD as a binding convergence criterion of extreme observations to asymptotic tail behavior based on a given threshold choice. We distinguish between five major estimation types: (i) the moment estimator (Dekkers et al., 1989), (ii) the maximum likelihood estimator, (iii) the Pickands (Pickands, 1975 and 1981) estimator, (iv) the Drees-Pickands (Drees, 1995) estimator, and (v) the Hill (Hill, 1975; Drees et al., 1998) estimator.
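As an illustration of one of these estimators, a minimal sketch of the Hill (1975) estimator of the tail index over the k largest order statistics (in Python):

```python
# Minimal sketch: Hill (1975) estimator of the tail index from the k largest
# order statistics of a strictly positive loss sample (requires k < sample size).
import numpy as np

def hill_estimator(losses, k):
    x = np.sort(np.asarray(losses, dtype=float))    # ascending order statistics
    return np.mean(np.log(x[-k:])) - np.log(x[-k - 1])
```

Plotting the estimate against k (a Hill plot) is the usual device for judging its stability (Drees et al., 1998).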

62. The bivariate extreme value distribution G(x1, x2) can be expressed as

$$G(x_1,x_2) = \exp\!\left(-\left(y_1+y_2\right)A\!\left(\frac{y_1}{y_1+y_2}\right)\right), \qquad \text{(A.5)}$$

where the jth univariate marginal distribution

$$y_j = y_j(x_j) = \left(1+\xi_j\,(x_j-\mu_j)/\sigma_j\right)_{+}^{-1/\xi_j} \quad (\text{for } j=1,2) \qquad \text{(A.6)}$$

constitutes GEV with b+ = max (b, 0), scale parameter σ > 0, location parameter μ and shape parameter ξ. The dependence function A(.) characterizes the dependence structure of G(x1, x2). It is a convex function on [0,1] with A(0) = A(1) = 1 and max(ω, 1 − ω) ≤ A(ω) ≤ 1 for all 0 ≤ ω ≤ 1. Parametric models are commonly used for inference of multivariate extreme value distributions, of which the logistic model with distribution function \(G(x_1,x_2;\alpha)=\exp\!\left(-\left(y_1^{1/\alpha}+y_2^{1/\alpha}\right)^{\alpha}\right)\) and dependence parameter α ∈ (0,1] appears to be the most widely used. While α = 1 indicates complete independence, two or more extreme value distributions reach complete dependence as α approaches zero.
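For illustration, the bivariate logistic model in this parameterization can be evaluated as follows (in Python); the marginal GEV parameters passed in are assumptions of the example, and the sketch presumes non-zero shape parameters.

```python
# Minimal sketch: bivariate logistic extreme value distribution function built
# from GEV marginals (mu_j, sigma_j, xi_j) and dependence parameter alpha in (0, 1].
import numpy as np

def logistic_bev_cdf(x1, x2, mu, sigma, xi, alpha):
    y = []
    for xj, mj, sj, kj in zip((x1, x2), mu, sigma, xi):
        z = np.maximum(1.0 + kj * (xj - mj) / sj, 0.0)  # (.)_+ operator in (A.6)
        y.append(z ** (-1.0 / kj))                      # transformed marginal y_j
    y1, y2 = y
    return np.exp(-((y1 ** (1.0 / alpha) + y2 ** (1.0 / alpha)) ** alpha))

# alpha = 1 factorizes into the product of the GEV marginals (independence);
# as alpha -> 0 the expression approaches exp(-max(y1, y2)) (complete dependence).
```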

Definition and Parametric Specification of the g-and-h Distribution (GHD)

63. In line with Dutta and Perry (2006) as well as Degen et al. (2006), we also examine the g-and-h distribution as an alternative generalized parametric model to estimate the residual risk of extreme losses. The g-and-h family of distributions was first introduced by Tukey (1977) and represents a strictly increasing transformation of a standard normal variable z according to

$$F_{g,h}(z) = \mu + \sigma\,\frac{\exp(gz)-1}{g}\,\exp\!\left(h z^2/2\right), \qquad \text{(A.7)}$$

where μ, σ, g, and h ≥ 0 are the location, scale, skewness, and kurtosis parameters of distribution F_{g,h}(z), whose domain of attraction includes all real numbers. The parameters g and h can either be constants or real-valued (polynomial) functions of z² (as long as the transformational structure (exp(gz) − 1)exp(hz²/2)g⁻¹ is a monotonic function almost surely).42 If μ = 0, then F_{g,h}(z) = −F_{−g,h}(−z), which implies that a change in the sign of g only changes the direction of the skewness but not its magnitude. When h = 0, GHD reduces to F_{g,0}(z) = μ + σ(exp(gz) − 1)g⁻¹ (“g-distribution”), which exhibits skewness but lacks the slowly converging, asymptotic tail decay of extreme quantiles of F_{0,h}(z) = μ + σz exp(hz²/2) (“h-distribution”) for g = 0.

64. Martinez and Iglewicz (1984) show that GHD can approximate probabilistically the shapes of a wide variety of different data and distributions (including GEV and GPD) by choosing the appropriate parameter values. Its basic structure is predicated on order statistics, which makes it particularly useful to study the tail behavior of heavily skewed loss data.43 Since this distribution is merely a transformation of the standard normal distribution, it also provides a useful probability function for the generation of random numbers through Monte Carlo simulation. Given the transformational structure of the standard normal distribution, the quantile-based method (McCulloch, 1996; Hoaglin, 1985) is typically used for parametric estimation of GHD and can deliver more accurate empirical tail fit than conventional estimation methods, such as the method of moments and maximum likelihood estimation (MLE). Dutta and Babbel (2002) provide a detailed description of the estimation method.44
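For illustration, the g-and-h transformation in (A.7) can be used directly for Monte Carlo generation of heavy-tailed losses, as in the following sketch (in Python); the parameter values are purely illustrative, and the sketch assumes g ≠ 0 and h ≥ 0.

```python
# Minimal sketch: g-and-h transformation of a standard normal variate (A.7) and
# its use to simulate a skewed, heavy-tailed loss sample by Monte Carlo.
import numpy as np

def g_and_h(z, mu=0.0, sigma=1.0, g=0.5, h=0.2):
    return mu + sigma * (np.exp(g * z) - 1.0) / g * np.exp(h * z ** 2 / 2.0)

rng = np.random.default_rng(1)
simulated_losses = g_and_h(rng.standard_normal(100_000))  # skewed, heavy-tailed draws
```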

FOOTNOTES

1

This definition includes legal risk from the failure to comply with laws as well as prudent ethical standards and contractual obligations, but excludes strategic and reputational risk.

2

Besides operational risk measurement, the promotion of consistent capital adequacy requirements for credit and market risk as well as new regulatory provisions for asset securitization were further key elements of the reforms, which began in 1999. Although the revision of the old capital rules was originally set for completion in 2000, protracted negotiations and strong criticism by the banking industry of a first regulatory framework published in May 2004 delayed the release of the new guidelines for the International Convergence of Capital Measurement and Capital Standards (New Basel Capital Accord or short “Basel II”) until June 2006, with an implementation expected in over 100 countries by early 2007.

3

National supervisory authorities have substantial discretion (“supervisory review”) in determining the scope of implementation of the Basel II framework. For instance, the Advanced Notice on Proposed Rulemaking (ANPR) on Risk-Based Capital Guidelines: Internal Ratings-Based Capital Requirement (2006a) by U.S. regulators requires only large, internationally active banking organizations with total assets of US$250 billion or more and total on-balance sheet foreign exposures of US$10 billion or more to adopt the Basel II guidelines on capital rules.

4

A unit of measure represents the level at which a bank’s operational risk quantification system generates a separate distribution of potential operational risk losses (Seivold et al., 2006). A unit of measure could be defined on aggregate (i.e., enterprise-wide) or as either a BL, an ET category, or both. The Basel Committee specifies eight BLs and seven ETs for operational risk reporting in the working paper on Sound Practices for the Management and Supervision of Operational Risk (2003a). According to the Operational Risk Subgroup (AIGOR) of the Basel Committee Accord Implementation Group, the eight BLs are: (i) corporate finance, (ii) trading and sales, (iii) retail banking, (iv) payment and settlement, (v) agency services, (vi) commercial banking, (vii) asset management, and (viii) retail brokerage. The seven ETs are: (i) internal fraud, (ii) external fraud, (iii) employment practices and workplace safety, (iv) clients, products and business practices, (v) damage to physical assets, (vi) business disruption and system failure, and (vii) execution, delivery and process management. This categorization was instrumental in bringing about greater uniformity in data classification across financial institutions.

5

At national supervisory discretion, a bank can be permitted to apply the Alternative Standardized Approach (ASA) if it provides an improved basis for the calculation of minimum capital requirements by, for instance, avoiding double counting of risks (Basel Committee, 2004a and 2005a).

6

The three-year average of a fixed percentage of gross income (BIA) or the summation of prescribed capital charges for various BLs (TSA) exclude periods in which gross income is negative from the calculation of risk-weighted assets (RWAs), whose periodic aggregate determines the required capitalization of a bank, i.e., the risk-based capital (RBC).

7

The quantitative criteria of AMA also offer the possibility of capital adjustment due to diversification benefits from the correlation between extreme internal operational risk losses and the risk mitigating impact of insurance.

8

Many banks typically model economic capital at a confidence level between 99.96 and 99.98 percent, which implies an expected default rate comparable to “AA”-rated credit exposures.

9

The AMA-based capital charge covers total operational risk exposure unless EL is already offset by eligible reserves under the Generally Accepted Accounting Principles (GAAP) (“EL breakout” or “EL offset”), such as capital-like substitutes, or some other conceptually sound method to control for losses that arise from normal operating circumstances.

10

U.S. federal bank regulators also specify five years of internal operational risk loss data and permit the use of external data for the calculation of regulatory capital for operational risk in their advanced notice on proposed rulemaking on Risk-Based Capital Guidelines (2006b). In contrast, the Basel Committee (2006b, 2005a and 2004a) requires only three years of data after initial adoption of AMA and then five years. Moreover, for U.S.-supervised financial institutions, AMA is the only permitted quantification approach for operational risk according to the Joint Supervisory Guidance on Operational Risk Advanced Measurement Approaches for Regulatory Capital (2003) and the Advanced Notice on Proposed Rulemaking (ANPR) on Risk-Based Capital Guidelines: Internal Ratings-Based Capital Requirement (2006a).

11

VaR defines an extreme quantile as maximum limit on potential losses that are unlikely to be exceeded over a given time horizon (or holding period) at a certain probability.

12

The introduction of a volume-based capital charge coincided with an alternative volume based charge developed in the EU Regulatory Capital Directive.

13

Concerns about this trade-off also entered into an inter-sectoral debate about the management and regulation of operational risk. In August 2003, the Joint Forum of banking, securities, and insurance supervisors of the Basel Committee issued a paper on Operational Risk Transfer Across Financial Sectors (2003a), which compared approaches to operational risk management and capital regulation across the three sectors in order to gain a better understanding of current industry practices. In November 2001, a Joint Forum working group made up of supervisors from all three sectors had already produced a report on Risk Management Practices and Regulatory Capital: Cross-Sectoral Comparison (2001b) on the same issues.

14

The Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of Thrift Supervision (OTS) issued the NPR and the Joint Supervisory Guidance on an interagency basis.

15

The NPR was published as the Proposed Supervisory Guidance for Internal Ratings-based Systems for Credit Risk, Advanced Measurement Approaches for Operational Risk, and the Supervisory Review Process (Pillar 2) Related to Basel II Implementation (2007), which stipulated that the implementation of the new regulatory regime in the U.S. would require some and permit other qualifying banks to calculate their risk-based capital requirements using the internal ratings-based approach (IRB) for credit risk and AMA for operational risk (together, the “advanced approaches”). The guidance provided additional details on the advanced approaches and the supervisory review process to help banks satisfy the qualification requirements of the NPR. The proposed AMA guidance identifies supervisory standards for an acceptable internal measurement framework, while the guidance on the supervisory review process addresses three fundamental objectives: (i) the comprehensive supervisory assessment of capital adequacy, (ii) the compliance with regulatory capital requirements, and (iii) the implementation of an internal capital adequacy assessment process (ICAAP) (Anonymous, 2007).

16

The stand-alone AMA capital requirements may include a well-reasoned estimate of diversification benefits of the subsidiary’s own operations, but may not consider group-wide diversification benefits.

17

Pursuant to this provision, the stand-alone AMA calculation of significant subsidiaries could rely on data and parameters calculated by the parent banking group on a group-wide basis, provided that those variables were adjusted as necessary to be consistent with the subsidiary’s operations.

18

The Federal Financial Institutions Examination Council (FFIEC) was established on March 10, 1979, pursuant to title X of the Financial Institutions Regulatory and Interest Rate Control Act of 1978 (FIRA), Public Law 95-630. The FFIEC is a formal interagency body empowered to prescribe uniform principles, standards, and report forms for the federal examination of financial institutions by the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS) and to make recommendations to promote uniformity in the supervision of financial institutions.

19

More information about the 2004 LDCE can be found at http://www.ffiec.gov/ldce/ (FFIEC). After conclusion of the LDCE, U.S. bank regulators published the Results of the 2004 Loss Data Collection Exercise for Operational Risk (2005). Some findings have also been published at www.bos.frb.org/bankinfo/conevent/oprisk2005/defontnouvelle.pdf and www.bos.frb.org/bankinfo/qau/pd051205.pdf by the Federal Reserve Bank of Boston. See also de Fontnouvelle (2005).

20

The LDCE asked participating banks to provide all internal loss data underlying their QIS-4 estimates (instead of one year’s worth of data only) (Federal Reserve Board, 2006). A total of 23 U.S. commercial banks participated in the LDCE. Banking organizations were asked to report information about the amount of individual operational losses as well as certain descriptive information (e.g., date, internal business line (BL), event type (ET), and amount of any recoveries) regarding each loss that occurred on or before June 30, 2004 or September 30, 2004. Banks were also requested to define their own mappings from internally defined BLs and ETs—as units of measure—to the categorization under the New Basel Capital Accord for reporting purposes (instead of standardized BLs and ETs).

21

This compilation included the International Convergence of Capital Measurement and Capital Standards (Basel Committee, 2004a and 2005a), the elements of the 1988 Accord that were not revised during the Basel II process, and the November 2005 paper Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework (Basel Committee, 2005b).

22

Alternatively, extremes could be selected from constant time intervals defined by rolling windows with daily, weekly, or monthly updating (Coleman and Cruz, 1999; Cruz et al., 1998) in order to mitigate problems associated with a time-invariant qualification of extreme observations.
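
As a minimal, purely illustrative Python sketch of such a rolling-window selection (the loss values, window length, and updating scheme are assumptions, not the specification used in the cited studies):

import pandas as pd

# Hypothetical operational loss amounts indexed by event date.
losses = pd.Series(
    [1.2e5, 3.4e4, 8.9e5, 2.1e4, 5.6e5],
    index=pd.to_datetime(
        ["2004-01-05", "2004-01-19", "2004-02-02", "2004-02-23", "2004-03-08"]
    ),
)

# Largest loss within the 30 days preceding each recorded event
# (the window length is an illustrative choice). The resulting series of
# window maxima replaces a single, time-invariant threshold as the set of
# "extremes" entering the tail estimation.
rolling_maxima = losses.rolling("30D").max()
print(rolling_maxima)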

23

Note that our advocacy of a volume-based adjustment in this case is limited to the use of scaling for the purpose of threshold selection only. In general, the loss severity of operational risk events is independent of business volume, unless banks differ vastly in terms of balance sheet value.

24

GEV and GPD are the most prominent methods under EVT to assess parametric models for the statistical estimation of the limiting behavior of extreme observations. While GEV identifies the asymptotic tail behavior of the order statistics of i.i.d. normalized extremes, GPD is an exceedance function that measures the residual risk of these extremes (as conditional distribution of mean excess) beyond a predefined threshold for regions of interest, where only a few or no observations are available. GPD approximates GEV if linear mean excess converges to a reliable, non-degenerate limiting distribution that satisfies the extremal types (Fisher-Tippett) theorem of GEV. See Vandewalle et al. (2004), Stephenson (2002), and Coles et al. (1999) for additional information on the definition of EVT.
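
For reference, the standardized forms of the two limit laws referred to above can be written as

\[
H_{\xi}(x) = \exp\!\left[-(1+\xi x)^{-1/\xi}\right], \qquad 1+\xi x > 0,
\]

for the GEV and

\[
G_{\xi,\beta}(y) = 1-\left(1+\frac{\xi y}{\beta}\right)^{-1/\xi}, \qquad y = x-u \ge 0,
\]

for the GPD of excesses over a threshold \(u\), where \(\xi\) is the shape parameter and \(\beta>0\) the scale; the limits \(\xi \to 0\) yield the Gumbel and exponential cases, respectively.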

25

The Pickands-Balkema-de Haan limit theorem (Balkema and de Haan, 1974; Pickands, 1975) postulates that GPD is the only non-degenerate limit law of observations in excess of a sufficiently high threshold, whose distribution satisfies the extremal types theorem of GEV.
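
A minimal Python sketch of the peaks-over-threshold estimation implied by this limit law, using scipy's generalized Pareto implementation on simulated (hypothetical) loss data; the threshold here is an illustrative choice, not the selection procedure discussed in the text:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical heavy-tailed loss sample (lognormal severities, for illustration).
losses = rng.lognormal(mean=10.0, sigma=2.0, size=5_000)

# Illustrative threshold: the empirical 95th percentile.
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u

# Fit the GPD to the excesses; location is fixed at zero because the theorem
# refers to the distribution of exceedances above the threshold.
shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)

# Tail quantile (e.g., the 99.9th percentile of the loss distribution) via the
# standard peaks-over-threshold formula.
p = 0.999
n, n_u = losses.size, excesses.size
var_999 = u + (scale / shape) * (((n / n_u) * (1.0 - p)) ** (-shape) - 1.0)
print(shape, scale, var_999)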

26

An optimal threshold value for GPD would support a stable parametric approximation of GEV and linear mean excess of extremes, while allowing in-sample point estimation within a maximum range of percentiles.
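
One common diagnostic for such a threshold choice is the empirical mean excess function, which should be approximately linear in the region where the GPD approximation is stable; a minimal sketch on hypothetical data (the candidate grid is an assumption):

import numpy as np

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=10.0, sigma=2.0, size=5_000)  # hypothetical losses

def mean_excess(losses, thresholds):
    # Empirical mean excess e(u): average of (X - u) over observations X > u.
    return np.array([(losses[losses > u] - u).mean() for u in thresholds])

# Candidate thresholds between the 80th and 99th empirical percentiles
# (an illustrative grid); approximate linearity of e(u) above a candidate u
# indicates a region where the GPD approximation holds.
candidates = np.quantile(losses, np.linspace(0.80, 0.99, 20))
e_u = mean_excess(losses, candidates)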

27

Since this distribution is merely a transformation of the standard normal distribution, GHD is also a useful probability function for the generation of random numbers in the course of Monte Carlo simulation.
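
Assuming GHD here denotes the g-and-h family used by Dutta and Perry (2006), random draws can be generated by transforming standard normal variates; the parameter values in the sketch below are purely illustrative, not estimates from the paper:

import numpy as np

def g_and_h_draws(a, b, g, h, size, seed=None):
    # Draws from the g-and-h distribution as a transformation of standard
    # normal variates Z:  Y = a + b * (exp(g*Z) - 1) / g * exp(h * Z**2 / 2).
    # Parameter names follow the usual Tukey g-and-h convention.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(size)
    return a + b * (np.expm1(g * z) / g) * np.exp(h * z**2 / 2.0)

# Illustrative parameters: positive skewness (g > 0) and heavy tails (h > 0).
simulated_losses = g_and_h_draws(a=0.0, b=1.0, g=2.0, h=0.2, size=100_000, seed=1)

In a loss distribution approach, such severity draws would typically be combined with simulated loss frequencies (e.g., Poisson) to build up the aggregate loss distribution.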

28

They also report that all operational risk loss data reported by U.S. financial institutions in the course of QIS-4 (see Box 2) conformed to GHD to a very high degree.

29

Compared to Mignola and Ugoccioni (2006), the model specification in Jobst (2007a) also generates markedly lower estimation error and closer upper tail convergence of GPD, partly because higher moments of the loss generating function approximate the average empirical loss profile of U.S. banks and maximum loss severity is left unbounded (rather than being calibrated to the standardized regulatory capital charge for operational risk).

30

In this approach, high quantiles of the aggregate loss distribution can be calculated analytically by a scaled multiplication of the largest order statistic, provided that the largest observations follow a power law.
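
One way to make this operational is a Weissman-type tail quantile extrapolation (shown here as a sketch under the assumption of a Pareto-type tail with tail index \(1/\xi\) estimated from the \(k\) largest of \(n\) observations; this is a generic EVT device rather than necessarily the exact estimator used in the cited work):

\[
\hat{x}_{p} = X_{(n-k)}\left(\frac{k}{np}\right)^{\hat{\xi}}, \qquad p \ll \frac{k}{n},
\]

where \(X_{(n-k)}\) denotes the \((k+1)\)-th largest order statistic and \(p\) the tail probability of interest. Under the single-loss approximation for heavy-tailed severities, a high quantile of the aggregate annual loss is then driven by this severity quantile evaluated at a tail probability scaled by the expected annual loss frequency.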

31

GPD risk estimates in Jobst (2007a) imply capital savings of up to almost 97 percent compared to a uniform measure of operational risk exposure and do not corroborate the 16.79 percent capital-to-gross-income ratio in Dutta and Perry (2006), whose high GPD risk estimates could have resulted from their choice of the Hill algorithm as estimation method. Mittnik and Rachev (1996) found that the Hill estimation algorithm yields highly unstable estimates for samples with fewer than 500,000 observations and returns inaccurate results of asymptotic tail behavior for loss data with a non-zero left endpoint due to loss reporting thresholds.
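
For reference, the Hill estimator in question is computed from the \(k\) largest order statistics as

\[
\hat{\xi}^{\,\mathrm{Hill}}_{k,n} = \frac{1}{k}\sum_{i=1}^{k}\ln X_{(n-i+1)} - \ln X_{(n-k)},
\]

and is neither shift-invariant nor stable across choices of \(k\), which is consistent with the inaccuracies reported for loss data truncated at a reporting threshold.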

32

Empirical evidence suggests that high operational risk losses occur especially when managers are grossly overconfident about the existing governance and control mechanisms (Matz, 2005).

33

Banks have developed quite different methods for determining operational risk capital with varied emphasis given to the categorization of BLs and ETs as defined by the Basel Committee.

34

The Operational Risk Subgroup of the Basel Committee Accord Implementation Group also found that the measurement of operational risk is limited by the quality of internal loss data, which tends to cover short observation periods and includes very few, if any, high-severity losses (which can dominate a bank’s historical loss experience).

35

In August 2006, representatives of Bank of America, J.P. Morgan Chase, Wachovia, and Washington Mutual sought to convince the Federal Reserve Board that they should be allowed to adopt a simplified version of the New Basel Capital Accord (Larsen and Guha, 2006), mainly because additional restrictions had raised the attendant cost of implementing more sophisticated risk measurement systems for such advanced models to a point where potential regulatory capital savings were virtually offset.

36

In general, risk measures for different operational risk estimates (by BL and/or ET) must be added for purposes of calculating the regulatory minimum capital requirement. However, a bank may be permitted to use internally determined correlations in operational risk losses across individual operational risk estimates, provided it can demonstrate to the satisfaction of the national supervisor that its systems for determining correlations across different units of measure are sound, implemented with integrity, and take into account the uncertainty surrounding any such correlation estimates (particularly in periods of stress). The bank must validate its correlation assumptions using appropriate quantitative and qualitative techniques.
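
As an illustration of what is at stake (this is a common industry-style aggregation, not a formula prescribed by the Accord), per-unit-of-measure capital estimates can be combined with an assumed correlation matrix and compared to the default simple sum, to which the aggregation collapses only under perfect dependence:

import numpy as np

# Hypothetical stand-alone capital estimates for three units of measure
# (e.g., BL/ET combinations) and an assumed correlation matrix.
c = np.array([120.0, 80.0, 45.0])
R = np.array([
    [1.0, 0.3, 0.2],
    [0.3, 1.0, 0.4],
    [0.2, 0.4, 1.0],
])

simple_sum = c.sum()              # perfect-dependence benchmark (default treatment)
diversified = np.sqrt(c @ R @ c)  # variance-covariance style aggregation
print(simple_sum, diversified)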

37

These concerns are not valid for the representation of aggregate operational risk losses (without further classification by BL and/or ET) in BIA.

38

In this simplified trade-off, we do not consider other elements of operational risk regulation, such as capital adjustment, home-host recognition, and volume-based measures, which can impede the consistent calculation of regulatory capital in certain situations.

39

Large differences between capital charges under AMA and capital charges under standardized approaches hint at a prima facie inconsistency of the current regulatory framework and call for a lower fixed capital multiplier of standardized approaches, consistent with AMA risk estimates at the required level of statistical confidence.

40

Such a development of lower individual risk aversion is currently under way in credit risk transfer markets, where financial innovation has made banks more cavalier about riskier lending as derivatives continue to play a benign role in spreading credit risk throughout the financial system. As long as markets remain stable and prove robust, greater reliance is placed on the resilience of the financial system, while moral hazard intensifies potential systemic vulnerabilities to credit risk across institutions and national boundaries as credit risk transfer induces less risk-averse behavior.

41

Resnick and Stăriča (1997a and 1997b) propose the standardization of extreme observations to temper possible bias and inherent constraints of discrete threshold selection. For instance, time-weighted adjustments of loss frequency and the normalization of loss amounts by some fundamental data as scaling factors could be possible approaches to redress a biased threshold selection contingent on sample composition.
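
A purely illustrative implementation of such a normalization, scaling each loss by a size measure (here, hypothetically, the reporting bank’s gross income in the loss year) before selecting exceedances, could look as follows; this is a sketch of the idea, not the procedure proposed in the cited papers:

import numpy as np

# Hypothetical loss amounts and the gross income of the reporting bank in the
# year each loss occurred (both in the same currency unit).
loss_amounts = np.array([2.5e6, 4.0e5, 1.1e7, 9.0e5])
gross_income = np.array([3.0e9, 1.2e9, 8.5e9, 1.2e9])

# Normalized (scaled) losses, so that the exceedance threshold is defined in
# relative rather than absolute terms across banks of different size.
scaled = loss_amounts / gross_income
threshold = np.quantile(scaled, 0.90)   # illustrative relative threshold
extremes = loss_amounts[scaled > threshold]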

42

The region where the transformation function of z is not monotonic would be assigned a zero probability measure.

43

All operational risk loss data reported by U.S. financial institutions in the wake of QIS-4 conformed to GHD to a very high degree (Dutta and Perry, 2006).

44

Martinez and Iglewicz (1984), Hoaglin (1985), and, most recently, Degen et al. (2006) provide derivations of many other important properties of this distribution.
