Review and Implementation of Credit Risk Models of the Financial Sector Assessment Program (FSAP)

Abstract

The paper presents the basic Credit Risk+ model, and proposes some modifications. This model could be useful in the stress-testing process of financial sector assessments as a benchmark for credit risk evaluations. First, we present the setting and basic definitions common to all the model specifications used in this paper. Then, we proceed from the simplest model, based on Bernoulli-distributed default events and known default probabilities, to the fully-fledged Credit Risk+ implementation. The latter is based on the Poisson approximation and uncertain default probabilities determined by mutually independent risk factors. As an extension, we present a Credit Risk+ specification with correlated risk factors as in Giese (2003). Finally, we illustrate the characteristics and the results obtained from the different models using a specific portfolio of obligors.

I. Introduction

The changes in the supervisory framework, as put forward in the Basel II capital accord, which try to bridge the gap between regulatory capital and economic capital, are requiring regulators and supervisors to communicate with market participants using new language and new tools. In fact, the development of the internal ratings-based (IRB) approach, as envisaged in the new regulations, and the more systematic collection of detailed data on exposures and recovery rates are expected to allow more and more financial institutions to evaluate their risk profile and to manage it based on these concepts.

Over the last ten years, we have witnessed major advances in the field of modeling credit risk. There are now three main approaches to quantitative credit risk modeling: the “Merton-style” approach, the purely empirical econometric approach, and the actuarial approach.1 Each of these approaches has, in turn, produced several models that are widely used by financial institutions around the world.

All these models share a common purpose: determining the probability distribution of the losses on a portfolio of loans and other debt instruments. Being able to compute the loss distribution of a portfolio is critical, because it allows the determination of the credit Value at Risk (VaR) and, therefore, the economic capital required by credit operations.

In this paper we present the theoretical background that underpins one of the most frequently used models for loss distribution determination: Credit Risk+. This model, originally developed by Credit Suisse Financial Products (CSFP), is based on the actuarial approach, and has quickly become one of the financial industry’s benchmarks in the field of credit risk modeling. Its popularity spilled over into the regulatory and supervisory community, prompting some supervisory authorities to start using it in their monitoring activities.2 There are several reasons why this model has become so popular:

  • It requires a limited amount of input data and assumptions;

  • It uses as basic input the same data as required by the Basel II IRB approach;

  • It provides an analytical solution for determining the loss distribution; and

  • It brings out one of the most important credit risk drivers: concentration.

We illustrate our implementation of the model and suggest that it could be used as a toolbox in the different monitoring activities of the IMF. We also analyze the problems that may arise by directly applying Credit Risk+, in its original formulation, to certain portfolio compositions. Consequently, we propose some solutions.

The paper is organized as follows: initially, we present the setting and basic definitions common to all the model specifications used in this paper. Then, we gradually proceed from the simplest model based on Bernoulli-distributed default events and default probabilities known with certainty to the fully-fledged version of Credit Risk+. The latter is based on the Poisson approximation and uncertain default probabilities determined by mutually independent risk factors. We then go beyond Credit Risk+ by presenting a specification that allows for correlation among risk factors as in Giese (2003). We also apply the implemented models to a specific portfolio of exposures to illustrate their characteristics and discuss in detail the results. Finally, we present some conclusions.

II. The Basic Model Setting

In this section, we present the setting and basic definitions common to all the model specifications used in the paper. We consider a portfolio of N obligors indexed by n = 1,…,N. Obligor n constitutes an exposure En. The probability of default of obligor n over the period considered is pn.

A. Default Events

The default of obligor n can be represented by a Bernoulli random variable Dn such that the probability of default over a given period of time is P(Dn =1) = pn, while the probability of survival over the same span is P (Dn = 0) = 1 − pn. Then, the probability generating function (PGF) of Dn is given by:3

G_{D,n}(z) = \sum_{x=0}^{\infty} P(D_n = x) \, z^x.

Since Dn can only take two values (0 and 1), GD,n (z) can be rewritten as follows:

G_{D,n}(z) = (1 - p_n) z^0 + p_n z^1 = (1 - p_n) + p_n z.

B. Losses

The loss on obligor n can be represented by the random variable Ln = Dn ⋅ En. The probability distribution of Ln is given by P(Ln = En) = pn and P(Ln = 0) = 1 − pn. The total loss on the portfolio is represented by the random variable L:

L = \sum_{n=1}^{N} L_n = \sum_{n=1}^{N} D_n E_n.

The objective is to determine the probability distribution of L under various sets of assumptions. Knowing the probability distribution of L will allow us to compute the VaR and other risk measures for the portfolio.

C. Normalized Exposures

When implementing the model it has become common practice to normalize and round the individual exposures and then group them in exposure bands. The process of normalization and rounding limits the number of possible values for L, and hence reduces the time required to compute the probability distribution of L. When the normalization factor is small relative to the total portfolio exposure, the rounding error is negligible. Let F be the normalization factor.4 The rounded normalized exposure of obligor n is denoted vn and is defined by

v_n = \lceil E_n / F \rceil. \qquad (1)

D. Normalized Losses

The normalized loss on obligor n is denoted by λn and is defined by

\lambda_n = D_n v_n,

where Dn is the default and vn is the normalized exposure for obligor n. Hence, λn is a random variable that takes value vn with probability pn, and value 0 with probability 1−pn. The total normalized loss on the portfolio is represented by the random variable λ:

\lambda = \sum_{n=1}^{N} \lambda_n.

Finding the probability distribution of λ is equivalent to finding the probability distribution of L.

E. Exposure Bands

The use of exposure bands has become a common technique in the literature to simplify the computational process. Once the individual exposures have been normalized and rounded, as shown in equation (1), common exposure bands can be defined in the following manner: the total number of exposure bands, J, is given by the highest normalized individual exposure, J = max_n {v_n}. Let j denote the index of the exposure bands. The common exposure in band j is v_j = j, and each obligor n is assigned to band j if v_n = v_j = j. Then, the expected number of defaults in band j, μ_j, is given by

\mu_j = \sum_{n \,:\, v_n = j} p_n.

Consequently, the total expected number of defaults in the portfolio, μ, is given by

\mu = \sum_{j=1}^{J} \mu_j.
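The normalization and banding steps above can be sketched in a few lines of Python. This is a minimal illustration only; the function name `make_bands` and the four-obligor portfolio are hypothetical:

```python
import math

def make_bands(exposures, pds, F):
    """Normalize exposures by F, round up (equation (1)), and
    accumulate the expected number of defaults mu_j in each band j."""
    v = [math.ceil(e / F) for e in exposures]   # v_n = ceil(E_n / F)
    J = max(v)                                   # number of bands
    mu = [0.0] * (J + 1)                         # mu[j] for j = 1..J
    for vn, pn in zip(v, pds):
        mu[vn] += pn                             # mu_j = sum of p_n with v_n = j
    return v, mu

# hypothetical four-obligor portfolio
exposures = [120.0, 250.0, 90.0, 400.0]
pds = [0.01, 0.02, 0.015, 0.03]
v, mu = make_bands(exposures, pds, F=100.0)
print(v)        # [2, 3, 1, 4]
```

With F = 100, the rounding error is at most one normalization unit per obligor, which is negligible relative to the total portfolio exposure.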

III. Model 1: A Simple Model with Non-Random Default Probabilities

In this section we present a simple model with Bernoulli-distributed default events and non-random default probabilities. The advantage of this model is that it relies on the smallest set of assumptions. As a result, the loss distribution in this model can be efficiently computed as a simple convolution, without making any approximation for the distribution of default events. This approach is particularly appropriate when default probabilities are high, and when there is little uncertainty concerning the values of these probabilities.

The key assumptions of the model are that individual default probabilities over the period considered are known with certainty, and that default events are independent among obligors. The objective is to determine the probability distribution of the total normalized loss, λ or, equivalently, the PGF of λ. Given the assumption of independence among obligors, the PGF of the total normalized loss, λ, can easily be computed from the PGF of the individual normalized loss, λn. Since λn can only take the values 0 and vn, the PGF of λn is given by

G_{\lambda,n}(z) = P(\lambda_n = 0) z^0 + P(\lambda_n = v_n) z^{v_n} = (1 - p_n) + p_n z^{v_n}.

Taking into account that λ is the sum of λn over n and since default events are independent among obligors, the PGF of λ is simply given by

G_{\lambda}(z) = \prod_{n=1}^{N} G_{\lambda,n}(z) = \prod_{n=1}^{N} \left[ (1 - p_n) + p_n z^{v_n} \right].

This product of the individual loss PGF is a simple convolution that can be computed using the Fast Fourier Transform (FFT), from the Convolution Theorem:

G_{\lambda}(z) = \mathrm{IFFT}\!\left( \prod_{n=1}^{N} \mathrm{FFT}\left[ G_{\lambda,n}(z) \right] \right) \equiv G_{\lambda,B}(z),

where IFFT is the Inverse Fast Fourier Transform. This algorithm can be efficiently implemented as long as the portfolio does not contain more than a few thousand obligors.5 This is about as far as the model can be developed analytically under the assumption that the probability distributions of default events are Bernoulli distributions. To obtain a closed-form solution for the PGF of λ when default events are represented by Bernoulli random variables, it is necessary to assume both that default events are independent among obligors and that default probabilities are known with certainty (non-random). If, instead, the probability of default is assumed to be random, computing the loss distribution requires the use of Monte Carlo simulation.6
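The FFT-based convolution above can be sketched as follows, assuming normalized integer exposures and a toy two-obligor portfolio (the function name `loss_dist_bernoulli` is our own):

```python
import numpy as np

def loss_dist_bernoulli(v, p):
    """Exact loss distribution for independent Bernoulli defaults:
    convolve the individual PGFs (1 - p_n) + p_n z^{v_n} via FFT."""
    size = sum(v) + 1                    # largest possible normalized loss + 1
    prod = np.ones(size, dtype=complex)
    for vn, pn in zip(v, p):
        g = np.zeros(size)
        g[0], g[vn] = 1.0 - pn, pn       # coefficients of G_{lambda,n}(z)
        prod *= np.fft.fft(g)            # product in frequency domain = convolution
    return np.fft.ifft(prod).real        # P(lambda = x), x = 0, ..., sum(v)

dist = loss_dist_bernoulli([1, 2], [0.1, 0.2])
print(dist)   # approx [0.72, 0.08, 0.18, 0.02]
```

For two obligors with exposures 1 and 2 and default probabilities 0.1 and 0.2, the four possible losses 0, 1, 2, 3 have probabilities 0.72, 0.08, 0.18, and 0.02, which the convolution recovers exactly up to floating-point error.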

Finding an analytical solution when probabilities are random and default events are no longer independent implies using an approximation for the distribution of the default events. This is precisely the path taken by the Credit Risk+ model when deriving closed-form expressions for the PGF of λ.

IV. Introducing the Poisson Approximation

In this section we describe in detail one of the essential assumptions of the Credit Risk+ model: the individual probabilities of default are assumed to be sufficiently small for the compound Bernoulli distribution of the default events to be approximated by a Poisson distribution. This assumption makes it possible to obtain an analytical solution even when the default probabilities are not known with certainty.

Under the assumption that default events follow a Bernoulli distribution, the PGF of obligor n's default is

G_{D,n}(z) = (1 - p_n) + p_n z = 1 + p_n (z - 1).

Equivalently:

G_{D,n}(z) = \exp\left[ \ln\left( G_{D,n}(z) \right) \right] = \exp\left[ \ln\left( 1 + p_n (z - 1) \right) \right]. \qquad (2)

If we assume that pn is very small, then pn (z − 1) is also very small under the assumption that |z| ≤ 1. Defining w = pn (z − 1), we can perform a Taylor expansion of ln(1+w) in the vicinity of w = 0 :

\ln(1 + w) = w - \frac{w^2}{2} + \frac{w^3}{3} - \cdots

Neglecting the terms of order two and above and going back to the original notation yields

\ln\left[ 1 + p_n (z - 1) \right] \approx p_n (z - 1). \qquad (3)

The assumption that justifies neglecting the terms of order two and above is that pn is “small”: the smaller pn, the smaller the (absolute) difference between ln[1 + pn (z − 1)] and pn(z − 1).

Going back to equation (2) and using the approximation from equation (3), we have:

G_{D,n}(z) \approx \exp\left[ p_n (z - 1) \right].

Writing exp[p_n(z − 1)] = exp(−p_n) exp(p_n z) and expanding exp(p_n z) as a power series in z finally yields

G_{D,n}(z) \approx \exp\left[ p_n (z - 1) \right] = \exp(-p_n) \sum_{x=0}^{\infty} \frac{p_n^x}{x!} z^x. \qquad (4)

The last member of equation (4) is the PGF of a Poisson distribution with intensity pn. Therefore, for small values of pn, the Bernoulli distribution of Dn can be approximated by a Poisson distribution with intensity pn.

Using the Poisson approximation, the probability distribution of Dn is then given by

P(D_n = x) = \exp(-p_n) \frac{p_n^x}{x!}.
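The quality of the approximation can be checked numerically. The sketch below compares the Bernoulli probabilities (1 − p_n, p_n) with the Poisson probabilities at x = 0 and x = 1 for a few illustrative values of p_n (the helper name `poisson_pmf` and the probability values are ours):

```python
import math

def poisson_pmf(x, p):
    """P(D = x) under a Poisson distribution with intensity p."""
    return math.exp(-p) * p**x / math.factorial(x)

# The gap between the Bernoulli pmf and its Poisson approximation
# is of order p^2, so it shrinks quickly as p decreases.
for p in (0.20, 0.05, 0.01):
    err = max(abs((1 - p) - poisson_pmf(0, p)),
              abs(p - poisson_pmf(1, p)))
    print(f"p = {p}: max pmf error = {err:.1e}")
```

For p = 0.01 the largest discrepancy is on the order of 10^-4, consistent with the claim that the approximation is accurate for small default probabilities.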

A new expression for the PGF of the individual normalized loss λn can also be derived using the Poisson approximation (4). The PGF of λn is defined by

G_{\lambda,n}(z) = E\left( z^{\lambda_n} \right) = E\left( z^{D_n v_n} \right), \qquad (5)

where E is the expectation operator. Since vn is not random, equation (5) can be written as:

G_{\lambda,n}(z) = \sum_{x=0}^{\infty} P(D_n = x) \, z^{x v_n} = \sum_{x=0}^{\infty} \exp(-p_n) \frac{p_n^x}{x!} \left( z^{v_n} \right)^x. \qquad (6)

Retaining only the terms for x = 0 and x = 1 (recall that the underlying Bernoulli variable D_n takes only the values 0 and 1), we can express the PGF of λ_n as:

G_{\lambda,n}(z) \approx \left( 1 + p_n z^{v_n} \right) \exp(-p_n). \qquad (7)

The factor (1 + p_n z^{v_n}) on the right side of equation (7) is the first-order Taylor expansion of exp(p_n z^{v_n}); that is, exp(p_n z^{v_n}) ≈ 1 + p_n z^{v_n}. Therefore, for small values of the probabilities p_n, the PGF of the individual normalized loss, λ_n, can be approximated by:

G_{\lambda,n}(z) = \exp\left[ p_n \left( z^{v_n} - 1 \right) \right]. \qquad (8)

V. Model 2: The Model with Known Probabilities Revisited

In this section, we apply the Poisson approximation to the basic model with known default probabilities, as presented in Section III. Hence, we derive a new expression for the PGF of the total normalized loss on the portfolio, Gλ(z), based on the expression for Gλ,n(z) from equation (8). Individual default probabilities are still assumed to be known with certainty (non-random), and default events are still assumed to be independent among obligors.

Since individual losses are mutually independent, Gλ(z) is simply the product of the individual loss PGF, as in Section III:

G_{\lambda}(z) = \prod_{n=1}^{N} G_{\lambda,n}(z). \qquad (9)

Replacing Gλ,n(z) in equation (9) with the expression from equation (8), we have:

G_{\lambda}(z) = \prod_{n=1}^{N} \exp\left[ p_n \left( z^{v_n} - 1 \right) \right] = \exp\left[ \sum_{n=1}^{N} p_n \left( z^{v_n} - 1 \right) \right].

Using the definition of the exposure bands, as presented in Section II, subsection E, Gλ(z) can be written as follows:

G_{\lambda}(z) = \exp\left[ \sum_{j=1}^{J} \sum_{n \,:\, v_n = v_j} p_n \left( z^{v_j} - 1 \right) \right].

This expression can be finally simplified using the definitions of the expected number of defaults in band j, μj, and the expected total number of defaults in the portfolio, μ:

G_{\lambda}(z) = \exp\left[ \sum_{j=1}^{J} \mu_j \left( z^{v_j} - 1 \right) \right] = \exp\left[ \mu \left( \sum_{j=1}^{J} \frac{\mu_j}{\mu} z^{v_j} - 1 \right) \right] \equiv G_{\lambda,\mathrm{FIXED}}(z). \qquad (10)

Equation (10) defines the probability distribution of the total loss on the portfolio when default events are mutually independent, default probabilities are known with certainty, and Poisson distributions are used to approximate the distributions of individual default events.
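Equation (10) can be inverted numerically by evaluating G_{λ,FIXED}(z) at points on the unit circle and applying an inverse FFT. This anticipates the FFT-based ideas of Section X; the helper `loss_dist_fixed` and the grid size are our own choices, and `size` must exceed any loss level with non-negligible probability:

```python
import numpy as np

def loss_dist_fixed(mu_bands, size):
    """Invert G_FIXED(z) = exp(sum_j mu_j * (z^j - 1)) numerically:
    evaluate it at the grid points z_m = exp(-2*pi*i*m/size) and
    recover the loss probabilities with an inverse FFT.
    mu_bands[j-1] holds mu_j for band j."""
    z = np.exp(-2j * np.pi * np.arange(size) / size)
    G = np.exp(sum(mj * (z**j - 1) for j, mj in enumerate(mu_bands, start=1)))
    return np.fft.ifft(G).real

dist = loss_dist_fixed([0.05, 0.02], size=16)
print(dist[0])   # approx exp(-0.07)
```

With μ_1 = 0.05 and μ_2 = 0.02, the probability of zero loss is exp(−μ) = exp(−0.07), which the inversion reproduces to machine precision.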

VI. Model 3: The Model with Random Default Probabilities

In this section we present the full-fledged version of Credit Risk+. We do so by removing the assumptions used so far, that is, that the default probabilities are known with certainty, and that default events are unconditionally mutually independent.

Instead, we assume that individual default probabilities are random, and are influenced by a common set of Gamma-distributed systematic risk factors. Consequently, the default events are assumed to be mutually independent only conditional on the realizations of the risk factors. Under these assumptions, the use of the Poisson approximation still allows us to obtain an analytical solution for the loss distribution.

The alternative, using the Bernoulli defaults, would be to compute the loss distribution by Monte Carlo simulation. We provide an example of this procedure in Section XI, and report the results in Figure 4.

Let us assume that there are K risk factors indexed by k. Each factor k is associated with a “sector” (an industry or a geographic zone, for example) and is represented by a random variable γ_k. The probability distribution of γ_k is assumed to be a Gamma distribution with shape parameter α_k = 1/σ_k² and scale parameter β_k = σ_k². Therefore, the mean and the variance are given by

E(\gamma_k) = \alpha_k \beta_k = 1, \qquad \mathrm{Var}(\gamma_k) = \alpha_k \beta_k^2 = \sigma_k^2.

The moment generating function (MGF) of γk is the function defined by:

M_{\gamma,k}(z) = E\left[ \exp(\gamma_k z) \right] = \left( 1 - \beta_k z \right)^{-\alpha_k} = \left( 1 - \sigma_k^2 z \right)^{-1/\sigma_k^2}.

The random variables γk are assumed to be mutually independent. In addition, the default probability of obligor n is assumed to be given by the following model:

p_n = \bar{p}_n \left( \sum_{k=1}^{K} \gamma_k \omega_{k,n} \right), \qquad (11)

where p̄_n is the average default probability of obligor n, and ω_{k,n} is the share of obligor n's exposure in sector k (or the share of obligor n's debt that is exposed to factor k). According to this model, the default probability of obligor n is a random variable with mean p̄_n.

It is important to note that exposure to common risk factors introduces unconditional correlation between individual default events. Consequently, default events are no longer unconditionally mutually independent.7 However, individual default events—and therefore individual losses—are assumed to be mutually independent conditional on the set of factors γ = (γ1,…,γk,…,γK).

As in Sections IV and V, Dn is assumed to follow a Poisson distribution with intensity pn, and the PGF of the individual normalized loss λn, is assumed to be

G_{\lambda,n}(z) = \exp\left[ p_n \left( z^{v_n} - 1 \right) \right].

Using the multi-factor model for pn defined in equation (11), Gλ,n (z) can be written as follows:

G_{\lambda,n}(z) = \exp\left[ \bar{p}_n \left( \sum_{k=1}^{K} \gamma_k \omega_{k,n} \right) \left( z^{v_n} - 1 \right) \right] = \prod_{k=1}^{K} \exp\left[ \bar{p}_n \gamma_k \omega_{k,n} \left( z^{v_n} - 1 \right) \right].

Let G_λ(z|γ) be the PGF of the total normalized loss, conditional on γ. Since individual losses are mutually independent conditional on γ, G_λ(z|γ) is simply the product of the individual loss PGFs conditional on γ:

G_{\lambda}(z \mid \gamma) = \prod_{n=1}^{N} G_{\lambda,n}(z \mid \gamma) = \prod_{n=1}^{N} \prod_{k=1}^{K} \exp\left[ \bar{p}_n \gamma_k \omega_{k,n} \left( z^{v_n} - 1 \right) \right] = \exp\left[ \sum_{n=1}^{N} \sum_{k=1}^{K} \bar{p}_n \gamma_k \omega_{k,n} \left( z^{v_n} - 1 \right) \right].

If we define Pk(z) as:

P_k(z) = \sum_{n=1}^{N} \bar{p}_n \omega_{k,n} \left( z^{v_n} - 1 \right), \qquad (12)

then G_λ(z|γ) can be written as

G_{\lambda}(z \mid \gamma) = \exp\left[ \sum_{k=1}^{K} \gamma_k P_k(z) \right].

Let E_γ denote the expectation operator under the probability distribution of γ, and let E_{γ,k} denote the expectation operator under the probability distribution of γ_k. The unconditional PGF of the total normalized loss, denoted by G_λ(z), is the expectation of G_λ(z|γ) under γ's probability distribution:

G_{\lambda}(z) = E_{\gamma}\left[ G_{\lambda}(z \mid \gamma) \right] = E_{\gamma}\left\{ \exp\left[ \sum_{k=1}^{K} \gamma_k P_k(z) \right] \right\}.

Define P(z) = [P1(z),…, Pk(z),…,PK(z)]. Using the definition of the joint MGF of γ, Mγ, we can write Gλ(z) as follows:

G_{\lambda}(z) = E_{\gamma}\left\{ \exp\left[ P(z) \cdot \gamma \right] \right\} = M_{\gamma}\left[ P(z) \right]. \qquad (13)

It is important to note that this equation does not rely on the assumption that the γ_k are mutually independent.

Let us consider now a vector ζ = (ζ_1,…,ζ_k,…,ζ_K) of auxiliary variables such that 0 ≤ ζ_k < 1 for all k = 1,…,K. The joint MGF of the vector γ is given by:

M_{\gamma}(\zeta) = E_{\gamma}\left[ \exp(\zeta \cdot \gamma) \right] = E_{\gamma}\left[ \exp\left( \sum_{k=1}^{K} \zeta_k \gamma_k \right) \right] = E_{\gamma}\left[ \prod_{k=1}^{K} \exp(\zeta_k \gamma_k) \right]. \qquad (14)

Recall that the MGF of γk is defined by:

M_{\gamma,k}(\zeta_k) = E_{\gamma,k}\left[ \exp(\gamma_k \zeta_k) \right] = \left( 1 - \sigma_k^2 \zeta_k \right)^{-1/\sigma_k^2}.

Since variables γk are mutually independent, Mγ (ζ) can be rewritten as follows:

M_{\gamma}(\zeta) = \prod_{k=1}^{K} E_{\gamma,k}\left[ \exp(\zeta_k \gamma_k) \right] = \prod_{k=1}^{K} M_{\gamma,k}(\zeta_k).

Therefore, setting ζk =Pk (z), Gλ (z) is given by

G_{\lambda}(z) = \prod_{k=1}^{K} M_{\gamma,k}\left[ P_k(z) \right] = \prod_{k=1}^{K} \left[ 1 - \sigma_k^2 P_k(z) \right]^{-1/\sigma_k^2}. \qquad (15)

Equation (15) can be further transformed as follows:

G_{\lambda}(z) = \exp\left[ \ln\left( G_{\lambda}(z) \right) \right] = \exp\left\{ -\sum_{k=1}^{K} \frac{1}{\sigma_k^2} \ln\left[ 1 - \sigma_k^2 P_k(z) \right] \right\} \equiv G_{\lambda,\mathrm{CR+}}(z).

Hence, we obtained the PGF of the total normalized loss on the portfolio for the Credit Risk+ model, Gλ,CR+ (z). It is worth noting that Gλ,CR+ (z) was obtained under the following assumptions: default probabilities are random; the factors that determine default probabilities are Gamma-distributed and mutually independent; and default events are mutually independent conditional on these factors.
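The same unit-circle evaluation idea used for the fixed-probability model gives a compact numerical sketch for G_{λ,CR+}(z). The function `loss_dist_crplus` and its argument layout are our own, and `size` must exceed any loss level with non-negligible probability:

```python
import numpy as np

def loss_dist_crplus(v, pbar, w, sigma, size):
    """Invert the Credit Risk+ PGF on the FFT grid.
    v: normalized exposures; pbar: mean default probabilities;
    w[k][n]: weight of obligor n on factor k; sigma: factor std devs."""
    z = np.exp(-2j * np.pi * np.arange(size) / size)
    logG = np.zeros(size, dtype=complex)
    for k, sk in enumerate(sigma):
        Pk = sum(pbar[n] * w[k][n] * (z**v[n] - 1) for n in range(len(v)))
        logG -= np.log(1 - sk**2 * Pk) / sk**2   # -(1/s_k^2) ln(1 - s_k^2 P_k)
    return np.fft.ifft(np.exp(logG)).real

# with one factor of negligible variance this collapses to the
# fixed-probability model: P(loss = 0) approaches exp(-0.05)
dist = loss_dist_crplus([1], [0.05], [[1.0]], [1e-4], size=32)
print(dist[0])
```

The limiting check in the last lines is a useful sanity test: as σ_k → 0 the Gamma factor degenerates to 1, and the Credit Risk+ distribution converges to the distribution of Model 2.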

VII. The Latent Factors Assumption

In Credit Risk+ the factors γk are treated as latent variables. That is to say that the factors that influence the default probabilities are assumed to be unobservable. It is further assumed that the means and standard deviations of the default probabilities, as well as their sensitivities to the latent factors, are known or can be estimated without using observations of the factors γk.

The latent factors approach is required by the linear multi-factor model for the default probabilities (equation (11)). This model has the advantage of enabling the derivation of an analytical solution for the loss distribution. However, unlike a probit or logit model, this model has little empirical relevance, because it allows default probabilities to exceed 1 (albeit with very low probability). If equation (11) were to be estimated using empirical observations of the factors γk, it is likely that the default probabilities implied by the econometric model would often fall outside the interval [0,1]. To avoid having to deal with unreasonable values for the default probabilities, instead of estimating equation (11) using observations of the factors γk, these factors are treated as unobserved latent variables.

In Credit Risk+, the means of the latent factors can be normalized to one without loss of generality. As a result, the latent factors only play a role via their standard deviations after normalization (σk, k = 1,…,K), which can be estimated from the means, standard deviations and sensitivities of the default probabilities.

Therefore, when Credit Risk+ is implemented in practice, the inputs of the model are: the mean individual default probabilities, denoted by p¯n, n = 1,…,N; the standard deviations of the individual default probabilities, denoted by σn, n = 1,…,N; and the matrix of weights {ωk,n} for k = 1…K, n = 1…N, representing the exposure of each obligor n to each factor k. Then, the σk are estimated from the σn.

Let us assume that some of the obligors in the portfolio are exposed to a single factor. For example, consider an obligor n who is exposed only to factor κ:

\omega_{\kappa,n} = 1, \qquad \omega_{k,n} = 0 \ \text{ for } k \neq \kappa.

The default probability of this obligor is given by

p_n = \bar{p}_n \gamma_{\kappa}.

Therefore,

\sigma_n = \bar{p}_n \sigma_{\kappa}.

Note that this equation implies that the ratio σ_n/p̄_n is the same for all obligors such that ω_{κ,n} = 1. Since the relationship holds for any obligor n with ω_{κ,n} = 1, σ_κ can be computed from the σ_n by averaging:

\sigma_{\kappa} = \frac{1}{N_{\kappa}} \sum_{n \,:\, \omega_{\kappa,n} = 1} \frac{\sigma_n}{\bar{p}_n},

where N_κ is the number of obligors such that ω_{κ,n} = 1. In the case where all obligors are exposed to more than one factor, the original Credit Risk+ implementation uses weighted averages of σ_n to estimate σ_k:

\sigma_k = \frac{1}{\mu_k} \sum_{n=1}^{N} \omega_{k,n} \sigma_n = \frac{1}{\mu_k} \sum_{n=1}^{N} \omega_{k,n} \bar{p}_n \frac{\sigma_n}{\bar{p}_n}, \qquad \text{with} \quad \mu_k = \sum_{n=1}^{N} \omega_{k,n} \bar{p}_n. \qquad (16)

This expression is actually not consistent with the linear multi-factor model for individual default probabilities used in Credit Risk+, and it results in an underestimation of σ_k. However, in practice, it gives reasonably accurate results, as will be shown in Section XI.

An alternative is to use least squares to estimate σ_k from σ_n. According to the multi-factor model for the default probabilities, the σ_k² are solutions of the following linear system:

\sigma_n^2 = \bar{p}_n^2 \left( \sum_{k=1}^{K} \sigma_k^2 \omega_{k,n}^2 \right), \qquad n = 1, \dots, N.

If N ≥ K, then a solution to this system can be found using constrained least squares. However, there is no way to guarantee that σk2, k = 1…K will all be strictly positive. The least squares method and the weighted average method are compared in Section XI.
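Both estimation methods can be sketched as follows. This is a hypothetical illustration: the helper names and the three-obligor example are ours, and we use plain least squares with clipping of negative solutions rather than any specific constrained solver:

```python
import numpy as np

def sigma_weighted(pbar, sig_n, w):
    """Weighted-average estimator of equation (16)."""
    mu_k = w @ pbar                       # mu_k = sum_n w_kn * pbar_n
    return (w @ sig_n) / mu_k

def sigma_lsq(pbar, sig_n, w):
    """Least squares on sigma_n^2 = pbar_n^2 * sum_k sigma_k^2 * w_kn^2.
    Nothing forces the fitted sigma_k^2 to be positive, so negative
    values are clipped to zero before taking the square root."""
    A = (w.T**2) * (pbar[:, None]**2)     # N x K design matrix
    sk2, *_ = np.linalg.lstsq(A, sig_n**2, rcond=None)
    return np.sqrt(np.clip(sk2, 0.0, None))

# hypothetical 3-obligor, 2-factor portfolio consistent with the model
w = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])           # w[k, n]
pbar = np.array([0.02, 0.03, 0.05])
true_sigma_k = np.array([0.5, 0.7])
sig_n = pbar * np.sqrt((w.T**2) @ true_sigma_k**2)
print(sigma_lsq(pbar, sig_n, w))          # recovers [0.5, 0.7]
print(sigma_weighted(pbar, sig_n, w))     # biased low
```

On data generated exactly from the multi-factor model, the least-squares estimator recovers the true factor volatilities, while the weighted-average estimator of equation (16) comes out below them, in line with the underestimation noted above.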

VIII. Model 4: Extension of Credit Risk+ with Correlated Factors

One of the limitations of Credit Risk+ is the assumption that the factors determining the default probabilities are mutually independent. In this section, following the approach developed by Giese (2003), we allow for some form of correlation among factors.

In Credit Risk+, the k-th factor γ_k follows a Gamma distribution with non-random shape and scale parameters (α_k and β_k, respectively). In other words, Credit Risk+ assumes that the means and variances of the γ_k are non-random. Giese (2003) introduces an additional factor, which will be denoted by Γ, that affects the distributions of all the γ_k, thereby introducing some correlation between the factors. Γ is assumed to follow a Gamma distribution with shape parameter α = 1/σ² and scale parameter β = σ². Consequently, the mean and variance are given by

E(\Gamma) = \alpha \beta = 1, \qquad \mathrm{Var}(\Gamma) = \alpha \beta^2 = \sigma^2.

Now variables γk are assumed to be mutually independent conditional on Γ.

The probability distribution of γk is still a Gamma distribution with shape parameter αk and scale parameter βk. However, while βk is still constant, αk is now a function of Γ, and hence a random variable,

\alpha_k = \Gamma \bar{\alpha}_k,

where ᾱ_k is a constant.

Let EΓ denote the expectation under Γ’s distribution, and let Eγ,k denote the expectation under γk ‘s distribution conditional on Γ. Similarly, let VarΓ denote the variance under Γ ’s distribution, and let Varγ,k denote the variance under γk ’s distribution conditional on Γ.

The expectation of γk conditional on Γ is given by:

E_{\gamma,k}(\gamma_k \mid \Gamma) = \alpha_k \beta_k = \Gamma \bar{\alpha}_k \beta_k.

The unconditional expectation of γk is given by:

E(\gamma_k) = E_{\Gamma}\left[ E_{\gamma,k}(\gamma_k \mid \Gamma) \right] = E_{\Gamma}(\alpha_k \beta_k) = E_{\Gamma}(\Gamma \bar{\alpha}_k \beta_k).

Since ᾱ_k and β_k are not random, E(γ_k) can be rewritten as follows:

E(\gamma_k) = \bar{\alpha}_k \beta_k E_{\Gamma}(\Gamma) = \bar{\alpha}_k \beta_k.

The unconditional expectation of γ_k is assumed, without loss of generality, to be equal to 1, so that β_k = 1/ᾱ_k. The variance of γ_k conditional on Γ is given by

\sigma_k^2 = \mathrm{Var}_{\gamma,k}(\gamma_k \mid \Gamma) = \alpha_k \beta_k^2 = \Gamma \bar{\alpha}_k \beta_k^2 = \Gamma / \bar{\alpha}_k.

The unconditional variance of γk is given by

\mathrm{Var}(\gamma_k) = E_{\Gamma}\left[ \mathrm{Var}_{\gamma,k}(\gamma_k \mid \Gamma) \right] + \mathrm{Var}_{\Gamma}\left[ E_{\gamma,k}(\gamma_k \mid \Gamma) \right] = E_{\Gamma}\left( \frac{\Gamma}{\bar{\alpha}_k} \right) + \mathrm{Var}_{\Gamma}\left( \Gamma \bar{\alpha}_k \beta_k \right) = \beta_k E_{\Gamma}(\Gamma) + \left( \bar{\alpha}_k \beta_k \right)^2 \mathrm{Var}_{\Gamma}(\Gamma) = \beta_k + \sigma^2.

The unconditional covariance between any two factors γ_k and γ_l is given by

\mathrm{Cov}(\gamma_k, \gamma_l) = E(\gamma_k \gamma_l) - E(\gamma_k) E(\gamma_l) = E_{\Gamma}\left[ E_{\gamma,k,l}(\gamma_k \gamma_l \mid \Gamma) \right] - E(\gamma_k) E(\gamma_l).

Since γ_k and γ_l are independent conditional on Γ, this expression becomes:

\mathrm{Cov}(\gamma_k, \gamma_l) = E_{\Gamma}\left[ E_{\gamma,k}(\gamma_k \mid \Gamma) \, E_{\gamma,l}(\gamma_l \mid \Gamma) \right] - E(\gamma_k) E(\gamma_l) = \bar{\alpha}_k \beta_k \bar{\alpha}_l \beta_l \left[ E(\Gamma^2) - 1 \right] = \bar{\alpha}_k \beta_k \bar{\alpha}_l \beta_l \left[ \mathrm{Var}(\Gamma) + E(\Gamma)^2 - 1 \right].

Since ᾱ_kβ_k = ᾱ_lβ_l = 1 and E(Γ)² = 1, the unconditional covariance between γ_k and γ_l is simply given by

\mathrm{Cov}(\gamma_k, \gamma_l) = \sigma^2 > 0.
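This covariance result can be verified by Monte Carlo simulation. The sketch below is illustrative only; the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.5                       # variance of the common factor Gamma
abar = (2.0, 4.0)                  # alpha_bar_k; beta_k = 1 / alpha_bar_k

# common factor with mean 1 and variance sigma2
G = rng.gamma(shape=1.0 / sigma2, scale=sigma2, size=1_000_000)

# conditional on G, gamma_k ~ Gamma(shape = G * abar_k, scale = 1 / abar_k)
g1 = rng.gamma(shape=G * abar[0], scale=1.0 / abar[0])
g2 = rng.gamma(shape=G * abar[1], scale=1.0 / abar[1])

cov = np.cov(g1, g2)[0, 1]
print(cov)   # close to sigma2 = 0.5
```

The empirical covariance of the two sector factors matches σ², regardless of the (distinct) values of ᾱ_k, as the derivation above predicts.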

Consider a vector Z = (z1,…,zk,…,zK) of auxiliary variables such that 0 ≤ zk < 1 for k = 1,…,K. The joint MGF of γ is the function Mγ defined by

M_{\gamma}(Z) = E\left[ \exp\left( \sum_{k=1}^{K} \gamma_k z_k \right) \right].

Using the properties of conditional expectations, Mγ (Z) can be rewritten as follows:

M_{\gamma}(Z) = E_{\Gamma}\left\{ E_{\gamma}\left[ \exp\left( \sum_{k=1}^{K} \gamma_k z_k \right) \Bigm| \Gamma \right] \right\},

where Eγ is the expectation under γ’s joint distribution.

Since the γk s are mutually independent conditional on Γ, Mγ (Z) can be further rewritten as follows:

M_{\gamma}(Z) = E_{\Gamma}\left[ \prod_{k=1}^{K} E_{\gamma,k}\left[ \exp(\gamma_k z_k) \mid \Gamma \right] \right] = E_{\Gamma}\left[ \prod_{k=1}^{K} M_{\gamma,k}(z_k \mid \Gamma) \right],

where M_{γ,k}(·|Γ) is the MGF of γ_k conditional on Γ:

M_{\gamma,k}(z \mid \Gamma) = E_{\gamma,k}\left[ \exp(\gamma_k z) \mid \Gamma \right] = \left( 1 - \beta_k z \right)^{-\alpha_k} = \left( 1 - \beta_k z \right)^{-\Gamma \bar{\alpha}_k}.

M_γ(Z) can now be rewritten as follows:

M_{\gamma}(Z) = E_{\Gamma}\left\{ \exp\left( \ln\left[ \prod_{k=1}^{K} M_{\gamma,k}(z_k \mid \Gamma) \right] \right) \right\} = E_{\Gamma}\left\{ \exp\left[ \sum_{k=1}^{K} \ln M_{\gamma,k}(z_k \mid \Gamma) \right] \right\} = E_{\Gamma}\left\{ \exp\left[ -\sum_{k=1}^{K} \Gamma \bar{\alpha}_k \ln(1 - \beta_k z_k) \right] \right\} = E_{\Gamma}\left\{ \exp\left[ -\Gamma \sum_{k=1}^{K} \frac{1}{\beta_k} \ln(1 - \beta_k z_k) \right] \right\}.

Define a new auxiliary variable t :

t = -\sum_{k=1}^{K} \frac{1}{\beta_k} \ln(1 - \beta_k z_k).

Using t, Mγ (Z) can be rewritten as the MGF of Γ, denoted by MΓ:

M_{\gamma}(Z) = E_{\Gamma}\left[ \exp(\Gamma t) \right] = M_{\Gamma}(t) = \left( 1 - \sigma^2 t \right)^{-1/\sigma^2}.

Replacing t with its expression gives us the final expression for Mγ (Z):

M_{\gamma}(Z) = \exp\left\{ -\frac{1}{\sigma^2} \ln\left[ 1 + \sigma^2 \sum_{k=1}^{K} \frac{1}{\beta_k} \ln(1 - \beta_k z_k) \right] \right\}.

Recall equation (13) established in Section VI:

G_{\lambda}(z) = M_{\gamma}\left[ P(z) \right].

This equation is still valid in the context of this section and, when Mγ is replaced with its new expression, it gives us the PGF of the total normalized loss in the model with correlated sectors

G_{\lambda}(z) = \exp\left\{ -\frac{1}{\sigma^2} \ln\left[ 1 + \sigma^2 \sum_{k=1}^{K} \frac{1}{\beta_k} \ln\left( 1 - \beta_k P_k(z) \right) \right] \right\} \equiv G_{\lambda,\mathrm{CORR}}(z),

with:

P_k(z) = \sum_{n=1}^{N} \bar{p}_n \omega_{k,n} \left( z^{v_n} - 1 \right).
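As with the uncorrelated model, G_{λ,CORR}(z) can be inverted numerically on the FFT grid. The helper `loss_dist_corr` and the small test portfolio are our own sketch:

```python
import numpy as np

def loss_dist_corr(v, pbar, w, beta, sigma2, size):
    """Invert the correlated-sector PGF G_CORR(z) on the FFT grid.
    beta[k] = 1 / alpha_bar_k; sigma2 is the variance of the common factor."""
    z = np.exp(-2j * np.pi * np.arange(size) / size)
    t = np.zeros(size, dtype=complex)   # inner sum: sum_k (1/beta_k) ln(1 - beta_k P_k)
    for k, bk in enumerate(beta):
        Pk = sum(pbar[n] * w[k][n] * (z**v[n] - 1) for n in range(len(v)))
        t += np.log(1 - bk * Pk) / bk
    G = np.exp(-np.log(1 + sigma2 * t) / sigma2)
    return np.fft.ifft(G).real

dist = loss_dist_corr([1, 2], [0.03, 0.04],
                      [[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5],
                      sigma2=0.3, size=64)
print(dist[:3])
```

A quick sanity check is that the recovered probabilities are non-negative and sum to one, since P_k(1) = 0 implies G_{λ,CORR}(1) = 1.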

IX. Model Summary

In this section we provide a summary presentation for the loss distributions of the various models described so far. The following equations show the four expressions for the PGF of the normalized loss on the portfolio. Each expression corresponds to the PGF of a particular model.

Model 1: G_{\lambda,B}(z) = \mathrm{IFFT}\!\left( \prod_{n=1}^{N} \mathrm{FFT}\left[ G_{\lambda,n}(z) \right] \right).

Model 2: G_{\lambda,\mathrm{FIXED}}(z) = \exp\left[ \mu \left( \sum_{j=1}^{J} \frac{\mu_j}{\mu} z^{v_j} - 1 \right) \right].

Model 3: G_{\lambda,\mathrm{CR+}}(z) = \exp\left\{ -\sum_{k=1}^{K} \frac{1}{\sigma_k^2} \ln\left[ 1 - \sigma_k^2 P_k(z) \right] \right\}.

Model 4: G_{\lambda,\mathrm{CORR}}(z) = \exp\left\{ -\frac{1}{\sigma^2} \ln\left[ 1 + \sigma^2 \sum_{k=1}^{K} \frac{1}{\beta_k} \ln\left( 1 - \beta_k P_k(z) \right) \right] \right\}.

  • G_{λ,B}(z) is the PGF of the normalized loss on the portfolio in the “basic” model with Bernoulli default events and non-random default probabilities;

  • G_{λ,FIXED}(z) is the PGF of the normalized loss on the portfolio in the model with fixed default probabilities and Poisson-approximated default events;

  • Gλ,CR+(z) is the PGF of the normalized loss on the portfolio in the model with random probabilities and mutually independent factors—this is the Credit Risk+ model;

  • G_{λ,CORR}(z) is the PGF of the normalized loss on the portfolio in the model with random probabilities and correlated factors—this is an extension of the Credit Risk+ model.

The basic model with Bernoulli default events and non-random default probabilities is the only one that does not rely on the Poisson approximation. Both G_{λ,B}(z) and G_{λ,FIXED}(z) were derived under the assumption that default probabilities are non-random. The Poisson approximation is the only source of discrepancy between the resulting loss distributions in these two models. Therefore, the accuracy of that approximation can be evaluated by comparing G_{λ,FIXED}(z) with G_{λ,B}(z).

X. Numerical Implementation

In this section we discuss numerical issues that arise in the implementation of the models described above, and then we present two algorithms that address these problems.

As explained in Section III, the PGF of the portfolio loss in the basic model with Bernoulli default events and fixed probabilities is a simple convolution of the individual loss PGFs. The models based on the Poisson approximation (including Credit Risk+) are implemented using more sophisticated algorithms.

The original algorithm proposed by CSFP (1997) to implement the Credit Risk+ model is based on a recursive formula known as the Panjer recursion.8 In the context of credit risk models, the usefulness of the Panjer recursion is limited by two numerical issues. Both issues arise from the fact that a computer does not have infinite precision.

First, the Panjer recursion cannot deal with arbitrarily large numbers of obligors: as the expected number of defaults in the portfolio increases, the computation of the first term of the recursion becomes increasingly imprecise; and above a certain value for the expected number of defaults in the portfolio, the value found for the first term of the recursion becomes meaningless.9

Second, the Panjer recursion is numerically unstable in the sense that numerical errors accumulate as more terms in the recursion are computed. This can result in significant errors in the upper-tail of the loss distribution, and hence, in the computation of the portfolio’s VaR.

This section briefly presents two alternative algorithms that can be used to implement Credit Risk+ and the other models based on the Poisson approximation.10

The first algorithm is an alternative recursive scheme introduced by Giese (2003). Haaf, Reis and Schoenmakers (2003) have shown that this algorithm is numerically stable, in the sense that precision errors are not propagated and amplified by the recursive formulas. This algorithm can provide a unified implementation of all versions of the model. However, like the Panjer recursion, this algorithm can fail for very large numbers of obligors.

The second algorithm is due to Melchiori (2004). It is based on the Fast Fourier Transform (FFT), and it can deal with very large numbers of obligors. It can easily be applied to the model with non-random default probabilities, and to the model with random probabilities and uncorrelated factors.

A. Alternative Recursive Scheme

Recall the definition of the PGF of the total normalized loss on the portfolio:

G_{\lambda}(z) = \sum_{x=0}^{\infty} P(\lambda = x) \, z^x.

Gλ (z) is a polynomial function of the auxiliary variable z. The coefficients of this polynomial are the probabilities associated to all the possible values for λ.

One way to implement a particular version of the model is to derive a polynomial representation for the corresponding version of Gλ (z), and to compute the coefficients of that polynomial.

Note that all three versions of G_λ(z) derived using the Poisson approximation involve exponential and/or logarithmic transformations of polynomials in z.11 The first step in the computation of G_{λ,FIXED}(z) is the computation of the coefficients of G_l(z). Similarly, the first step in the computation of G_{λ,CR+}(z) and G_{λ,CORR}(z) is the computation of the coefficients of P_k(z), for k = 1,…,K. Once these coefficients have been determined, exponential and logarithmic transformations of G_l(z) and/or P_k(z) must be computed.

Exponential and logarithmic transformations of polynomials can be computed using recursive formulas derived from the power series representations of the exponential and logarithm functions.

Consider two polynomials of degree x_max in z, Q(z) and R(z):

Q(z) = \sum_{x=0}^{x_{max}} q_x z^x, \qquad R(z) = \sum_{x=0}^{x_{max}} r_x z^x.

If R(z) = exp[Q(z)], the coefficients of R(z) can be computed using the following recursive formula:

r_0 = \exp(q_0), \qquad r_m = \sum_{s=1}^{m} \frac{s}{m}\, q_s\, r_{m-s}.

If R(z) =ln[Q(z)], the coefficients of R(z) can be computed using the following recursive formula:

r_0 = \ln(q_0), \qquad r_m = \frac{1}{q_0} \left[ q_m - \sum_{s=1}^{m-1} \frac{s}{m}\, r_s\, q_{m-s} \right].

These recursive formulas can be used to compute the coefficients of Gλ,FIXED(z), Gλ,CR+ (z) and Gλ,CORR (z).
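These transformations translate directly into code. The sketch below (a hypothetical Python illustration with NumPy; `poly_exp` and `poly_log` are illustrative names, not part of the paper's toolbox) implements the two recursions:

```python
import numpy as np

def poly_exp(q):
    """Coefficients of R(z) = exp[Q(z)]:
    r_0 = exp(q_0), r_m = sum_{s=1..m} (s/m) q_s r_{m-s}."""
    r = np.zeros(len(q))
    r[0] = np.exp(q[0])
    for m in range(1, len(q)):
        s = np.arange(1, m + 1)
        r[m] = np.sum((s / m) * q[s] * r[m - s])
    return r

def poly_log(q):
    """Coefficients of R(z) = ln[Q(z)]:
    r_0 = ln(q_0), r_m = (1/q_0)[q_m - sum_{s=1..m-1} (s/m) r_s q_{m-s}]."""
    r = np.zeros(len(q))
    r[0] = np.log(q[0])
    for m in range(1, len(q)):
        s = np.arange(1, m)
        r[m] = (q[m] - np.sum((s / m) * r[s] * q[m - s])) / q[0]
    return r
```

Since the two recursions invert each other, `poly_log(poly_exp(q))` should return `q` up to floating-point error, which gives a quick correctness check of an implementation.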

This algorithm produces accurate results for portfolios such that:

\sum_{n=1}^{N} \omega_{kn}\, \bar{p}_n < \mu_k^{max} \approx 750, \qquad k = 1, \ldots, K. \qquad (17)

For example, if K = 5, p̄_n = 0.05 for all obligors, and ω_{kn} = 0.2 for all obligors and all sectors, then the maximum number of obligors in the portfolio is approximately 75,000.

B. FFT-Based Algorithm

To describe the FFT-based algorithm proposed by Melchiori (2004), we will use the PGF of the portfolio loss for the model with Poisson default events and non-random default probabilities:

G_{\lambda,FIXED}(z) = \exp\left[\mu \left( \sum_{j=1}^{J} \frac{\mu_j}{\mu}\, z^{v_j} - 1 \right)\right].

Define the function Gl (z) as follows:

G_l(z) = \sum_{j=1}^{J} \frac{\mu_j}{\mu}\, z^{v_j}.

Note that Gl(z) can be interpreted as the PGF of a random variable l such that:

P(l = v_j) = \frac{\mu_j}{\mu} \equiv \pi_j.

Using the definition of Gl(z), Gλ,FIXED(z) can be rewritten as follows:

G_{\lambda,FIXED}(z) = \exp\{\mu [G_l(z) - 1]\}. \qquad (18)

The characteristic function of a random variable X is defined as:

\Phi_X(z) = E_X[\exp(iXz)] = G_X[\exp(iz)], \qquad 0 \le z < 1,

where EX is the expectation under X ’s probability distribution, and GX is the PGF of X.

Using equation (18) and this definition, we obtain the following expression for the characteristic function of the normalized portfolio loss:

\Phi_{\lambda,FIXED}(z) = G_{\lambda,FIXED}[\exp(iz)] = \exp\{\mu[G_l(\exp(iz)) - 1]\} = \exp\{\mu[\Phi_l(z) - 1]\}, \qquad (19)

where Фl (z) is the characteristic function of l.

The characteristic function of a random variable can also be defined as the Fourier transform of its density. This fact and equation (19) suggest a simple and efficient algorithm for computing the portfolio loss distribution.

Let Π denote the vector of probabilities representing the distribution of λ, and let π = (π_1, …, π_j, …, π_J) denote the vector of probabilities representing the distribution of l. Π can be computed as follows:

\Pi = IFFT\left[\exp\{\mu[FFT(\pi) - 1]\}\right],

where FFT is the Fast Fourier Transform, and IFFT is the Inverse Fast Fourier Transform.
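This formula is only a few lines of code. The sketch below (a hypothetical Python illustration assuming NumPy; `loss_distribution_fft` is an illustrative name) computes the loss distribution of the fixed-probability model on a zero-padded grid:

```python
import numpy as np

def loss_distribution_fft(mu, pi):
    """Pi = IFFT[exp(mu * (FFT(pi) - 1))].
    mu: total expected number of defaults; pi: severity distribution of l,
    zero-padded so that large losses do not wrap around the grid."""
    phi_l = np.fft.fft(pi)                # characteristic function of l
    phi_lam = np.exp(mu * (phi_l - 1.0))  # compound-Poisson CF, as in (19)
    return np.real(np.fft.ifft(phi_lam))
```

For instance, with μ = 2 and severities 1 and 2 carrying probabilities 0.6 and 0.4, the computed probability of zero loss equals exp(−μ), and the probability of a loss of 1 equals exp(−μ)·μ·π_1, as the compound-Poisson structure requires.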

This algorithm can easily be extended to the model with random default probabilities and uncorrelated factors. It has been successfully applied to a portfolio containing 679,000 obligors.

XI. Numerical Examples Using the Credit Risk Toolbox

We have implemented in MATLAB all the models discussed in this paper. In this section, we present a short description of the capabilities of the toolbox, and then we offer some numerical examples. To illustrate the models presented in this paper, we use the same portfolio as in the Credit Risk+ demonstration spreadsheet from CSFP.12 This portfolio is presented in Table 1.

Table 1.

Credit Suisse Financial Products Reference Portfolio

(In percent)


In order to run the basic model with fixed probabilities, whether the distribution of the default events is assumed to be Bernoulli or Poisson, one needs a set of individual exposures and a set of individual default probabilities.13 These correspond to the first two columns in Table 1. If data are only available in aggregate form for different classes of obligors, the programs can still be used by making additional assumptions. For example, obligors can be classified according to exposure, rating, or type (corporate vs. individual, for instance). When the average exposure and the average default probability are the only data available in each class, the program assumes that all obligors in a given class have the same exposure and the same default probability.

Computation time for the model with Bernoulli default events and fixed probabilities is determined by three variables: the number of obligors, the total normalized exposure, and the granularity of the exposure bands. The last two factors can be adjusted to reduce computation time, at the expense of accuracy. Generally speaking, the convolution can be computed efficiently in MATLAB as long as the number of obligors does not exceed a few thousand. This model has the advantage of being the most accurate when the default probabilities can be treated as non-random. As will be shown later in this section, this is an important consideration when the default probabilities are relatively high. The model of Section V with fixed probabilities and Poisson default events has been implemented using both algorithms presented in Section X.
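For concreteness, the convolution at the heart of the Bernoulli model can be sketched as follows (a hypothetical Python illustration, not the MATLAB toolbox itself; function names are illustrative). Each obligor contributes the PGF (1 − p_n) + p_n z^{v_n}, and multiplying these polynomials amounts to convolving their coefficient vectors:

```python
import numpy as np

def bernoulli_loss_distribution(exposures, probs):
    """Loss distribution with fixed probabilities and Bernoulli defaults.
    exposures: integer normalized exposures v_n; probs: probabilities p_n."""
    dist = np.array([1.0])  # start from a degenerate zero-loss distribution
    for v, p in zip(exposures, probs):
        g_n = np.zeros(v + 1)
        g_n[0], g_n[v] = 1.0 - p, p  # coefficients of (1 - p) + p z^v
        dist = np.convolve(dist, g_n)
    return dist

def credit_var(dist, alpha=0.99):
    """Smallest loss level whose cumulative probability reaches alpha."""
    return int(np.searchsorted(np.cumsum(dist), alpha))
```

The cost of each convolution grows with the total normalized exposure accumulated so far, which is why the number of obligors and the granularity of the exposure bands drive the computation time.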

When running the Credit Risk+ model as described in Sections VI and VII, one needs to have available the mean individual default probabilities, the standard deviations of the individual default probabilities, and the matrix of weights, representing the exposure of each obligor n to each factor k. As mentioned above, if individual data are not available, the programs can use aggregate data for different classes of obligors by making additional assumptions. Credit Risk+ has also been implemented using both algorithms presented in Section X.

When running the extended Credit Risk+ model described in Section VIII, one needs, in addition to the data required to implement Credit Risk+ (the mean individual default probabilities, the standard deviations of the individual default probabilities, and the matrix of weights), a value for the inter-sector covariance. As mentioned above, if individual data are not available, the programs can use aggregate data for different classes of obligors by making additional assumptions. The model with correlated factors of Section VIII has been implemented using the numerically stable recursive algorithm. Consequently, it presents a limitation on the total number of obligors it can handle.14

The exposures in Table 1 correspond to the E_n s in this paper; the mean default rates correspond to p_n in the case with non-random default probabilities, and to p̄_n in the case with random default probabilities; the standard deviations correspond to the values of σ_n in the models with random probabilities.

The toolbox treats the factors as latent variables, and the values for σk are estimated from the values of σn. Two estimation methods are compared: least squares (LS) on the one hand, and the weighted averages used by CSFP on the other.

The standard deviations and the sector weights are only required to implement the models with random default probabilities; the models with non-random default probabilities only use the exposures and the mean default probabilities. Also, in general, individual exposures and default probabilities are not required to run the model; aggregate data per class of obligor can be used instead by making additional assumptions.

The loss distribution for the CSFP sample portfolio has been computed using four models. The assumptions underlying these models are summarized in Table 2.

Table 2.

Summary of the Assumptions Used in the Different Models


Model 3 is CSFP’s Credit Risk+. By default, the CSFP implementation treats sector 1 as a special sector representing diversifiable idiosyncratic risk. This cannot be done in the model with correlated factors. Therefore, all the models discussed in this section treat sector 1 as a regular sector (which is also an option in CSFP’s Credit Risk+).

The portfolio loss distribution was also computed by Monte Carlo simulation under the following assumptions: Bernoulli default events, random default probabilities, and independent Gamma-distributed factors, with σk2s estimated by least squares.

This loss distribution computed by Monte Carlo simulation constitutes a benchmark. It can be used to assess the impact of the Poisson approximation and of the σk2 estimation method on the models’ accuracy.

A. Effects of Poisson Approximation: Non-Random Default Probabilities

To assess the impact of the Poisson approximation, we first compare models 1 and 2. The only difference between these two models is the distribution of default events: Bernoulli for model 1, Poisson for model 2. Figure 1 and Figure 2 present the portfolio’s loss distributions for these two models, and the VaR at the 99 percent level for each model.

Figure 1.

Model 1: Fixed Probabilities, Bernoulli Defaults

Citation: IMF Working Papers 2006, 134; 10.5089/9781451863949.001.A001

Figure 2.

Model 2: Fixed Probabilities, Poisson Defaults

The VaR is 8.67 percent larger in the model with Poisson defaults than in the model with Bernoulli defaults. This is the error introduced by the Poisson approximation for this particular portfolio with 25 obligors and with default probabilities ranging from 3 percent to 30 percent.

The Poisson approximation generally results in an overestimation of the VaR. This result has been observed for portfolios with very different structures.

An additional numerical experiment illustrates the relationship between the magnitude of the default probabilities and the error due to the Poisson approximation. This experiment uses the same exposures as in the CSFP portfolio, but assumes that all obligors have the same default probability. The ratio VaRb/VaRp, where VaRb is the VaR obtained when defaults follow a Bernoulli distribution and VaRp is the VaR obtained when defaults follow a Poisson distribution, is computed for values of the common default probability ranging from 1 to 30 percent. The results of this experiment are presented in Figure 3. One can see that VaRb/VaRp decreases steadily from 1 to 0.86 as the common default probability increases from 1 to 30 percent. In other words, as could be expected from the derivation of the Poisson approximation, the error due to this approximation increases with the magnitude of the default probability.

Figure 3.

Ratio of Bernoulli VaR to Poisson VaR for CSFP Portfolio

B. Random Default Probabilities, Uncorrelated Factors

This section compares the loss distributions for models 3a and 3b to the loss distribution computed by Monte Carlo simulation.

Models 3a and 3b only differ by the method used to estimate the σk s: model 3a uses the CSFP weighted average; model 3b uses least squares. With CSFP’s portfolio, the σk2 s estimated by least squares are all strictly positive. Both models assume Poisson default events and random default probabilities driven by Gamma distributed factors.

The Monte Carlo simulation was used to estimate the loss distribution of a model with random default probabilities driven by Gamma-distributed factors, but with Bernoulli default-events. The Monte Carlo simulation was required because there is no analytical solution for the loss distribution when Bernoulli defaults are combined with random default probabilities.

To perform the Monte Carlo simulation, the σk s were estimated from the σn s using least squares. 5000 random draws of the γk s were then performed to determine the pn s. For each set of γk s, 5000 random draws of the Dn s (the Bernoulli random variables representing default events) were then generated. Therefore, overall, 25 million random combinations were used for the Monte Carlo simulation. The loss distribution computed by Monte Carlo simulation is presented in Figure 4. The loss distributions for models 3a and 3b are presented in Figures 5 and 6, respectively.
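The simulation loop can be sketched as follows (a hypothetical Python illustration with made-up, small-scale inputs; the paper's actual run uses 5,000 factor draws times 5,000 default draws). Each Gamma factor is parameterized to have mean 1 and variance σk²:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_losses(exposures, p_bar, weights, sigma_k, n_factor=200, n_default=50):
    """Bernoulli defaults with random probabilities driven by independent
    Gamma factors. weights is N x K; gamma_k has mean 1, variance sigma_k**2."""
    losses = []
    for _ in range(n_factor):
        # Gamma(shape = 1/sigma^2, scale = sigma^2): mean 1, variance sigma^2
        gamma = rng.gamma(1.0 / sigma_k**2, sigma_k**2)
        # conditional default probabilities, truncated to [0, 1]
        p = np.clip(p_bar * (weights @ gamma), 0.0, 1.0)
        # n_default Bernoulli draws of the default indicators D_n
        defaults = rng.random((n_default, len(p_bar))) < p
        losses.append(defaults @ exposures)
    return np.concatenate(losses)
```

The 99 percent VaR is then read off as the 99th percentile of the simulated losses, e.g. `np.quantile(mc_losses(...), 0.99)`.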

Figure 4.

Bernoulli Defaults, Random Probabilities, Monte Carlo Simulation

Figure 5.

Model 3a: Poisson Defaults, Uncorrelated Factors, Weighted Average

Figure 6.

Model 3b: Poisson Defaults, Uncorrelated Factors, Least Squares

The VaR for model 3b is 9 percent higher than the VaR computed by Monte Carlo simulation. This is further evidence that the Poisson approximation results in an overestimation of the VaR: the only difference between model 3b and the model used for the Monte Carlo simulations is the distribution of default events.

The VaR in model 3a (which does not use a rigorous method to estimate the σk s) is only 1 percent higher than the VaR computed by Monte Carlo simulation. This result is actually not surprising. Using weighted averages of the σn s to estimate the σk s leads to an underestimation of the σk s, and hence, to an underestimation of the VaR. However, this underestimation partially offsets the overestimation resulting from the Poisson approximation. Overall, using the simple method suggested by CSFP to estimate the σk s gives a value for the VaR that is very close to the value computed by Monte Carlo simulation.

Note that the VaR is higher in all the models with random default probabilities than in the models with non-random probabilities. This reflects the additional risk resulting from the uncertainty concerning the default probabilities.

C. Random Default Probabilities, Correlated Factors

In this section we present the portfolio loss distribution computed using model 4, that is, using the model with random default probabilities and correlated factors described in Section VIII.

The only difference between model 4 and model 3a is the presence of the common factor Γ. This common factor affects the distributions of the γk s. In particular, it introduces some correlation between these factors.

The portfolio loss distribution was computed for two different values of the variance of Γ (which is also the covariance between any two factors): 0.1 and 0.2. The results of these numerical experiments are presented in Figure 7 and Figure 8.

Figure 7.

Model 4: Correlated Sectors, Inter-Sector Covariance = 0.1

Figure 8.

Model 4: Correlated Sectors, Inter-Sector Covariance = 0.2

Not surprisingly, the VaR is higher in model 4 than in model 3a, and it increases with the variance of Γ. This reflects the additional risk of incurring a large loss resulting from the positive inter-sector correlation, as well as the increased uncertainty concerning the default probabilities.

XII. Conclusion

Each of the models presented here has specific features that make it useful in a particular situation. The basic model with known probabilities and Bernoulli-distributed default events is useful when there is little uncertainty concerning default probabilities, when default probabilities are relatively high, and when the portfolio does not contain more than a few thousand obligors.

At the cost of some approximations, Credit Risk+ and its extensions provide quasi-instantaneous solutions—even for very large portfolios—when default probabilities are influenced by a number of random latent factors. The alternative to using these models is to perform time-consuming Monte Carlo simulations. For our sample portfolio, the results of Credit Risk+ are very close to those of Monte Carlo simulations, even though this portfolio only contains 25 obligors with default probabilities as high as 30 percent.

Therefore, this paper provides a toolbox that can be used in the Financial Sector Assessment Program to determine several credit risk measures, including expected losses and credit VaR. The latter is the fundamental risk measure used to determine the economic capital required for a given portfolio. This measure will play an increasingly important role in the Basel II framework and in the IMF’s surveillance activities. In fact, in the future, the IMF will increasingly face the need to understand how the gap between regulatory and economic capital could be bridged. The instruments presented in this paper add a rigorous quantitative dimension to this process.

APPENDIX I

Probability and Moment Generating Functions

Consider a discrete random variable X that can take non-negative integer values. The Probability Generating Function (PGF) of X is the function G_X defined by

G_X(z) = E(z^X) = \sum_{x=0}^{\infty} P(X = x)\, z^x,

with 0 ≤ z < 1 if z is real, and |z| < 1 if z is complex. G_X(z) is simply a polynomial whose coefficients are given by the probability distribution of X. A PGF uniquely identifies a probability distribution: if two probability distributions have the same PGF, then they are identical.

Consider a second discrete random variable Y that can take non-negative integer values. If X and Y are independent, then the PGF of X + Y is given by

G_{X+Y}(z) = G_X(z)\, G_Y(z).
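Since multiplying two polynomials convolves their coefficient vectors, this property gives a direct way to compute the distribution of a sum of independent variables. A minimal sketch (illustrative distributions, not from the paper):

```python
import numpy as np

# PGF coefficients of X (uniform on {0, 1}) and Y (uniform on {0, 1, 2})
g_x = np.array([0.5, 0.5])
g_y = np.array([1.0, 1.0, 1.0]) / 3.0

# G_{X+Y}(z) = G_X(z) G_Y(z): polynomial product = coefficient convolution
g_sum = np.convolve(g_x, g_y)  # distribution of X + Y on {0, 1, 2, 3}
```

Here `g_sum` equals [1/6, 1/3, 1/3, 1/6], the exact distribution of X + Y.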

The Moment Generating Function (MGF) of X is the function defined by

M_X(z) = E(e^{zX}) = \sum_{x=0}^{\infty} P(X = x)\, e^{zx}.

As its name indicates, the MGF of X can be used to compute the moments of X. The m-th moment of X about the origin is given by the m-th derivative of M_X evaluated at 0. This implies in particular that E(X) = M'_X(0) and Var(X) = M''_X(0) - [M'_X(0)]^2.
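As a standard worked example (not taken from the paper), consider X Poisson-distributed with mean μ:

```latex
M_X(z) = E(e^{zX}) = e^{\mu(e^z - 1)}, \qquad
M'_X(z) = \mu e^z M_X(z), \qquad
M''_X(z) = (\mu e^z + \mu^2 e^{2z}) M_X(z),
```

so that E(X) = M'_X(0) = μ and Var(X) = M''_X(0) − [M'_X(0)]² = (μ + μ²) − μ² = μ, recovering the familiar fact that a Poisson variable has equal mean and variance.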

The joint MGF of two random variables X and Y is defined as:

M_{X,Y}(z_1, z_2) = E(e^{z_1 X + z_2 Y}),

where z1 and z2 are two auxiliary variables with the same properties as z.

If X and Y are two independent random variables, then the MGF of X + Y is given by

M_{X+Y}(z) = M_X(z)\, M_Y(z),

and the joint MGF of X and Y becomes:

M_{X,Y}(z_1, z_2) = E(e^{z_1 X})\, E(e^{z_2 Y}) = M_X(z_1)\, M_Y(z_2).

If z_1 = z_2 = z, then M_{X,Y}(z_1, z_2) = M_{X+Y}(z).

References

Austrian Financial Market Authority and Oesterreichische Nationalbank, 2004, “New Quantitative Models of Banking Supervision,” Vienna.

Crouhy, Michel, Dan Galai, and Robert Mark, 2000, “A Comparative Analysis of Current Credit Risk Models,” Journal of Banking & Finance, Vol. 24, pp. 59–117.

CSFP, 1997, “Credit Risk+: A Credit Risk Management Framework,” Credit Suisse First Boston.

Giese, Gotz, 2003, “Enhancing Credit Risk+,” Risk, Vol. 16, No. 4, pp. 73–77.

Gordy, Michael B., 2002, “Saddlepoint Approximation of Credit Risk+,” Journal of Banking and Finance, Vol. 26, pp. 1335–1353.

Haaf, Hermann, Oliver Reiss, and John Schoenmakers, 2003, “Numerically Stable Computation of Credit Risk+,” working paper, Weierstrass Institute, Berlin.

Koyluoglu, H. Ugur, and Andrew Hickman, 1998, “A Generalized Framework for Credit Risk Portfolio Models,” unpublished working paper.

Melchiori, Mario R., 2004, “Credit Risk+ by FFT,” working paper, Universidad Nacional del Litoral, Santa Fe, Argentina.

2. See Austrian Financial Market Authority and Oesterreichische Nationalbank (2004).

3. Probability concepts are reviewed in the appendix.

4. In a portfolio with high variation of the exposure levels, the smallest exposure could be chosen as the normalization factor.

5. See Section X for more details.

6. See Section XI for an example.

7. An approximate method for computing the correlation matrix of individual default events is presented in CSFP (1997).

9. A criterion to determine whether the Panjer recursion can be applied is provided below.

10. Gordy (2002) presents a third algorithm based on a saddlepoint approximation of the loss distribution. That algorithm only computes the cumulative distribution function of the loss distribution, not the probability distribution function. Furthermore, it is much more accurate for large portfolios than for small ones.

13. There are three main approaches to estimating the probabilities of default. One approach is to use historical frequencies of default events for different classes of obligors in order to infer the probability of default for each class. Alternatively, econometric models such as logit and probit could be used to estimate probabilities of default. Finally, when available, one could use probabilities implied by market rates and spreads.

14. See Section X, subsection A for more details.

Author: Kexue Liu, Jean Salvati, Mr. Renzo G Avesani, and Mr. Alin T Mirestean