Quantitative properties of sovereign default models: solution methods matter
  • Leonardo Martinez, Horacio Sapriza, and Juan Carlos Hatchondo
  • International Monetary Fund


Abstract

We study the sovereign default model that has been used to account for the cyclical behavior of interest rates in emerging market economies. This model is often solved using the discrete state space technique with evenly spaced grid points. We show that this method necessitates a large number of grid points to avoid generating spurious interest rate movements. This makes the discrete state technique significantly more inefficient than using Chebyshev polynomials or cubic spline interpolation to approximate the value functions. We show that the inefficiency of the discrete state space technique is more severe for parameterizations that feature a high sensitivity of the bond price to the borrowing level for the borrowing levels that are observed more frequently in the simulations. In addition, we find that the efficiency of the discrete state space technique can be greatly improved by (i) finding the equilibrium as the limit of the equilibrium of the finite-horizon version of the model, instead of iterating separately on the value and bond price functions and (ii) concentrating grid points in asset levels at which the bond price is more sensitive to the borrowing level and in levels that are observed more often in the model simulations. Our analysis questions the robustness of results in the sovereign default literature and is also relevant for the study of other credit markets.

I. Introduction

Business cycles in small emerging economies differ from those in developed economies. Emerging economies feature higher, more volatile, and countercyclical interest rates, higher output volatility, more countercyclical net exports, and higher consumption volatility relative to income volatility (see, for example, Aguiar and Gopinath (2007), Neumeyer and Perri (2005), and Uribe and Yue (2006)). The behavior of the domestic interest rate is considered an important factor that may account for these features (see, for example, Benjamin and Meza (2009), Neumeyer and Perri (2005), and Uribe and Yue (2006)). Thus, a state-dependent interest rate schedule is commonly used in emerging economy models. Some studies assume an exogenous interest rate schedule.1 In contrast, models of sovereign default provide microfoundations for the interest rate schedule based on the risk of default. Aguiar and Gopinath (2006) and Arellano (2008) were the first studies to extend the model in Eaton and Gersovitz (1981) and use it for the analysis of business cycles in emerging economies.2 The model studied by Aguiar and Gopinath (2006) and Arellano (2008) needs to be solved using numerical methods. We show how the simulated behavior of the interest rate generated by the model can be significantly affected by approximation errors and discuss the performance of different numerical methods.

Aguiar and Gopinath (2006) and Arellano (2008) consider a small open economy that receives a stochastic endowment stream of a single tradable good. The government’s objective is to maximize the expected utility of private agents. Each period, the government makes two decisions. First, it decides whether to default on previously issued debt. Second, it decides how much to borrow or save. The government can borrow (save) by issuing (buying) one-period non-contingent bonds that are priced in a competitive market inhabited by risk-neutral investors. The cost of defaulting is given by an endowment loss and exclusion from capital markets.

Aguiar and Gopinath (2006) and Arellano (2008) solve the model using the discrete state space technique (hereafter referred to as DSS), which is also used in several other default studies. That is, they discretize the stochastic process for the endowment and restrict the sovereign to choose the optimal borrowing level from a discrete set of points. We solve the model using DSS with different grid specifications and using two interpolation methods: one approximates the value functions as the sum of Chebyshev polynomials and the other one approximates them using cubic splines. Using interpolation methods enables us to let the sovereign choose its optimal borrowing level from a continuous set and to allow for endowment realizations that do not lie on the grid.

While it is a theoretical possibility that DSS approximation errors influence simulation results, it had not been established whether these errors are significant enough to misguide the conclusions of the research agenda. We find that, to generate reliable results, the DSS technique requires a significantly larger number of grid points than the ones used in Aguiar and Gopinath (2006) and Arellano (2008). For instance, when the model is solved accurately, the standard deviation of the interest rate spread (the difference between the yield of government bonds and the yield of US government bonds) in the simulations is less than half of the values they report. In addition, when we solve the model in Aguiar and Gopinath (2006) using more accurate methods, the correlation between the spread and income is around -0.6 in their parameterization with shocks to the income level and 0.1 in their parameterization with shocks to the growth rate of income. In contrast, Aguiar and Gopinath (2006) report that this correlation is 0.5 in the first parameterization and -0.03 in the second one. Thus, our results cast doubt on their conclusion that income processes with shocks to the growth rate help models of sovereign default generate a countercyclical interest rate and, therefore, help replicate the positive correlation between the interest rate and the current account observed in the data.3

We report the relative performance of different numerical methods. We show that we are able to obtain robust results using cubic spline interpolation and Chebyshev collocation, and that the results obtained using DSS converge toward the ones obtained using interpolation methods as the number of DSS grid points increases. We find that using DSS with evenly spaced grid points (as done in most default studies) is significantly more inefficient than using interpolation methods. For instance, when solving the model for one of the parameterizations in Aguiar and Gopinath (2006), it takes less than 20 minutes to find a solution that is not affected by spurious spread volatility when the model is solved using cubic splines. It takes over 45 hours to find such a solution using DSS with evenly spaced grid points.

Our findings also indicate that DSS inefficiencies are less significant for parameterizations of the model that display a bond price function that is less sensitive to the borrowing level. Thus, DSS inefficiencies are less severe when the economy is assumed to be hit with shocks to the growth rate of income and even less so when the parameterization of the output cost of defaulting coincides with the one in Arellano (2008).4 This indicates that the efficiency of DSS can be improved by using grids for asset levels that concentrate points at levels for which the bond price is more sensitive to the borrowing level and at levels that are observed more often in the model simulations. We show this is true for the model studied in Aguiar and Gopinath (2006) and Arellano (2008).

In addition, we document that the DSS computation time can be decreased significantly by using a one-loop algorithm that iterates simultaneously on the value and the bond price functions instead of using an algorithm with two loops: one for the value functions and one for the bond price function. For example, we find that using DSS with a one-loop algorithm takes 31 seconds to solve for the baseline model in Arellano (2008), while using the two-loop algorithm takes 182 seconds. The comparison was performed using the convergence criteria and grid specifications that replicate her results. That difference in computation time would become more significant if one wanted to use the simulated method of moments to calibrate the model (as Arellano 2008 and many other default studies do) or if one wanted to use finer grids to mitigate approximation errors.

Even though our analysis focuses on the model studied by Aguiar and Gopinath (2006) and Arellano (2008), our findings may be relevant for other extensions of the baseline model. For example, we find that it is computationally costly to eliminate the significant distortions that DSS introduces in the behavior of the interest rate spread for the models presented in Hatchondo and Martinez (2009) and Hatchondo et al. (2007, 2009). Our analysis is also significant for the study of other credit markets. In quantitative studies of default, computation power is often a binding constraint that limits researchers’ ability to study more interesting frameworks.

The rest of the article proceeds as follows. Section 2 presents the model. Section 3 presents the parameterization we use. Section 4 discusses the computation. Section 5 presents the results we obtain with DSS and interpolation methods. Section 6 discusses the robustness of the results reported in Aguiar and Gopinath (2006) and Arellano (2008). Section 7 concludes.

II. The Model

We solve the model studied by Aguiar and Gopinath (2006) and Arellano (2008). They consider a small open economy that receives a stochastic endowment stream of a single tradable good. The endowment yt may be hit by a transitory and a permanent shock. Namely,

$y_t = A e^{z_t} \Gamma_t$, where $A$ is a constant, $z_t$ is the current realization of the transitory component, and $\Gamma_t$ denotes the current realization of the permanent component.

The variable $z_t$ follows an AR(1) process with long-run mean $\mu_z$ and autocorrelation coefficient $|\rho_z| < 1$:

$z_t = (1 - \rho_z)\mu_z + \rho_z z_{t-1} + \varepsilon_t^z$, where $\varepsilon_t^z \sim N(0, \sigma_z^2)$.

The stochastic process of the permanent component is represented by $\Gamma_t = g_t \Gamma_{t-1}$, where $g_t$ denotes the trend shock and $\ln(g_t) = (1 - \rho_g)\left(\ln(\mu_g) - m\right) + \rho_g \ln(g_{t-1}) + \varepsilon_t^g$, with $|\rho_g| < 1$, $\varepsilon_t^g \sim N(0, \sigma_g^2)$, and $m = \tfrac{1}{2}\,\sigma_g^2 / (1 - \rho_g^2)$.
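To make the driving process concrete, the following Python sketch simulates the endowment process defined above; the parameter values are illustrative placeholders rather than the calibration reported in Table 1.

```python
import numpy as np

def simulate_endowment(T, A=1.0, mu_z=0.0, rho_z=0.9, sigma_z=0.034,
                       mu_g=1.006, rho_g=0.17, sigma_g=0.03, seed=0):
    """Simulate y_t = A * exp(z_t) * Gamma_t with an AR(1) transitory shock and a trend shock."""
    rng = np.random.default_rng(seed)
    m = 0.5 * sigma_g**2 / (1.0 - rho_g**2)
    z, ln_g, Gamma = mu_z, np.log(mu_g) - m, 1.0
    y = np.empty(T)
    for t in range(T):
        z = (1.0 - rho_z) * mu_z + rho_z * z + rng.normal(0.0, sigma_z)
        ln_g = (1.0 - rho_g) * (np.log(mu_g) - m) + rho_g * ln_g + rng.normal(0.0, sigma_g)
        Gamma *= np.exp(ln_g)          # Gamma_t = g_t * Gamma_{t-1}
        y[t] = A * np.exp(z) * Gamma   # y_t = A e^{z_t} Gamma_t
    return y
```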

The government’s objective is to maximize the expected present discounted value of the representative agent’s future utility. The representative agent has CRRA preferences over consumption: $u(c) = \frac{c^{1-\gamma} - 1}{1 - \gamma}$, where $\gamma$ denotes the coefficient of relative risk aversion.

The government makes two decisions in every period. First, it decides whether to refuse to pay previously issued debt. Defaults imply a total repudiation of government debt. Second, the government decides how much to borrow or save for the following period.

Aguiar and Gopinath (2006) and Arellano (2008) assume that there are two costs of defaulting. First, the country is excluded from capital markets. In each period after the default period, the country regains access to capital markets with probability ψ ∊ [0, 1]. Second, if a country has defaulted on its debt, it faces an “output loss” of φ (y) in every period in which it is excluded from capital markets.

The government can choose to save or borrow using one-period bonds. The bond price is determined as follows. First, the government announces how many bonds it wants to issue—each bond consists of a promise to deliver one unit of the good in the next period. Then, foreign lenders offer a price at which they are willing to purchase these bonds. Finally, the government sells the bonds to the lenders who offer the highest price. Lenders can borrow or lend at the risk-free rate r, are risk neutral, and have perfect information regarding the economy’s endowment. Let b denote the government’s current position in bonds. A negative value of b denotes that the country was an issuer of bonds in the previous period. In equilibrium, lenders offer a price

$$q(b', z, \Gamma, g) = \frac{1}{1+r}\left[1 - \int\!\!\int d(b', z', g'\Gamma, g')\, F_Z(dz' \mid z)\, F_G(dg' \mid g)\right] \qquad (1)$$

that satisfies their zero-profit condition when the government issues $b'$ bonds, and the optimal default rule is represented by the indicator function $d(b, z, \Gamma, g)$. The default rule takes a value of 1 if it is optimal for the government to default, and takes a value of 0 otherwise.
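As a minimal illustration of the zero-profit condition in equation (1), the Python sketch below prices each issuance level given a default rule defined on discretized shocks; it is a simplified stand-in with a single shock, not the authors' code.

```python
import numpy as np

def bond_price(default_rule, prob_next, r=0.01):
    """
    default_rule: array (n_bprime, n_y') of 0/1 default decisions next period.
    prob_next:    array (n_y',) of probabilities of next-period shocks given today's state.
    Returns the price of each issuance level b' from the lenders' zero-profit condition:
    q(b') = (1 / (1 + r)) * (1 - expected default probability).
    """
    default_prob = default_rule @ prob_next     # integrate d(b', y') over next-period shocks
    return (1.0 - default_prob) / (1.0 + r)

# usage: three issuance levels and two income states
d = np.array([[0, 0], [1, 0], [1, 1]])
p = np.array([0.4, 0.6])
print(bond_price(d, p))                         # prices fall as default risk rises
```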

Let FZ and FG denote the cumulative distribution functions for z and g. The value function of a sovereign that has access to financial markets is given by

$$V(b, z, \Gamma, g) = \max_{d \in \{0,1\}} \left\{ (1-d)\, V_0(b, z, \Gamma, g) + d\, V_1(z, \Gamma, g) \right\}, \qquad (2)$$

where

$$V_1(z, \Gamma, g) = u(y - \phi(y)) + \beta \int\!\!\int \left[\psi V(0, z', g'\Gamma, g') + (1 - \psi) V_1(z', g'\Gamma, g')\right] F_Z(dz' \mid z)\, F_G(dg' \mid g) \qquad (3)$$

denotes the value function of an excluded sovereign, and

$$V_0(b, z, \Gamma, g) = \max_{b'} \left\{ u\big(y + b - q(b', z, \Gamma, g)\, b'\big) + \beta \int\!\!\int V(b', z', g'\Gamma, g')\, F_Z(dz' \mid z)\, F_G(dg' \mid g) \right\} \qquad (4)$$

denotes the Bellman equation when the sovereign has decided to pay back its debt.

Definition 1 A recursive equilibrium consists of the following elements:

  1. A set of value functions $V(b, z, \Gamma, g)$, $V_1(z, \Gamma, g)$, and $V_0(b, z, \Gamma, g)$;

  2. A set of policies for asset holdings $b'(b, z, \Gamma, g)$ and default decisions $d(b, z, \Gamma, g)$; and

  3. A bond price function $q(b', z, \Gamma, g)$, such that

    1. $V(b, z, \Gamma, g)$, $V_1(z, \Gamma, g)$, and $V_0(b, z, \Gamma, g)$ satisfy functional equations (2), (3), and (4), respectively;

    2. the default policy $d(b, z, \Gamma, g)$ solves problem (2), and the policy for asset holdings $b'(b, z, \Gamma, g)$ solves problem (4); and

    3. the bond price function $q(b', z, \Gamma, g)$ is given by equation (1).

III. Parameterization

We solve the model for three parameterizations. The first two parameterizations are the ones considered by Aguiar and Gopinath (2006), who assume that ϕ (y) = λy. The third parameterization is the one considered by Arellano (2008), who assumes that

$$\phi(y) = \begin{cases} y - \lambda & \text{if } y > \lambda \\ 0 & \text{if } y \le \lambda. \end{cases} \qquad (5)$$

The first two parameterizations correspond to Model I and Model II in Aguiar and Gopinath (2006). The first one corresponds to the case in which the economy is hit only with transitory shocks to the endowment level. The second one corresponds to the case in which the economy is hit only with shocks to the growth rate of income. The case considered in Arellano (2008) is denoted as Model III. Each period corresponds to a quarter. Parameter values are specified in Table 1.

Table 1:

Parameter values. Model I corresponds to the parameterization with only transitory shocks in Aguiar and Gopinath (2006). Model II corresponds to the parameterization with only trend shocks in Aguiar and Gopinath (2006). Model III corresponds to the parameterization in Arellano (2008).


IV. Computation

We solve the model numerically using value function iteration. The algorithms find the value functions V0 and V1. Following Aguiar and Gopinath (2006), we recast the Bellman equations in de-trended form in order to find the solutions for Models I and II. In those cases, all variables are normalized by μgyt–1. Since the government’s objective function may not be globally concave, when we solve the model using interpolation methods, we first find a candidate value for the optimal borrowing level using a global search procedure. That candidate value is then used as an initial guess in a non-linear optimization routine. When using interpolation methods, we use a first-order Taylor approximation to evaluate the value functions at endowment and asset levels outside the grids. Following previous default studies, we do not extrapolate when we use DSS. A more detailed explanation of the algorithm is presented in the appendix. Codes were compiled using Fortran 90 and were run in serial mode on a Unix platform using Intel Xeon 5160 processors with a speed of 3.0 GHz.
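The following Python fragment sketches the two-step optimization described above (a coarse global search followed by a local refinement); it uses SciPy's bounded scalar minimizer as a stand-in for the IMSL routine used in our Fortran code, and the objective passed in is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_borrowing(objective, b_min, b_max, n_candidates=200):
    """Maximize a (possibly non-concave) objective over b' in [b_min, b_max]."""
    grid = np.linspace(b_min, b_max, n_candidates)
    values = np.array([objective(b) for b in grid])
    i = int(values.argmax())                        # global search: best grid candidate
    lo = grid[max(i - 1, 0)]
    hi = grid[min(i + 1, n_candidates - 1)]
    # local refinement around the candidate (minimize the negative of the objective)
    res = minimize_scalar(lambda b: -objective(b), bounds=(lo, hi), method="bounded")
    return res.x, -res.fun
```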

Table 2 reports the grid specifications used in this paper. In order to compare the performance of different numerical methods, we report results obtained using various DSS grid specifications, one grid for Chebyshev collocation, and one grid for cubic spline interpolation. In Subsection G we show that the results obtained using Chebyshev collocation and spline interpolation are robust to using more grid points.

Table 2:

Grid specifications. The second column reports the grid specifications used in the codes that Aguiar and Gopinath, and Arellano made available. We could replicate the results presented in their papers using those grids. The third, fourth, and fifth columns describe the DSS grids used to illustrate how the imprecisions introduced by DSS are attenuated as the number of grid points increases. The last two columns describe the grid specifications used when the model is solved using Chebyshev collocation and cubic spline interpolation.


We either use evenly spaced DSS grids (as most default studies do) or we concentrate evenly spaced asset points in an intermediate range of the DSS grids, as noted in Table 2. We also use evenly spaced grids when we solve the model with cubic spline interpolation.

When solving Model III with interpolation methods, we use two grids for endowment levels, each with the same number of points. We use one grid for endowment levels lower than λ, and one for endowment levels higher than λ. Note that the derivative of the output cost of defaulting with respect to output presents a discontinuity at y = λ (see equation 5). Consequently, the function V1 displays a kink at y = λ.

Aguiar and Gopinath (2006) and Arellano (2008) do not report the exact DSS grid specifications they use, but we are able to infer that information from their codes. The second column of Table 2 presents the grids we use to replicate their results. We refer to these grids as the “original” grids.

V. Results

We first document how computation time can be decreased by using a one-loop algorithm that iterates simultaneously on the value and bond price functions. Then, we present simulation results and computation times obtained using the grids introduced in Table 2 (and one-loop algorithms). We show that the results we obtain using Chebyshev collocation are consistent with the results we obtain using cubic spline interpolation and that our DSS results converge toward our interpolation results as we increase the number of grid points and the width of the endowment grid. We later discuss inaccuracies introduced by inappropriate DSS grids. We also discuss the inefficiency of DSS compared with interpolation methods. At the end of the section, we show that our results with interpolation methods appear to be robust to increases in the number of grid points and we conduct the test proposed by den Haan and Marcet (1994) for evaluating the accuracy of numerical solutions.

A. One-loop and two-loop algorithms

In most default studies, models are solved using DSS and two loops: the outside loop iterates on the bond price function and the inside loop iterates on the value functions. Once convergence is attained in the value functions, the bond price function is updated using the optimal default decisions implied by the value functions.

We find that the computation time can be decreased significantly by using a one-loop algorithm that iterates simultaneously on the value and the bond price functions. For example, using DSS with our original grids for Model III, the one-loop algorithm takes 31 seconds and the two-loop algorithm takes 182 seconds to converge.5 The computation time per value function iteration is smaller with the two-loop algorithm (0.11 seconds vs. 0.13 seconds) because the bond price function is not updated in every iteration of the value function. But the number of iterations of the value function required by the two-loop algorithm to converge is significantly higher. The difference in computation time between the two algorithms would become more significant if we wanted to use the simulated method of moments to calibrate the model, as many default studies do.
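The sketch below contrasts the two iteration schemes in Python-style pseudocode; `update_values` and `update_price` are hypothetical placeholders for the DSS Bellman update and the zero-profit pricing step.

```python
import numpy as np

def max_abs_diff(a, b):
    return float(np.max(np.abs(np.asarray(a) - np.asarray(b))))

def solve_one_loop(V, q, update_values, update_price, tol=1e-6, max_iter=10_000):
    """Iterate simultaneously on the value functions and the bond price function."""
    for _ in range(max_iter):
        V_new = update_values(V, q)      # one Bellman update given the current price
        q = update_price(V_new)          # re-price bonds from the implied default rule
        if max_abs_diff(V_new, V) < tol:
            return V_new, q
        V = V_new
    raise RuntimeError("no convergence")

def solve_two_loop(V, q, update_values, update_price, tol=1e-6, max_iter=10_000):
    """Outer loop on the price; inner loop iterates the value functions to convergence."""
    for _ in range(max_iter):
        while True:                      # inner loop: value functions for a fixed price
            V_new = update_values(V, q)
            if max_abs_diff(V_new, V) < tol:
                break
            V = V_new
        q_new = update_price(V_new)      # outer loop: update the price only here
        if max_abs_diff(q_new, q) < tol:
            return V_new, q_new
        q, V = q_new, V_new
    raise RuntimeError("no convergence")
```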

In the remainder of the paper, we only use solutions obtained with one-loop algorithms. This computation strategy, along with using the solution of the last period of the finite horizon version of the model as an initial guess, implies that the algorithm approximates the equilibrium as the limit of the equilibrium of the finite-horizon economy. We only deviate from this approach when we use the finest grid specifications described in Table 2. In those cases, we use as the initial guess the value functions found using the finer grids presented in the fourth column of Table 2. (We use linear interpolation to evaluate these functions at points that do not lie on the grid.)

B. Simulations

Table 3 reports business cycle statistics obtained in the simulations and the computation time for each exercise. The logarithm of income and consumption are denoted by y and c, respectively. The trade balance (output minus consumption, TB) is expressed as a fraction of income (Y), and the interest rate spread (margin of extra yield over the risk-free rate, Rs) is expressed in annual terms. Standard deviations are denoted by σ and are reported in percentage terms; correlations are denoted by ρ.

Table 3:

Simulation results and computation time for different DSS grids and interpolation methods.


The statistics for Models I and II were computed following Aguiar and Gopinath (2006). We use 500 simulation samples of 1,500 periods each.6 In order to eliminate the effect of initial conditions, we use only the last 500 periods of each sample to compute the moments reported in the table. We detrend each variable using the Hodrick-Prescott filter with a smoothing parameter of 1,600 and then compute standard deviations and correlations using the detrended series. Statistics reported in Table 3 correspond to the average value of each moment across 500 samples of 500 periods.
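For concreteness, the following is a minimal Python sketch of this moment computation, assuming the simulated series are already available as arrays; it relies on the HP filter from statsmodels, and the variable names are ours.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def sample_moments(log_y, log_c, spread, lamb=1600):
    """Standard deviations (in percent) and correlations from one detrended sample."""
    y_cyc, _ = hpfilter(log_y, lamb=lamb)
    c_cyc, _ = hpfilter(log_c, lamb=lamb)
    s_cyc, _ = hpfilter(spread, lamb=lamb)
    return {
        "sigma_y": 100 * np.std(y_cyc),
        "sigma_c": 100 * np.std(c_cyc),
        "sigma_spread": 100 * np.std(s_cyc),
        "rho_spread_y": np.corrcoef(s_cyc, y_cyc)[0, 1],
    }

def average_moments(samples):
    """Average each moment across simulation samples (e.g., 500 samples of 500 periods)."""
    moments = [sample_moments(*s) for s in samples]
    return {k: np.mean([m[k] for m in moments]) for k in moments[0]}
```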

Similarly, the moments for Model III were computed following Arellano (2008). We simulate the model and extract samples that satisfy the following criteria: i) a default is declared immediately after the end of the sample, ii) the sample contains 74 periods, and iii) the last exclusion period was observed at least two periods before the beginning of the sample. Statistics reported in Table 3 correspond to the average value of each moment across 2,000 samples of 74 periods (Arellano 2008 uses only 100 samples).
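A sketch of one possible implementation of this sample-selection rule, assuming boolean arrays `defaults` and `excluded` from a long simulation; the helper and its exact handling of exclusion periods reflect our reading of criteria (i)-(iii).

```python
import numpy as np

def extract_pre_default_samples(defaults, excluded, length=74, gap=2):
    """Return start indices of `length`-period windows that end right before a default,
    contain no exclusion periods, and start at least `gap` periods after the last
    exclusion period."""
    starts = []
    for t in np.flatnonzero(defaults):          # default declared in period t
        start = t - length                      # sample covers periods [start, t)
        if start < gap:
            continue
        window_ok = not excluded[start:t].any()            # no exclusion inside the sample
        gap_ok = not excluded[start - gap:start].any()     # exclusion ended >= gap periods before
        if window_ok and gap_ok:
            starts.append(start)
    return starts
```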

Table 3 shows that the results we obtain using Chebyshev collocation are consistent with the results we obtain using cubic spline interpolation. In addition, Table 3 shows that the results obtained using DSS converge to the ones we obtain using interpolation methods when the model is solved using DSS with (i) wider endowment grids, (ii) more endowment grid points, and (iii) more asset grid points. We explain next how each of these modifications to DSS grids helps mitigate approximation errors.

C. The width of the endowment grid

Figure 1 shows that the width of the endowment grid may affect the computation of the government’s borrowing decision. We chose Model I to construct Figure 1 because this is the parameterization for which we found the highest sensitivity of the results to an increase in the width of the endowment grid. We use the original grid specification described in the second column of Table 2 as the starting point, and as we increase the width of the endowment grid we also increase the number of grid points so that the distance between endowment grid points remains constant. This allows us to control for distortions generated by using coarse grids. All functions were constructed using the original grid for asset positions.

Figure 1:

Optimal savings as a function of income for DSS endowment grids of different width, for Model I, and for an initial debt level of 0.252 (the average debt observed in the simulations with the finest grid specification).


Wider endowment grids enable the DSS algorithm to compute the true default probabilities and, therefore, the government’s true borrowing decision. For any borrowing level, the government will choose to default in the next period if the endowment falls below some threshold. A DSS algorithm with a narrow endowment grid would impute a zero default probability on borrowing levels such that the lowest value in the endowment grid is above those thresholds. This would introduce a downward bias in the default probability, which in turn would increase the value of having access to capital markets, and make defaults more costly. A higher cost of defaulting helps in sustaining higher borrowing levels in equilibrium. This may explain the higher borrowing levels for narrower endowment grids presented in Figure 1.

D. The number of endowment grid points

The discretization of income shocks may generate distortions in the behavior of the equilibrium interest rate spread. These distortions may be mitigated by using more endowment grid points. It is apparent from Table 3 that the dispersion of the spread computed with DSS decreases and converges toward the one computed with interpolation as the number of points in the endowment grid increases. To make this point clearer, Table 4 presents simulation results for Model III for the original grid specification and for two alternative grid specifications. One specification has the original asset grid and 10 times more grid points than the original grid for endowment levels.7 The other specification has the original endowment grid and 10 times more grid points than the original grid for asset levels. Table 4 indicates that, for Model III, the main problem with the results obtained with our original DSS grids is the insufficient number of points in the endowment grid. Keeping the number of points in the asset grid constant, a tenfold increase in the number of points in the endowment grid (from 21 to 211) reduces the standard deviation of the spread in the simulations from 6.20 to 3.84. In contrast, keeping the number of points in the endowment grid constant, a tenfold increase in the number of asset grid points (from 200 to 2000) only reduces the standard deviation of the spread in the simulations to 4.91.

Table 4:

Model III simulation results.


Figure 2 illustrates the source of the imprecisions caused by using coarse grids for endowment levels when solving for Model III (similar figures could be constructed for other parameterizations of the model). The left panel of Figure 2 describes the zero-profit bond price as a function of the borrowing level when the endowment realization equals the unconditional mean of the endowment process. The bond price functions computed using DSS were constructed using the original grid for asset positions (200 points). The graph also presents the bond price function obtained using cubic splines, which is indistinguishable from the one we obtain using Chebyshev collocation or DSS with fine grids. The figure shows that the discretization of the income shock introduces discontinuities in the bond price schedule, and that these discontinuities are more pronounced when a coarser grid for endowment levels is used. The zero-profit bond price schedule represents the set of combinations of issuance levels and bond prices the government can choose from. The discontinuities illustrated in Figure 2 imply that some bond prices are removed from the government’s choice set. Note that these distortions could appear even without discretizing the set of borrowing levels the government can choose from.

Figure 2:

Imprecisions caused by using coarse grids for endowment levels. The left panel illustrates the zero-profit bond price as a function of the borrowing level when the current endowment realization coincides with the unconditional mean of the endowment process (y = 10). The right panel illustrates the government’s objective as a function of its borrowing level. The left (right) vertical axis corresponds to the case in which b/y = −0.066 (b/y = −0.042).


The right panel of Figure 2 illustrates how the distortions in the bond price menu affect the optimal saving decision. The figure presents the government’s objective function and it shows that this function tends to be increasing with respect to the borrowing level for borrowing levels where the bond price function is flat (i.e., for levels such that the government can increase its borrowing without decreasing the bond price). The right panel of Figure 2 shows that this may introduce spurious convexities in the government’s objective function and, thus, it may distort the optimal saving levels.

E. The number of asset grid points

The statistics presented in Tables 3 and 4 make it apparent that the results obtained using DSS depend on the number of asset grid points and converge toward the results computed with interpolation methods as the number of asset and endowment grid points increases. Figure 3 shows how the optimal savings and equilibrium bond prices obtained with DSS change as the number of grid points for asset levels increases. The figure considers the equilibrium functions derived for Model II, but the same rationale applies to Models I and III. As illustrated in Figure 3, for low enough growth rates, the government defaults and is excluded from capital markets, i.e., it borrows zero. Following Aguiar and Gopinath (2006), Figure 3 imputes the price of the risk-free bond when the country defaults and is excluded from capital markets.

Figure 3:

Imprecisions caused by using coarse grids for asset levels. Model II optimal savings and bond prices accepted in equilibrium as a function of the trend shock. The graphs were computed using DSS with 1500 endowment grid points and asset grids of 400 and 5000 points. The initial asset position is assumed to be equal to −0.19.


In the left panel of Figure 3, the DSS borrowing level presents steps, that is, it does not always change when (the growth rate of) income changes. Figure 3 also shows that the steps become smaller as the number of grid points for asset levels increases.

In models of sovereign default, when income increases, the implied increase in borrowing moderates the accompanying decrease in the interest rate. Consequently, when the discrete set of borrowing levels available to the government precludes adjustments in the borrowing level, interest rate movements are exacerbated. This is illustrated in the right panel of Figure 3, which plots the bond prices traded in equilibrium. The graph shows how the spurious spread movements generated by the discretization of asset levels can be mitigated by increasing the number of points in the DSS asset grid. Note that the right panel of Figure 3 also shows that the correlation between income and the spread paid in equilibrium may be contaminated by the spurious spread volatility introduced by using DSS with coarse grids.

F. Computation efficiency

As expected, Table 3 shows that we can mitigate the approximation errors implied by DSS as we increase the number of grid points. It also shows that, for the model considered in the paper, the number of DSS grid points needed to produce accurate results is such that it makes using DSS with an evenly spaced grid less efficient than interpolation methods.

Table 3 also illustrates how one can improve the performance of DSS by concentrating grid points in asset levels at which the bond price is more sensitive to the borrowing level, and in levels that are observed more often in the model simulations. To make this point clearer, Table 5 presents simulation results for Model III obtained using DSS grids with the same number of points but with different distributions of asset points. (In order to facilitate comparisons, we also include the results obtained using the original DSS grid and using interpolation methods.) The fourth column of Table 5 reports results obtained allocating 100 asset grid points between -0.03 and 0. Note that in order to attain the density of points in this intermediate range with an evenly distributed grid, it would be necessary to use 16,000 grid points. Table 5 shows that the computation time does not change significantly when we modify the distribution of asset grid points, and that DSS imprecisions are mitigated by concentrating asset points in the intermediate range. One disadvantage of using DSS with an uneven distribution of grid points is that grids have to be tailor-made for the model’s parameterization, which would make it more cumbersome to perform tasks such as calibrating the model using the simulated method of moments or conducting comparative static exercises.
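A minimal sketch of how such an uneven grid can be constructed: a dense block of evenly spaced points is embedded in a coarser grid covering the full asset range (the bounds used below are illustrative, not the calibrated grid limits).

```python
import numpy as np

def uneven_asset_grid(b_min=-0.33, b_max=0.15, dense_lo=-0.03, dense_hi=0.0,
                      n_dense=100, n_sparse=100):
    """Asset grid with points concentrated where the bond price is most sensitive
    to the borrowing level and where borrowing levels are observed in simulations."""
    dense = np.linspace(dense_lo, dense_hi, n_dense)
    sparse = np.linspace(b_min, b_max, n_sparse)
    return np.unique(np.concatenate([sparse, dense]))   # sorted, duplicates removed

grid = uneven_asset_grid()
print(len(grid), grid.min(), grid.max())
```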

Table 5:

Model III simulation results for different allocations of DSS asset grid points.


In addition, Table 3 illustrates that the computation time is lower with spline interpolation than with Chebyshev collocation for the three parameterizations considered in the paper. In fact, for Model III, the computation time with Chebyshev collocation is higher than with our coarse DSS grids, which produce only small inaccuracies in the results. Recall from the previous subsections that the imprecisions implied by DSS appear because the bond price is quite sensitive to the borrowing level. In Model III, the bond price is less sensitive to the borrowing level and, therefore, it is less difficult to mitigate the imprecisions implied by DSS.

G. Robustness of results obtained with interpolation methods

Table 6 illustrates that the results we obtain with interpolation methods reported in Table 3 are robust to increasing the number of grid points. The first (second) number in the pair characterizing a column is the number of points in the asset (endowment) grid.

Table 6:

Robustness of Chebyshev collocation and spline interpolation.


H. A test of the accuracy of the numerical solutions

In this subsection we conduct the test proposed by den Haan and Marcet (1994) for evaluating the accuracy of numerical solutions. We conduct the test for each of the numerical solutions analyzed in this paper and summarized in Table 3. The test evaluates whether the Euler equation is satisfied in the simulations and is implemented using 5,000 samples of 1,500 periods each. We remove the first 10 periods of each sample, all periods in which the economy is excluded with the exception of periods in which a default is declared, and the first 10 periods after the end of an exclusion spell. We did not observe significant changes in results if more periods after the end of exclusion spells were removed from the samples.

den Haan and Marcet (1994) derive the asymptotic distribution of any weighted sum of residuals of the Euler equation under the null hypothesis that the Euler equation is satisfied in the simulations. The test consists of comparing that probability distribution with the distribution observed in the simulations. Table 7 summarizes the comparison using two statistics: the frequency of samples for which the weighted sum of residuals takes values within the 5 percent left (right) tail of the asymptotic distribution. Table 7 indicates that interpolation procedures approximate the equilibrium with reasonable accuracy (values are close to 5 percent).8
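For reference, the sketch below computes the standard den Haan and Marcet (1994) statistic for one simulated sample, namely the weighted sum of Euler-equation residuals normalized by its estimated second-moment matrix, which is asymptotically χ² with as many degrees of freedom as instruments; this is a generic implementation under our assumptions, not the code used for Table 7.

```python
import numpy as np

def dhm_statistic(residuals, weights):
    """
    residuals: array (T,) of Euler-equation residuals u_{t+1}.
    weights:   array (T, q) of instruments h(x_t) (e.g., columns [1, y_t, b_t]).
    Returns T * M' W^{-1} M, asymptotically chi-squared with q degrees of freedom
    under the null that E[u_{t+1} h(x_t)] = 0.
    """
    u = residuals[:, None] * weights         # weighted residuals, shape (T, q)
    T = u.shape[0]
    M = u.mean(axis=0)                       # sample mean of weighted residuals
    W = (u.T @ u) / T                        # estimate of their second-moment matrix
    return T * M @ np.linalg.solve(W, M)
```

The fraction of samples whose statistic falls in the 5 percent tails of the corresponding χ² distribution can then be compared with 5 percent, as in Table 7.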

Table 7:

den Haan and Marcet’s test. Fraction of samples for which the statistic of den Haan and Marcet (1994) is below (above) the value at which a χ2 accumulates a probability of 5% (95%). The numbers in the second, third and fourth columns are based on a χ2 with one degree of freedom. The numbers in the last three columns are based on a χ2 with three degrees of freedom. We denote by h(xt) the vector of weights for the Euler-equation residuals.


One might also conclude from Table 7 that DSS does not approximate the solution with reasonable accuracy. However, we found evidence suggesting that the main reason for the large discrepancies reported in Table 7 can be traced back to approximation errors in the calculation of the residuals of the Euler equation. One of the terms of the Euler equation depends on the derivative of the zero-profit bond price with respect to the borrowing level (see, for example, Hatchondo and Martinez 2009). We denote this derivative by q1. When the model is solved using DSS, the value of q1 evaluated at the ith component of the grid for asset positions and at the jth component of the grid for endowment shocks is approximated as

$$q_1(b_i, y_j) = \frac{q(b_{i+\Delta}, y_j) - q(b_{i-\Delta}, y_j)}{b_{i+\Delta} - b_{i-\Delta}}, \qquad (6)$$

with Δ = 1. We find that the sample distribution of the den Haan and Marcet’s statistic is quite sensitive to the value of Δ used to approximate q1. Furthermore, the value of Δ that generates the best results for the test depends on the model parameterization and grid configuration. The approximation of q1 is highly sensitive to the value of Δ because, as illustrated by Figure 2, typically the zero-profit bond price obtained with DSS presents steps. As the number of asset grid points increases, the steps become narrower but more frequent, so the local approximation of q1 does not necessarily become more accurate. We find that, even for our finest DSS grids (for which we obtained results very similar to those obtained with interpolation methods), the bond-price derivative is quite sensitive to the choice of Δ. Overall, we cannot conclude that the poor performance of the DSS solutions according to den Haan and Marcet’s test is due to the lack of accuracy in the approximation of the equilibrium.
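A small sketch of the approximation in equation (6), reading Δ as an offset in grid indices (so Δ = 1 uses the two neighboring asset grid points); the array shapes are assumed.

```python
import numpy as np

def q1_centered_difference(q, b_grid, delta=1):
    """
    q:      array (n_b, n_y) of zero-profit bond prices on the DSS grids.
    b_grid: array (n_b,) of asset grid points.
    Returns the centered-difference approximation of dq/db' at interior grid points,
    using grid points `delta` indices away on each side.
    """
    num = q[2 * delta:, :] - q[:-2 * delta, :]
    den = (b_grid[2 * delta:] - b_grid[:-2 * delta])[:, None]
    return num / den    # shape (n_b - 2*delta, n_y), aligned with b_grid[delta:-delta]
```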

VI. Robustness of findings in Aguiar and Gopinath (2006) and Arellano (2008)

This section discusses inaccuracies in the results presented by Aguiar and Gopinath (2006) and Arellano (2008). In order to do so, we compare key statistics from the simulations presented in those papers with the same statistics computed using spline interpolation and Chebyshev collocation (the latter are similar to statistics obtained using DSS with fine grids).9

The second and fifth columns of Table 8 present the statistics reported by Aguiar and Gopinath (2006) in Table 3 of their paper (page 77). The remaining columns present the same statistics computed using spline interpolation and Chebyshev collocation. Table 8 indicates that the co-movement between the spread and income reported by Aguiar and Gopinath (2006) is affected by inaccuracies introduced by inappropriate DSS grids. We find that the correlation between spread and income is -0.6 in Model I (with shocks to the income level) and 0.1 in Model II (with shocks to the growth rate of income). In contrast, Aguiar and Gopinath (2006) report that this correlation is 0.5 in Model I and -0.03 in Model II. Thus, our results cast doubt on their claim that with Model II, “Some improvements over Model I are immediately apparent. Both the current account and interest rates are countercyclical and positively correlated ...” (page 79 in Aguiar and Gopinath 2006). Our findings imply that the ability of the model to fit the data does not necessarily improve when one assumes an income process with shocks to the growth rate instead of a standard process with shocks to the level.

Table 8:

Simulation results in Aguiar and Gopinath (2006) (AG) and with interpolation.


At the top of Table 9, we present the statistics reported in Table 4 (page 706) of Arellano (2008). The table also presents the same statistics computed using spline interpolation and Chebyshev collocation. Table 9 indicates that more than half of the spread volatility reported in Arellano (2008) is accounted for by inaccuracies introduced by inappropriate DSS grids. Figure 4 further illustrates the effects of these inaccuracies. The figure replicates the counterfactual exercise presented in Arellano (2008) on page 707. We feed Model III with the time series of the Argentine GDP between 1993 and 2001, and then compute the spread behavior predicted by the model. The behavior we compute using the original grids resembles the one computed by Arellano (2008), which displays a significantly higher spread prior to the default episode of 2001 than in the 1995 Tequila crisis. This is inconsistent with the spread behavior obtained using interpolation methods or DSS with our finest grid specification. Figure 4 also shows that the implied spread behavior obtained using DSS with the finest grid is indistinguishable from the behavior obtained using interpolation methods.

Table 9:

Simulation results in Arellano (2008) and with interpolation.

Figure 4:

Spread behavior. The endowment realization coincides with the time series of Argentine GDP between 1993 and 2001.


The imprecisions described in Table 9 and Figure 4 are caused by imprecisions in the approximations of the optimal policies. This is illustrated in Figure 5, which replicates the optimal saving rule and equilibrium interest rates described by Figures 3 and 4 in Arellano (2008) (pages 704 and 705). Figure 5 shows that the optimal saving policies and equilibrium interest rates obtained with interpolation methods or with DSS and a sufficiently dense grid specification are significantly different from the optimal saving rule and equilibrium interest rates obtained using the original grids.

Figure 5:

Model III optimal savings and implied interest rate as functions of the initial asset position for y = 0.93 (y low) and y = 1.02 (y high).


VII. Conclusions

We show that the use of DSS with inappropriate grid specifications introduces approximation errors that contaminate the results presented by Aguiar and Gopinath (2006) and Arellano (2008). These imprecisions led Aguiar and Gopinath (2006) to conclude that income processes with shocks to the growth rate help models of sovereign default generate a countercyclical interest rate and, thus, help the baseline default model generate the positive correlation between the interest rate and the current account observed in the data. In addition, more than half of the spread volatility reported by Aguiar and Gopinath (2006) and Arellano (2008) results from approximation errors.

We also find that interpolation methods may be significantly more efficient than DSS for solving default models and that the inefficiency of DSS is more severe for parameterizations that feature a high sensitivity of the bond price to the borrowing level for the borrowing levels observed more often in the simulations. As in Aguiar and Gopinath (2006) and Arellano (2008), the models studied in the growing literature on sovereign default are usually solved using DSS with evenly spaced grid points and algorithms that use two loops. We show that the efficiency of DSS can be greatly improved by (i) using a one-loop algorithm and (ii) concentrating grid points in asset levels at which the bond price is more sensitive to the borrowing level, and in levels that are observed more often in the simulations.

References

  • Aguiar, M. and Gopinath, G. (2006). ‘Defaultable debt, interest rates and the current account’. Journal of International Economics, volume 69, 64-83.

  • Aguiar, M. and Gopinath, G. (2007). ‘Emerging markets business cycles: the cycle is the trend’. Journal of Political Economy, volume 115, no. 1, 69-102.

  • Arellano, C. (2008). ‘Default Risk and Income Fluctuations in Emerging Economies’. American Economic Review, volume 98(3), 690-712.

  • Athreya, K. (2002). ‘Welfare Implications of the Bankruptcy Reform Act of 1999’. Journal of Monetary Economics, volume 49, 1567-1595.

  • Benjamin, D. and Meza, F. (2009). ‘Total Factor Productivity and Labor Reallocation: The Case of the Korean 1997 Crisis’. The B.E. Journal of Macroeconomics (Advances), volume 9, Article 31.

  • Chatterjee, S., Corbae, D., Nakajima, M., and Ríos-Rull, J.-V. (2007). ‘A Quantitative Theory of Unsecured Consumer Credit with Risk of Default’. Econometrica, volume 75, 1525-1589.

  • Cuadra, G., Sanchez, J. M., and Sapriza, H. (forthcoming). ‘Fiscal policy and default risk in emerging markets’. Review of Economic Dynamics.

  • Cuadra, G. and Sapriza, H. (2008). ‘Sovereign default, interest rates and political uncertainty in emerging markets’. Journal of International Economics, volume 76, 78-88.

  • de Boor, C. (1977). A Practical Guide to Splines. Springer-Verlag.

  • den Haan, W. J. and Marcet, A. (1994). ‘Accuracy in simulations’. Review of Economic Studies, volume 61, 3-17.

  • Eaton, J. and Gersovitz, M. (1981). ‘Debt with potential repudiation: theoretical and empirical analysis’. Review of Economic Studies, volume 48, 289-309.

  • Hatchondo, J. C. and Martinez, L. (2009). ‘Long-duration bonds and sovereign defaults’. Journal of International Economics, volume 79, 117-125.

  • Hatchondo, J. C., Martinez, L., and Sapriza, H. (2007). ‘Quantitative Models of Sovereign Default and the Threat of Financial Exclusion’. Economic Quarterly, volume 93, no. 3, 251-286.

  • Hatchondo, J. C., Martinez, L., and Sapriza, H. (2009). ‘Heterogeneous borrowers in quantitative models of sovereign default’. International Economic Review. Forthcoming.

  • Li, W. and Sarte, P.-D. (2006). ‘U.S. consumer bankruptcy choice: The importance of general equilibrium effects’. Journal of Monetary Economics, volume 53, 613-631.

  • Livshits, I., MacGee, J., and Tertilt, M. (2008). ‘Consumer Bankruptcy: A Fresh Start’. American Economic Review, volume 97, 402-418.

  • Neumeyer, P. and Perri, F. (2005). ‘Business cycles in emerging economies: the role of interest rates’. Journal of Monetary Economics, volume 52, 345-380.

  • Schmitt-Grohé, S. and Uribe, M. (2003). ‘Closing small open economy models’. Journal of International Economics, volume 61, 163-185.

  • Uribe, M. and Yue, V. (2006). ‘Country spreads and emerging countries: Who drives whom?’. Journal of International Economics, volume 69, 6-36.

A Appendix. Computational strategy

In this section we describe our computational strategy. For expositional simplicity, the discussion assumes that shocks affect only the endowment level and not the growth rate of the endowment.

When solving the model using Chebyshev collocation, the value functions $V_0$ and $V_1$ are approximated as a weighted sum of Chebyshev polynomials for all $(b, y) \in [\underline{b}, \bar{b}] \times [\underline{y}, \bar{y}]$. When $b$ or $y$ takes values outside the set $[\underline{b}, \bar{b}] \times [\underline{y}, \bar{y}]$, the value functions are approximated using a first-order Taylor approximation evaluated at the closest point in the set $[\underline{b}, \bar{b}] \times [\underline{y}, \bar{y}]$.
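A one-dimensional Python sketch of this approximation scheme (a Chebyshev fit inside the domain and a first-order Taylor extrapolation outside); the true value functions are bivariate, so this only illustrates the idea, and the sample function is a placeholder.

```python
import numpy as np
from numpy.polynomial import Chebyshev

def fit_with_taylor_extrapolation(f, lo, hi, degree=15, n_nodes=30):
    """Approximate f on [lo, hi] with Chebyshev polynomials; outside the domain,
    use a first-order Taylor expansion around the nearest endpoint."""
    nodes = 0.5 * (lo + hi) + 0.5 * (hi - lo) * np.cos(
        np.pi * (np.arange(n_nodes) + 0.5) / n_nodes)        # Chebyshev nodes on [lo, hi]
    approx = Chebyshev.fit(nodes, f(nodes), degree, domain=[lo, hi])
    slope = approx.deriv()

    def evaluate(x):
        x_clipped = np.clip(x, lo, hi)
        return approx(x_clipped) + slope(x_clipped) * (x - x_clipped)

    return evaluate

# usage with a placeholder function
V_hat = fit_with_taylor_extrapolation(np.log, 0.5, 2.0)
print(V_hat(1.0), V_hat(2.5))   # inside the domain and extrapolated
```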

When solving the model using cubic spline interpolation, we first define evenly distributed grids of asset positions and endowment shocks. Those grid vectors and the matrices of values for $V_0$ and $V_1$ are used to compute the breakpoints and coefficients for the piecewise cubic representation using the routine CSDEC from the IMSL library. The routine is based on de Boor (1977), chapter 4. More precisely, when evaluating $V_0$ at a point $(b, y)$ in the set $[\underline{b}, \bar{b}] \times [\underline{y}, \bar{y}]$, we first interpolate over asset positions and compute the vector $(V_0(b, y_1), \ldots, V_0(b, y_{N_y}))$, where $N_y$ denotes the number of grid points for endowment shocks. Then, we interpolate over endowment levels to compute $V_0(b, y)$. As with Chebyshev collocation, when the asset position or the endowment shock takes values outside the minimum or maximum grid values, we evaluate the value functions using a first-order Taylor approximation. We use the not-a-knot condition to determine the value of the derivatives at the end points.
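A sketch of the two-pass spline evaluation described above, using SciPy's CubicSpline with the not-a-knot end condition in place of the IMSL routine CSDEC; the grids and the value matrix are assumed inputs.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def eval_value(b, y, b_grid, y_grid, V0):
    """
    V0: array (n_b, n_y) of values on the grids.
    First interpolate over asset positions at each endowment grid point,
    then interpolate the resulting vector over endowment levels.
    """
    over_assets = CubicSpline(b_grid, V0, axis=0, bc_type="not-a-knot")
    values_at_b = over_assets(b)                 # vector (V0(b, y_1), ..., V0(b, y_Ny))
    over_endowment = CubicSpline(y_grid, values_at_b, bc_type="not-a-knot")
    return float(over_endowment(y))
```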

The algorithm used to solve for the equilibrium with interpolation methods works as follows. First, we specify initial guesses for V0 and V1. We use as initial guesses the continuation values at the last period of the finite-horizon version of the model, i.e., for values of (bi,yj) on the grid for asset levels and endowment shocks,

$$V_0^{(0)}(b_i, y_j) = u(y_j + b_i) \quad \text{and} \quad V_1^{(0)}(y_j) = u(y_j - \phi(y_j)).$$

Second, we solve the optimization problem defined in equations (2)-(4) for each point on the grid of asset levels and endowment shocks. In order to solve for the optimum, we first find a candidate value for the optimal borrowing level using a global search procedure. That candidate value is then used as an initial guess in the optimization routine UVMIF from the IMSL library. That routine uses a quasi-Newton method to find the maximum value of a function. Each time the borrower’s objective function is evaluated, it computes the expectation $E[V(b', y') \mid y]$ using Gauss-Legendre quadrature points and weights, and uses $V_0^{(0)}$ and $V_1^{(0)}$ to approximate the next-period continuation values. The bond price function $q(b', y)$ is evaluated using the optimal default decision derived from $V_0^{(0)}$ and $V_1^{(0)}$. The solution found at each point on the grid for asset and endowment shocks is then used to compute the new continuation values $V_0^{(1)}$ and $V_1^{(1)}$.
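A sketch of how the conditional expectation can be computed with Gauss-Legendre quadrature, integrating over the normal innovation of the (de-trended) endowment process on a truncated interval; the truncation width and node count are our choices, not those in our Fortran code.

```python
import numpy as np

def expected_value(V, z, rho, mu, sigma, n_nodes=20, width=4.0):
    """Approximate E[V(z') | z] for z' = (1-rho)*mu + rho*z + eps, eps ~ N(0, sigma^2)."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    mean_next = (1.0 - rho) * mu + rho * z
    a, b = mean_next - width * sigma, mean_next + width * sigma
    z_next = 0.5 * (b - a) * nodes + 0.5 * (b + a)      # map nodes from [-1, 1] to [a, b]
    density = np.exp(-0.5 * ((z_next - mean_next) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return 0.5 * (b - a) * np.sum(weights * V(z_next) * density)

# usage with a placeholder value function
print(expected_value(np.cos, z=0.0, rho=0.9, mu=0.0, sigma=0.03))
```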

Third, we evaluate whether the maximum absolute deviation between the new and previous continuation values is below $10^{-6}$. If it is, a solution has been found. If it is not, we repeat the optimization exercise using the new continuation values $V_0^{(1)}$ and $V_1^{(1)}$ to compute the expected value function at each grid point for asset and endowment levels and to evaluate the bond price function $q(b', y)$ faced by the borrower. We repeat the procedure until the maximum absolute deviation between the new and previous continuation values is below $10^{-6}$.

Note that the algorithm only imposes differentiability on V0 and V1.10 The algorithm may very well capture discontinuities in the optimal saving rule (as illustrated in Figure 5) or kinks in the bond price function (as illustrated in Figure 2).

1

For comments and suggestions, we thank Huberto Ennis, Narayana Kocherlakota, Per Krusell, Enrique Mendoza, and our colleagues at the Federal Reserve Bank of Richmond. We also thank Brian Gaines and Elaine Mandaleris-Preddy for editorial support. Remaining mistakes are our own. The views expressed herein are those of the authors and should not be attributed to the IMF, its Executive Board, or its management, the Federal Reserve Bank of Richmond, or the Board of Governors of the Federal Reserve System.

2

The model analyzed in Aguiar and Gopinath (2006) and Arellano (2008) has been extended in various dimensions. See, for example, Cuadra et al. (forthcoming), Cuadra and Sapriza (2008), Hatchondo and Martinez (2009), and Hatchondo et al. (2007, 2009). The models used in these studies also share blueprints with the models used in quantitative studies of household bankruptcy; see, for example, Athreya (2002), Chatterjee et al. (2007), Li and Sarte (2006), and Livshits et al. (2008).

3

As explained by Aguiar and Gopinath (2006), shocks to the growth rate of income tend to make the bond price schedule that delivers zero expected profits less sensitive to the borrowing level. This could help generate more countercyclical spreads. However, we find that this effect is not significant when the government always chooses borrowing levels very close to those for which lenders charge the risk-free interest rate (as occurs in their simulations).

4

The main difference between the parameterizations in Aguiar and Gopinath (2006) and Arellano (2008) is that in Arellano (2008), the default punishment can be significantly more responsive to current endowment realizations. This feature helps reduce the sensitivity of the bond price to the borrowing level.

5

We also compare the DSS computation time required by Aguiar and Gopinath’s (2006) Matlab code with the computation time required by our one-loop Fortran code, using the original grids. With only transitory shocks, their code takes 16 minutes and 58 seconds to converge and our code takes 2 minutes and 8 seconds. With only trend shocks, their code takes 7 minutes and 44 seconds to converge and our code takes 1 minute and 13 seconds.

6

Aguiar and Gopinath (2006) used samples of 10,000 periods, but we do not observe any difference in results when we use samples of 1,500 periods.

7

The choice of 211 instead of 210 points for the endowment grid is meant to force the grid to contain the unconditional mean of the endowment distribution. This is useful for computing Figure 2.

8

The largest discrepancies are observed in the last column of Table 7, for the right tail of the distribution. When the Euler-equation residuals in period t + 1 are weighted by the vector [1, y_t, b_t], the correlation between the residuals and the residuals weighted by the endowment realization in the previous period is close to 0.99. The high collinearity between these two series reduces the precision of the test.

9

Note that the results reported in Table 3 for the original grids resemble the ones reported in Aguiar and Gopinath (2006) and Arellano (2008). The largest differences between our results with the original grids and theirs appear in Model III. This is explained by a bug in the code used by Arellano (2008), where the post-default value function is computed without considering the possibility that the sovereign may regain access to capital markets in the next period. Once the value function is computed correctly, the default rate and mean spread increase while the debt-to-output ratio decreases.

10

For that reason, when solving Model III, we partition the grid for endowment shocks. The output cost assumed in Arellano (2008) displays a kink at y = λ, which generates a kink in the function V1.
