A Perspective on Predicting Currency Crises
International Monetary Fund


Currency crises are difficult to predict. It could be that we are choosing the wrong variables or using the wrong models or adopting measurement techniques not up to the task. We set up a Monte Carlo experiment designed to evaluate the measurement techniques. In our study, the methods are given the right fundamentals and the right models and are evaluated on how closely the estimated predictions match the objectively correct predictions. We find that all methods do reasonably well when fundamentals are explosive and all do badly when fundamentals are merely highly volatile.



I. Introduction

Developing techniques to assess the vulnerability of fixed exchange rates and predict currency crises has been a research program at least since the early 1980s. A few years earlier, the first theoretical models of speculative attacks on asset price-fixing schemes were developed, prompting researchers to think about crises as rational events.2 Soon thereafter, important countries and regions experienced major currency crises. The crises in Europe (1991–92) were unforeseen for the most part, affecting countries that were thought to be immune. New theoretical models were developed stressing that attacks on fixed exchange rates could be self-fulfilling.3 Later crises in Latin America (e.g., Mexico 1994–95, Brazil 1999, Argentina 2001), Asia (e.g., Thailand, South Korea, Indonesia 1997–98), and Russia (1998) raised new issues. Some of these currency crises were accompanied by banking and sovereign debt crises. All increased the demand by policy makers and the financial industry for methods to predict—or at least better understand—crises.

The empirical literature on predicting currency crises has taken several directions. One branch has explored structural models. Blanco and Garber (1986) were the first to apply the speculative attack model to a country experience. They produced time-series estimates of the one-quarter-ahead probability of devaluation leading up to the 1976 and 1982 Mexican devaluations. Cumby and Van Wijnbergen (1989) used a structural model to estimate monthly collapse probabilities leading up to the end of Argentina’s crawling peg in 1981. Goldberg (1994) used a structural model to estimate devaluation probabilities for the Mexican peso over the period 1980–86. These studies provide insights about specific currency-crisis episodes and the merits of structural models.

A second branch has used panel data and discrete-variable techniques to predict crisis events in a sample of countries. Much of this literature constructs a foreign-exchange market pressure index, defined as a weighted sum of percentage changes in nominal exchange rates, international reserves and (sometimes) interest rates, and defines a currency crisis as occurring when the pressure index exceeds some threshold. Eichengreen, Rose and Wyplosz (1995) first adopted this approach in their logit analysis of exchange market crises in twenty OECD countries over the 1959–93 period.

Other discrete choice models have not relied on the pressure index. For example, Frankel and Rose (1996) defined a currency crisis as occurring when a country’s currency depreciates at least 25 percent against the U.S. dollar and exceeds any depreciation in the previous year by at least 10 percent. With that crisis definition, they ran a probit on a panel of annual data for over 100 developing countries from 1971 through 1992 in order to characterize large currency depreciations.

A third branch of empirical currency crisis models has relied on signaling methods. In these models, individual variables, such as the debt-to-GDP ratio or the real exchange rate, signal that a country is potentially in crisis when they exceed a certain threshold. The threshold is adjusted to balance type I errors—the model fails to predict crises that take place—against type II errors—the model predicts crises that never occur. Kaminsky, Lizondo, and Reinhart (1998) used the signaling model to evaluate the usefulness of several variables in signaling an impending currency crisis. Kaminsky and Reinhart (1999) adopted a signaling approach to examine the behavior of sixteen indicators leading up to currency crises, banking crises and twin crises in twenty countries over the period 1970–95. In both these studies, a currency crisis occurs when a weighted average of exchange-rate changes and international reserve changes exceeds a certain threshold.

In this paper we leave aside the signaling approach. Since we generate the crises we study, we know the correct fundamentals and how they feed into these crises. So we need not study the effects of a variety of signaling variables. Our intent is to extend the work of Flood and Marion (1999), which provided perspectives on theoretical and empirical work on currency crises over a decade ago. They were skeptical of non-structural methods used to predict currency crises, but they did not conduct a careful evaluation of those methods nor compare them to a structural approach. The present paper aims to do so. It presents a Monte Carlo study of three methods that might be used to predict currency crises. The methods are:

  1. Structural - the method used first by Blanco and Garber (BG) (1986).

  2. Logit - the method pioneered by Eichengreen, Rose and Wyplosz (ERW) (1995) and Frankel and Rose (1996).

  3. OLS - the method of ordinary least squares.

Our presentation proceeds as follows. We lay out the speculative attack model developed by Krugman (1979) and Flood and Garber (1984), which we call the KFG model. We use the KFG model, parameterized by the Mexican experience, to generate data for the Monte Carlo work. Our objective is to study how well standard econometric methods recover crisis probabilities in this well-known crisis model. Whether this model is true with respect to real-world data is not relevant. We use the KFG model to generate many “virtual” currency crises. Along with other data, the model produces objectively-correct one-period-ahead probabilities of the currency crises. These probabilities are the period-by-period “actual” probabilities of having a crisis. We then estimate crisis probabilities from each of the three empirical approaches, structural, logit, and OLS. We examine how closely the estimated probabilities from these three methods match the actual probabilities.

Our results suggest that popular methods of predicting currency crises all do reasonably well, with some variation, when market fundamentals are explosive. When the fundamentals are merely highly variable—but not explosive—the logit method does quite a lot worse than the others in one important dimension. All of the methods, however, do quite badly when fundamentals are not explosive—hardly better than random guessing.

The rest of the paper is organized as follows. In Section II, we lay out the KFG model that we use to generate currency-crisis episodes. In Section III, we describe the process that generates data from the simulations of crisis episodes. In Section IV, we estimate crisis probabilities using the three empirical methods. Section V analyzes how well the estimated probabilities of crisis from these methods match the actual probabilities and suggests possible reasons for the disappointing formal results. Section VI concludes.

II. A Model of Currency Crises

Our crises are generated by the KFG model, as adapted by Blanco and Garber, BG, (1986) to accommodate discrete devaluations. The model produces closed-form solutions for the key economic variables and produces actual probabilities of a currency crisis. The model is well known, but we describe it and present example closed-form solutions to provide intuition about the data we will generate.

In the model, the monetary authority fixes the exchange rate initially by offering to buy or sell international reserves at the fixed rate. In the background is some higher-priority policy that makes the fixed rate unsustainable. Since government’s commitment to the exchange rate is limited, the fixed rate will be abandoned eventually in a foreign-exchange crisis when international reserves reach their lower bound. The KFG model is about determining the probability distribution for the timing of the crisis.

The action is in the money market:

m(t) − p(t) = δ + γy(t) − αi(t) + w(t),  (1)

where m(t) is the log high-power domestic money supply at time t, p(t) is the log domestic price level at t, i(t) is the domestic interest rate at t, y(t) is the log domestic output at t, w(t) is a money demand shock at t, and α > 0, γ > 0 and δ > 0 are fixed parameters.

We also impose Uncovered Interest Parity,

i(t) = i*(t) + Et[s(t + 1)] − s(t),  (2)

where i*(t) is the foreign-currency interest rate, s(t) is the log exchange rate quoted as the domestic-currency price of foreign exchange, and Et is the expectations operator conditional on full information at time t.

We do not impose Purchasing Power Parity. Instead we set:

p(t) = p*(t) + s(t) + v(t),  (3)

where p*(t) is the log foreign price level and v(t) is a shock.

Log-linearized high-power money is made up of log domestic credit and log international reserves:

m(t) = m̄ + φd(t) + (1 − φ)r(t),  (4)

where d(t) is log domestic credit, r(t) is log international reserves, and m̄ and φ are linearization constants.

Substituting (2)-(4) into (1) and rearranging terms gives:

s(t) = (1 − φ)r(t) + z(t) + αEt[s(t + 1) − s(t)],  (5)

where

z(t) = m̄ + φd(t) + αi*(t) − γy(t) − p*(t) − v(t) − w(t) − δ  (6)

is the market fundamental. We set the lower bound on reserves at r(t) = 0.4

The market fundamental is an exogenous forcing variable. For clarity, in this section, we assume it follows a random walk with drift μ and a white-noise random shock u(t):

z(t) = z(t − 1) + μ + u(t).  (7)
The crisis occurs when foreign exchange reserves are exhausted. At that moment, we assume, for now, that the exchange rate is allowed to float freely. Later we let the monetary authority devalue the domestic currency when reserves run out.

Define s̃(t) to be the shadow exchange rate, the exchange rate that would prevail if reserves were at the lower bound, r(t) = 0, and the exchange rate were allowed to float freely. Under these conditions,

s̃(t) = λ0 + λ1z(t),  (8)

where λ0 = αμ and λ1 = 1. When the shadow rate is less than the fixed rate, speculators do not attack the fixed rate. A collapse would give them capital losses. They attack when the shadow rate equals or exceeds the fixed rate. Thus the fixed exchange rate is abandoned when s̃(t) ≥ s̄, where s̄ is the fixed rate.

The probability of a crisis next period is therefore the probability that the shadow rate equals or exceeds the fixed rate:

prob[s̃(t + 1) ≥ s̄] = prob[λ0 + λ1z(t + 1) ≥ s̄].  (9)

The value of z(t) where the shadow rate equals the fixed rate is the critical value z̄. This critical value satisfies λ0 + λ1z̄ = s̄, so z̄ = (s̄ − λ0)/λ1.

The probability of a crisis next period depends on the state this period. Given the assumed forcing process, the state is summarized by the level of the forcing variable z(t). We adopt the notation:

prob(z(t)) = prob[z(t + 1) ≥ z̄ | z(t)] = prob[u(t + 1) ≥ z̄ − z(t) − μ],  (10)

where prob(z(t)) is the probability of a crisis at time t + 1 based on the state at time t, z(t).

In the example, suppose u has a uniform distribution with upper bound σ and lower bound −σ. The pdf is rectangular with height 1/(2σ) and base 2σ. Then u has mean 0 and variance σ²/3.

Figure 1 illustrates the probability density of u(t) and the function relating z(t) to the probability of a crisis. Near the origin, there is no shock big enough to push next period’s z above the critical value z̄. Consequently, the probability of a crisis is zero. As we move to higher values of z, it is possible for a shock to push next period’s z above the critical value. Hence there is a positive probability of a currency crisis. The complete description of prob(z(t)) is:

prob(z(t)) = 0 for z(t) < z̄ − μ − σ; prob(z(t)) = (z(t) + μ + σ − z̄)/(2σ) for z̄ − μ − σ ≤ z(t) < z̄; and prob(z(t)) = 1 for z(t) ≥ z̄.

Figure 1.

Probability of a Crisis – Uniform Distribution of Shock

Citation: IMF Working Papers 2010, 227; 10.5089/9781455208920.001.A001

The function prob(z(t)) is flat at zero below z(t) = z̄ − μ − σ, at which point the function rises linearly with z(t) at the rate 1/(2σ). At z(t) = z̄, if (μ + σ)/(2σ) < 1, the function jumps from (μ + σ)/(2σ) to unity. Notice that for big σ or small μ, the probability at z̄ approaches 0.5.
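The piecewise probability function described above can be checked numerically. The sketch below (ours, not from the paper) implements prob(z(t)) for the uniform-shock case and verifies an interior value by Monte Carlo; the parameter values for z̄, μ, and σ are illustrative.

```python
import numpy as np

def crisis_prob(z, z_bar, mu, sigma):
    """One-step-ahead crisis probability when z follows a random walk with
    drift mu and uniform(-sigma, sigma) shocks: Pr(z + mu + u >= z_bar)."""
    if z + mu + sigma < z_bar:   # no shock large enough to reach the threshold
        return 0.0
    if z >= z_bar:               # at or past the threshold: certain crisis
        return 1.0
    return (z + mu + sigma - z_bar) / (2.0 * sigma)

# Monte Carlo check of the closed form at an interior point (illustrative values)
rng = np.random.default_rng(0)
z, z_bar, mu, sigma = 0.8, 1.0, 0.05, 0.3
draws = z + mu + rng.uniform(-sigma, sigma, 200_000)
print(crisis_prob(z, z_bar, mu, sigma))   # ~0.25
print((draws >= z_bar).mean())            # close to the value above
```

Note the jump at z = z̄ in the second branch, matching the discontinuity discussed in the text.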

We have gone through this example to illustrate a point - the probability function may jump at the point of the crisis. When we get to our empirical treatment of data from generated crises, we should not expect the probability of crisis to go smoothly to unity.

III. Data Generation

We use money-market parameters estimated by BG to generate hypothetical data on exchange rates, reserves and market fundamentals. The BG money-market parameters we use are:

α = 1.310 quarterly interest-rate semi-elasticity of money demand

γ = 1.196 income elasticity of money demand.

The money-supply log linearization constants, m¯ and ϕ, are taken directly from Mexican data.

The market fundamental is specified in equation (6). The Mexican version of this variable is depicted in Figure 2 for the period 1974:Q4–2008:Q4. Recall that there were two “crisis periods” in Mexico, the first in the late 1970s and early 1980s as studied by BG and Goldberg, the second in 1994–95. The two periods “look different” in terms of z(t). The early period had a prolonged and dramatic run-up in the fundamental. The later period instead saw a sudden shock to the fundamental that was not anticipated in earlier activity. Based on our inspection of Figure 2, we break the z(t) data into two experiences—the first 1974:Q4–1986:Q4, the second 1990:Q1–2008:Q4.

Figure 2.

The z(t) Value for Mexico


We used the same functional form to characterize the z(t) data from the two periods,

z(t) = β0 + β1z(t − 1) + ε(t).  (11)

Our estimates of this process are:

[Table of parameter estimates not reproduced]

We conduct two separate sets of simulations, one with parameters and residuals from the early period and another with parameters and residuals from the later period. The distribution of the residuals from each period is portrayed as a histogram in Figures 3a and 3b. The estimated ε̂(t) from equation (11) are next put into an imaginary “urn”. Each time we generate a z, we draw an ε̂ from the urn with replacement. We generate each z path from a “start value,” z(0), taken from the Mexican data. For each estimation, we generate 1,000 episodes.5 Each episode has 100 observations. We reference the generated fundamentals as z(j, t), for episode j = 1, …, 1,000 and period t = 1, …, 100.
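The residual-urn data generation can be sketched as follows. Only the bootstrap mechanics follow the text; the AR(1) law of motion and the parameter values below are illustrative assumptions standing in for the estimated equation (11) process.

```python
import numpy as np

def generate_episodes(z0, a0, a1, residuals, n_episodes=1000, T=100, seed=1):
    """Generate fundamental paths by resampling estimated residuals with
    replacement (the 'urn' bootstrap). The AR(1) law of motion
    z(t) = a0 + a1*z(t-1) + eps(t) is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    z = np.empty((n_episodes, T))
    z[:, 0] = z0                                 # common start value from the data
    for t in range(1, T):
        eps = rng.choice(residuals, size=n_episodes, replace=True)
        z[:, t] = a0 + a1 * z[:, t - 1] + eps
    return z

# illustrative parameters: a1 > 1 mimics an 'explosive' early-period fundamental
resid = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])
paths = generate_episodes(z0=1.0, a0=0.0, a1=1.02, residuals=resid)
print(paths.shape)   # (1000, 100)
```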

Figure 3a.

Histogram of Residuals from the z(t) Process: Early Mexican Data


Figure 3b.

Histogram of Residuals from the z(t) Process: Later Mexican Data


The other data series are built from the z(j,t).6 The fixed exchange rate is specific to each episode and is set so that the shadow rate does not exceed the fixed rate until about halfway through the sample.7

Devaluation Crises

We also consider the government policy in which a crisis results in a devaluation rather than a switch to a flexible exchange rate. For this policy choice, the authority devalues the currency when the fixed rate is unsustainable, i.e., when s̃(t) ≥ s̄. The devaluation size is determined by a rule. We investigate two rules:

  1. A state-contingent rule similar to the one estimated in BG.8
  2. A devaluation of 50 percent not conditional on the state.

IV. Estimating Crisis Probabilities

In this section we estimate currency-crisis probabilities in three ways. The goal is to compare the estimates of step-ahead crisis probabilities produced by each of these empirical methods with the actual probabilities calculated as above.

Method 1. Structural method

The structural method estimates a money-market equation and z(t) process for each episode, where each episode has 100 (quarterly) observations. The estimation uses the data generated by the “true” model described in Section II. Estimating the money market parameters for each episode allows construction of an episode-specific estimated fundamental. We then estimate that fundamental’s time-series process and residuals. Those residuals are used to compute one-step-ahead currency-crisis probabilities.9 These probabilities will differ systematically from the actual crisis probabilities to the extent that the estimated money-market and the estimated fundamentals-process parameters do not match their actual values. The details of our estimation procedure are available from the authors on request.

Method 2: Logit

The logit method was introduced to look across a wide range of variables that might predict currency crises. The only predictors we study are the z(t) and functions of z(t) because we know that nothing else is relevant.

Since logit is a method used in discrete-choice problems, the econometrician must define a currency crisis in a manner that produces a binary variable equal to 1 if there is a crisis and 0 otherwise. Moreover, since the conditional variance of the model-determined exchange rate jumps up at the time of the crisis, it is reasonable to construct the binary variable from a variance-based measure of exchange-market pressure.10 We study three such measures:11

q(t) = |Δs(t)|, |Δs(t)| + |Δ(1 − φ)r(t)|, or |Δs(t)| + κ·|Δ(1 − φ)r(t)|.  (12)

The notation SD12(q) says to compute the backward-looking 12-period rolling standard deviation of the variable q. We compute this measure for three q variables: |Δs(t)|, the absolute log change in the exchange rate; |Δs(t)| + |Δ(1 − φ)r(t)|, the Girton-Roper pressure variable, which is the sum of the absolute change in the log exchange rate and the absolute change in log reserves multiplied by the reserves share of high-power money; and |Δs(t)| + κ·|Δ(1 − φ)r(t)|, the Girton-Roper measure with the weight κ = √(Var(|Δs|)/Var(|Δ(1 − φ)r|)) inserted to make each component of the measure have equal variance.

A currency crisis occurs when:12

q > sample mean(q) + 1.5·SD12(q).  (13)

In an episode, before the crisis occurs, we code the binary variable as “0”. When the crisis occurs, we code the binary variable as “1”. All observations of the binary variable after the crisis until the end of the episode at time T are also coded “1”.13

Unlike in Section II, we have no closed-form solution for prob(z(t)) to guide our selection of the functional form relating the crisis probability to the independent variables. We know that the independent variables must be dated at least one period prior to the dependent variable. We know also that the data have been first-differenced, so we include lagged z(t) values. We followed the lead suggested in the theory section and tried various second-order permutations of the z process, settling on z(t−1), Δz(t−1), z(t−1)², Δz(t−1)² as the best-performing independent variables. Performance is measured by the correlation of the logit-produced probabilities with the actual probabilities.
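For readers who want the mechanics, a bare-bones logit fit on these regressors can be sketched in numpy. The Newton-Raphson routine, the ridge term for numerical stability, and the simulated data are our illustrative choices, not the paper's procedure.

```python
import numpy as np

def logit_fit(X, y, iters=25, ridge=1e-6):
    """Logit maximum likelihood via Newton-Raphson. The small ridge term
    and the clipping of the linear index are stability guards we added."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ beta, -30, 30)
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None]) + ridge * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

def crisis_regressors(z):
    """Intercept plus z(t-1), dz(t-1), z(t-1)^2, dz(t-1)^2, aligned with a
    dependent variable dated t = 2, ..., len(z)-1."""
    dz = np.diff(z)
    zl, dzl = z[1:-1], dz[:-1]
    return np.column_stack([np.ones_like(zl), zl, dzl, zl**2, dzl**2])

# illustrative check on simulated data (not the paper's data)
rng = np.random.default_rng(3)
z = np.cumsum(rng.normal(0.01, 0.05, 300))
y = (z[2:] + rng.normal(0.0, 1.0, 298) > z.mean()).astype(float)
X = crisis_regressors(z)
beta = logit_fit(X, y)
p_hat = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
```

The fitted probabilities p_hat should on average be higher in crisis periods than in tranquil ones.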

Method 3. OLS method

The OLS method uses an ordinary least squares regression similar to the logit, but with two differences: the dependent variable is not limited by the econometrician, and the independent variables enter directly instead of through the logistic function. The three OLS regressions we study are:

g(t) = β0 + β1z(t − 1) + β2Δz(t − 1) + β3z(t − 1)² + β4Δz(t − 1)² + ε(t),  (14)

with g(t) = |Δs(t)|, |Δs(t)| + |Δ(1 − φ)r(t)|, and |Δs(t)| + κ·|Δ(1 − φ)r(t)|. The g(t) variable in the OLS regression is not converted to a binary variable.

We estimate g^(t), the fitted value of (14). The OLS regression also gives a vector of ε^ residuals. We use the same crisis definition in the OLS method we used for the logit method. Our OLS crisis probability is:

prob(ε̂ > sample mean(g) + 1.5·SD12(g) − ĝ(t))

In words, to find the probability of crisis next period from the OLS regression, we start at this period’s fitted value of the regression and add the full ordered vector of OLS residuals forming a vector of fitted plus “possible” residuals. We form the probability of crisis by counting the elements in that vector above the crisis cutoff definition and dividing by the number of elements in the constructed vector.
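The residual-counting procedure can be sketched directly. The helper below (our construction) fits the OLS regression, forms the vector of fitted-plus-possible-residual outcomes, and counts the share above the cutoff from definition (13).

```python
import numpy as np

def sd12(g, window=12):
    """Backward-looking 12-period rolling standard deviation."""
    out = np.full(len(g), np.nan)
    for t in range(window - 1, len(g)):
        out[t] = g[t - window + 1 : t + 1].std(ddof=0)
    return out

def ols_crisis_prob(X, g, weight=1.5):
    """Fit g on X by OLS, then for each period add the full residual vector
    to the fitted value and count the share of 'fitted + possible residual'
    outcomes above the crisis cutoff mean(g) + weight*SD12(g)."""
    beta, *_ = np.linalg.lstsq(X, g, rcond=None)
    fitted = X @ beta
    resid = g - fitted
    cutoff = g.mean() + weight * sd12(g)
    probs = np.array([(fitted[t] + resid > cutoff[t]).mean()
                      for t in range(len(g))])
    probs[np.isnan(cutoff)] = np.nan   # undefined before the window fills
    return probs

# illustrative check: a trending g should give rising crisis probabilities
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 100)
g = x + rng.normal(0.0, 0.02, 100)
X = np.column_stack([np.ones_like(x), x])
probs = ols_crisis_prob(X, g)
```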

V. Accuracy of the Estimated Probabilities

Estimated probabilities of a currency crisis are compared with the actual crisis probabilities in three ways: (1) correlation, (2) mean absolute error, and (3) an OLS regression of actual probabilities on estimated probabilities. Where appropriate, information on the medians is included. All statistics for the structural estimation characterize the distribution of estimates for 1,000 episodes. For OLS and logit, we also estimate the statistics as a panel, thereby increasing the sample size by a factor of 1,000.14
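The three comparison statistics can be computed with a few lines of numpy. The function below is an illustrative sketch (ours), with a perfectly unbiased estimator as a sanity check.

```python
import numpy as np

def evaluate_probs(actual, estimated):
    """Correlation, mean absolute error, and an OLS regression of actual on
    estimated probabilities, returning (corr, mae, beta0, beta1, r2)."""
    corr = np.corrcoef(actual, estimated)[0, 1]
    mae = np.mean(np.abs(actual - estimated))
    X = np.column_stack([np.ones_like(estimated), estimated])
    coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
    resid = actual - X @ coef
    r2 = 1.0 - resid.var() / actual.var()
    return corr, mae, coef[0], coef[1], r2

# sanity check: an unbiased estimator gives beta0 ~ 0, beta1 ~ 1, R2 ~ 1
est = np.linspace(0.05, 0.95, 50)
act = est.copy()
print(evaluate_probs(act, est))
```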

Tables 1 and 2 present correlation, absolute-error, and regression statistics characterizing the distribution of pre-crisis estimated vs. actual crisis probabilities for the periods 1974:Q4–1986:Q4 and 1990:Q1–2008:Q4, respectively. Each table consists of four panels. Panels A–C differ in the assumed government policy response at the time of the crisis: in Panel A the policy response is the Blanco and Garber devaluation rule; in Panel B it is a 50 percent devaluation; in Panel C it is a switch from a fixed exchange rate to a freely floating rate. In Panels A–C the dependent variable in the OLS and logit regressions is the absolute value of the exchange-rate change. In Panel D of the tables we report the results for the Blanco and Garber devaluation, but with a Girton and Roper-style exchange-market pressure index as the dependent variable for the OLS and logit regressions.

Table 1.

Results for 1974:Q4-1986:Q4

[Table panels not reproduced]
Table 2.

Results for 1990:Q1-2008:Q4

[Table panels not reproduced]

From a quick inspection of the Tables, it is clear that the panels within each table differ little from each other. The tables themselves, however, differ quite a lot.

Consider first Panel 1.A in Table 1, where the policy response to the crisis is currency devaluation following the Blanco and Garber–style devaluation rule. The first column of the panel lists the estimation method. The next group of three columns summarizes the distribution (mean, standard deviation and median) of the correlation of estimated probabilities with actual probabilities for each estimation method. The second group of three columns summarizes the distribution (mean, standard deviation and median) of the absolute error, actual minus estimated, for each estimation method. The third group of three columns—listed below the others—characterizes the distribution of statistics from the following regression:

actual prob(j, t) = β0,j + β1,j · estimated prob(j, t) + η(j, t).

Each regression yields estimates β̂0,j, β̂1,j, and R²j for j = 1, …, 1,000.

The results in Panel 1.A are typical of those in Table 1. The mean correlation with the actual probabilities (standard deviations in parentheses) varies from .7711 (.1758) for the logit (panel) vs. actual to .4212 (.2275) for the OLS vs. actual. The estimators with higher correlation look better than those with lower correlation, but the standard deviations of the estimated correlations are sufficiently high that we do not make much of the correlation differences.

The average absolute probability error statistics are more mixed across methods. The logit estimates have the smallest mean absolute errors but they are not significantly smaller than the structural errors. Both the logit and structural errors are, however, quite a lot smaller than the OLS errors, all having small standard deviations compared to their means and differences in means.

The linear regression statistics give another view of the estimators. The point of the regressions is to give the estimated probabilities two free parameters that can be used to correct systematic linear differences (biases) relative to the actual probabilities. We see that the β0, which should be zero, are small compared to their standard deviations. The β1, which should be unity, give the slope corrections needed to match the estimated probabilities to the actual ones. The coefficients range from .3517 (.1800) for OLS to .9619 (.1482) for the logit (panel). The OLS coefficient of .3517, along with a zero constant, indicates that the OLS estimate is too high and too volatile. The logit panel coefficient of .9619 with a near-zero constant indicates that the logit panel estimate is a very good one, with little bias and little excess sensitivity. The R² statistics are less precisely estimated than the regression coefficients but are consistent with logit producing the best estimates.

Table 1, panels 1.B–1.D are consistent with our findings from panel 1.A. We conclude that the policy responses we have studied make little difference to the econometric methods’ ability to predict the crises. The choice of dependent variable is unimportant also.

So why does logit do so well? The method is, after all, throwing away information when it requires the econometrician to censor a perfectly good continuous dependent variable. It turns out that the logistic functional form (think of a flattened and tipped “S” trapped between zero and unity) exploits the pre-crisis nonlinearity that we see (for uniform shocks) in Figure 1. Recall also that we do not weight any errors the econometric methods make once a crisis has taken place. (Who cares?) The OLS estimators have the same regressors as the logit, but we have constrained the fitted OLS to be linear in the same fundamentals. It is surprising (to us) that the logit estimates also do better, in almost every dimension, than the structural estimates. The structural method was given the great advantage of being “told” to estimate the parameters of exactly the model we used to generate the data. It turns out that the “downfall” of the structural method is in estimating alpha, the semi-elasticity of money demand. When we give the structural method the correct alpha, so it has only to estimate the forcing-process parameters, the structural method does no worse than logit.

Our results are different in Table 2, where we parameterize the z(t) process from Mexico 1990:Q1–2008:Q4 data. All the econometric methods do badly. Indeed, in Table 2 Panel 2.A we see mean absolute errors for the logit estimators over 0.5. Recall that probabilities are trapped between zero and unity—so this is really pretty bad. When we look at the “Estimation Results” part of the table we see that the logit estimates have large biases, both in the constant term and in the slope. The other panels of Table 2 give similar results. All of the estimators do poorly.

That the estimators do poorly is no surprise. They are all based on the same fundamental. In Figure 2 we see that there was no sustained and forecastable run up in the fundamental before the 1994–95 Mexican crisis. Therefore, although the actual probabilities are also fundamental based, our efforts to mimic the movements of those actual probabilities are very imprecise.

VI. Conclusion

The econometric methods we study are pretty good at predicting the currency crises we generate —some of the time. In particular, when the crisis is of the type where a well-identified fundamental drives the system to crisis, the estimated probabilities mimic the actual ones quite well. The panel logit estimator—using our post-crisis coding—distinguishes itself particularly in this environment. When the fundamental is merely highly variable, but not explosive, the logit method does quite a bit worse than the other methods in one dimension. But when the fundamental is not pushing the market into crisis, no method does well since all are fundamentals-based.


  • Blanco, Herminio and Peter Garber, 1986, “Recurrent Devaluation and Speculative Attacks on the Mexican Peso,” Journal of Political Economy Vol. 94, No. 1, pp. 148–66.
  • Cumby, Robert and Sweder van Wijnbergen, 1989, “Financial Policy and Speculative Runs with a Crawling Peg: Argentina 1979–1981,” Journal of International Economics Vol. 27, pp. 111–27.
  • Eichengreen, Barry and Charles Wyplosz, 1993, “The Unstable EMS,” Brookings Papers on Economic Activity Vol. 1, pp. 51–143.
  • Eichengreen, Barry, Andrew Rose, and Charles Wyplosz, 1995, “Exchange Market Mayhem: The Antecedents and Aftermath of Speculative Attacks,” Economic Policy Vol. 21, pp. 249–312.
  • Flood, Robert and Peter Garber, 1984, “Collapsing Exchange-Rate Regimes: Some Linear Examples,” Journal of International Economics Vol. 17, pp. 1–13.
  • Flood, Robert and Nancy Marion, 1999, “Perspectives on the Recent Currency Crisis Literature,” International Journal of Finance and Economics Vol. 4, No. 1, pp. 1–26; reprinted in Money, Capital Mobility, and Trade: Essays in Honor of Robert A. Mundell, edited by Guillermo Calvo, Rudi Dornbusch and Maurice Obstfeld, Cambridge: MIT Press, 2001.
  • Flood, Robert and Nancy Marion, 2000, “Self-Fulfilling Risk Predictions: An Application to Speculative Attacks,” Journal of International Economics, Jubilee Issue, Vol. 50, No. 1, pp. 245–68.
  • Frankel, Jeffrey and Andrew Rose, 1996, “Currency Crashes in Emerging Markets: An Empirical Treatment,” Journal of International Economics Vol. 41, pp. 351–66.
  • Girton, Lance and Don Roper, 1977, “A Monetary Model of Exchange Market Pressure Applied to Postwar Canadian Experience,” American Economic Review Vol. 67, pp. 537–48.
  • Goldberg, Linda, 1994, “Predicting Exchange Rate Crises: Mexico Revisited,” Journal of International Economics Vol. 36, pp. 413–30.
  • Krugman, Paul, 1979, “A Model of Balance of Payments Crises,” Journal of Money, Credit and Banking Vol. 11, pp. 311–25.
  • Obstfeld, Maurice, 1984, “Balance of Payments Crises and Devaluation,” Journal of Money, Credit and Banking Vol. 16, pp. 208–17.
  • Obstfeld, Maurice, 1986, “Rational and Self-Fulfilling Balance-of-Payments Crises,” American Economic Review Vol. 76, pp. 72–81.
  • Obstfeld, Maurice, 1994, “The Logic of Currency Crises,” Cahiers Économiques et Monétaires, Bank of France Vol. 43, pp. 189–213.
  • Salant, Steve and Dale Henderson, 1978, “Market Anticipations of Government Policies and the Price of Gold,” Journal of Political Economy Vol. 86, pp. 627–48.


Data used

r(t) = ln central bank reserves of foreign exchange

d(t) = ln domestic credit

p(t) = ln CPI

p*(t) = ln foreign (US) CPI

s(t) = ln domestic-currency price of US dollar

i(t) = domestic-currency US t-bill equivalent

i*(t) = US 90-day t-bill interest rate.

y(t) = ln real output

The data are sampled quarterly or interpolated from annual data, all from the IFS.


Robert Flood, Economics Department, Notre Dame; Nancy Marion, Economics Department, Dartmouth; Juan Yepez, Economics Department, Notre Dame. The authors thank Stijn Claessens for comments.


The intellectual foundation for studying crises was built by Salant and Henderson (1978) in their work on gold price-fixing schemes. Krugman (1979) then developed a model of a perfect-foresight switch from a fixed to a floating exchange rate. Flood and Garber (1984) analyzed a stochastic version of an attack on a fixed exchange rate. Obstfeld (1984) applied a perfect-foresight version of these models to the case of devaluation.


For example, see Obstfeld (1986, 1994), Eichengreen and Wyplosz (1993), and Flood and Marion (2000).


There is nothing special about setting r(t) = 0 as the lower bound. Blanco and Garber set the reserve lower bound at an arbitrary constant and then estimate the constant. Since we are generating the data, the reserve lower bound is a free parameter.


We also ran some of our estimates with 2500 episodes and it made little difference.


In the WP version of this paper, available on Nancy Marion’s website, we give the expressions we used to generate the endogenous variables and the procedure we used to generate crisis probabilities.


We set s̄j = .5·(s̃(j, 1) + s̃(j, T)). We retain only episodes in which exactly one crisis occurs.


Blanco and Garber (1986, p. 153). The coefficient 1.96 is the estimate in BG. We generated data also for 25% devaluations with no important differences.


For simplicity, we include γy(t) in the directly observable part of the fundamental z(t) using the BG estimate for γ.


When the exchange rate is fixed, its variance is zero; at the crisis, the variance increases because the rate switches to either the shadow exchange rate or a new but devalued fixed rate.


These measures are based on those used by Eichengreen, Rose and Wyplosz (1995).


We tried the weights 1, 1.5, 2, 2.5. The weight 1.5 delivered the best correlations of actual with estimated.


There is no agreed-upon post-crisis coding in the literature. We settled on coding the binary variable as “0” before the crisis and “1” at the crisis and afterwards because this method maximized the correlation of the logit-estimated crisis probabilities with the actual crisis probabilities when we estimated the z(t) process for 1974Q4–2008Q4.


In the current report, the panel estimates are done once, so the standard errors do NOT come from an empirical distribution.
