Berger, Allen N., and Sally M. Davies, 1998, “The Information Content of Bank Examinations,” Journal of Financial Services Research 14, pp. 117-144.
Berger, Allen N., Sally M. Davies, and Mark J. Flannery, 1998, “Comparing Market and Supervisory Assessments of Bank Performance: Who Knows What When?,” Finance and Economics Discussion Paper 98/32, Board of Governors of the Federal Reserve System.
Bhattacharya, Utpal, Hazem Daouk, Brian Jorgenson, and Carl-Heinrich Kehr, 1998, “When is an Event not an Event: The Curious Case of an Emerging Market,” forthcoming in Journal of Financial Economics.
Billett, Matthew T., Jon A. Garfinkel, and Edward S. O’Neal, 1998, “The Cost of Market Versus Regulatory Discipline in Banking,” Journal of Financial Economics 48, pp. 333-358.
Crabbe, Leland, and Mitchell A. Post, 1994, “The Effect of a Rating Downgrade on Outstanding Commercial Paper,” Journal of Finance 49, pp. 39-56.
Dichev, Ilia D., and Joseph D. Piotroski, 1998, “The Long-run Stock Returns Following Bond Ratings Changes,” unpublished paper, University of Michigan.
Flannery, Mark J., and Sorin M. Sorescu, 1996, “Evidence of Bank Market Discipline in Subordinated Debenture Yields: 1983-1991,” Journal of Finance 51, pp. 1347-1377.
Glascock, John L., Wallace N. Davidson, III, and Glenn V. Henderson, Jr., 1987, “Announcement Effects of Moody’s Bond Rating Changes on Equity Returns,” Quarterly Journal of Business and Economics 26, pp. 67-78.
Goh, Jeremy C., and Louis H. Ederington, 1993, “Is a Bond Rating Downgrade Bad News, Good News, or No News for Stockholders?,” Journal of Finance 48, pp. 2001-2008.
Hand, John R.M., Robert W. Holthausen, and Richard W. Leftwich, 1992, “The Effect of Bond Rating Agency Announcements on Bond and Stock Prices,” Journal of Finance 47, pp. 733-752.
Hannan, Timothy H., and Gerald A. Hanweck, 1988, “Bank Insolvency Risk and the Market for Large Certificates of Deposit,” Journal of Money, Credit and Banking 20, pp. 203-211.
Holthausen, Robert W., and Richard W. Leftwich, 1986, “The Effect of Bond Rating Changes on Common Stock Prices,” Journal of Financial Economics 17, pp. 57-89.
Martinez Peria, Maria Soledad, and Sergio Schmukler, 1999, “Do Depositors Punish Banks?,” Policy Research Working Paper 2058 (Washington, D.C.: World Bank).
Pinches, George E., and J. Clay Singleton, 1978, “The Adjustment of Stock Prices to Bond Rating Changes,” Journal of Finance 33, pp. 29-44.
Schweitzer, Robert, Samuel H. Szewczyk, and Raj Varma, 1992, “Bond Rating Agencies and their Role in Bank Market Discipline,” Journal of Financial Services Research 6, pp. 249-263.
Simons, Katerina, and Stephen Cross, 1991, “Do Capital Markets Predict Problems in Large Commercial Banks?,” New England Economic Review (May/June), pp. 51-56.
Research Department, International Monetary Fund, and Société Générale, Paris, respectively. We are grateful to Torbjörn Becker, Donald Mathieson and other colleagues in the Emerging Markets Studies Division for comments on an earlier draft. The views expressed in this paper are those of the authors and do not necessarily reflect those of their employers.
See, e.g., Glascock, Davidson and Henderson (1987), Goh and Ederington (1993), and Hand, Holthausen and Leftwich (1992). It should be noted, however, that several early event studies, typically using monthly data, failed to find any announcement-period impact on stock prices: for example, Pinches and Singleton (1978) attributed this result to the stock market having already reflected the information in rating changes over the previous 15-18 months. However, Holthausen and Leftwich (1986), who find evidence of an announcement effect with daily data, contrast their finding with the earlier studies, suggesting that daily data permit more powerful tests than monthly data and that the ratings agencies may have improved their performance over time.
In addition, a recent study by Dichev and Piotroski (1998) of long-term post-announcement stock performance suggests that post-announcement abnormal returns may also be far larger than announcement-period abnormal returns: those authors find that the stocks of companies whose debt is upgraded outperform downgraded stocks for at least a year after the ratings change, with economically and statistically significant return differentials of 10 percent or more. This return differential is due largely to the return performance of small, low-rated firms, presumably reflecting information problems with these firms.
Billett et al. (1998) show that banks increase their reliance on insured funding around downgrades, which they interpret as a shift towards (less demanding) regulatory monitoring and away from (more risk-sensitive) market monitoring at times of increasing risk. Crabbe and Post (1994) also document a related shift in bank funding around downgrades. A number of studies, including Hannan and Hanweck (1988), and Flannery and Sorescu (1996), show that the interest rates paid by banks on uninsured deposits contain a significant risk premium, thus implying that there is a role for market discipline. Further, studies of depositor behavior—including a study by Martinez Peria and Schmukler (1999) of banks in Argentina, Chile and Mexico—frequently find that depositors punish risky banks by withdrawing deposits.
The sources for these ratings changes were Fitch IBCA’s “CreditDisc”, Moody’s “Global Credit Research” disc, and Standard & Poor’s “Credit Analysis Reference Disc.” Some supplementary ratings information was also obtained from Bloomberg Financial Markets, L.P.
There were actually 16 clean upgrades, but estimation-period abnormal returns for one event (an upgrade of Banespa, a Brazilian bank, in 1990) were so much more volatile than the rest of the sample of upgrades and downgrades that it was excluded to reduce standard errors and improve the power of the tests. There was no attempt to exclude events based on announcement-window data, as this would raise more serious questions about selection bias.
We do not examine the 5 upgrades that meet the “near-clean” criterion, as the sample is too small for any robust inference.
Among all the clean and contaminated ratings changes, only 8 appear to be for banks that had (as of late 1998) issued ADRs. Due to this small number, it was not possible to examine any hypotheses about differences in the behavior of different classes of bank investors.
See, for example, MacKinlay (1997) for a recent survey article on event study methodology. In preliminary work, we also used a joint time-series cross-section system approach (i.e., in calendar-time rather than event-time) but the results were no more promising than those reported here.
We tested for first-order autocorrelation and first-order ARCH effects in each market-adjusted returns series and found only infrequent evidence of these, so the assumption that the test statistics approach their respective asymptotic distributions may be a reasonable one. The assumption that individual abnormal returns are normally distributed is less well supported by the data, but the grouping of events into portfolios substantially reduces any problems from nonnormality. Finally, the events being examined are reasonably well distributed over time with relatively little clustering—at least in the clean samples—so there is little need to correct for this.
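A minimal sketch of such diagnostics, using simulated data (the paper does not specify its test implementation, so the regression-based t-statistics below are one standard way to test for these effects):

```python
import numpy as np

def slope_tstat(x, y):
    """OLS slope t-statistic for the regression y = a + b*x + e."""
    x_c, y_c = x - x.mean(), y - y.mean()
    b = (x_c @ y_c) / (x_c @ x_c)
    resid = y_c - b * x_c
    s2 = (resid @ resid) / (len(y) - 2)
    return b / np.sqrt(s2 / (x_c @ x_c))

def ar1_and_arch1_tstats(r):
    """First-order autocorrelation and first-order ARCH t-statistics
    for a (market-adjusted) returns series."""
    r = np.asarray(r, dtype=float)
    t_ar1 = slope_tstat(r[:-1], r[1:])        # regress r_t on r_{t-1}
    e2 = (r - r.mean()) ** 2
    t_arch1 = slope_tstat(e2[:-1], e2[1:])    # squared deviation on its own lag
    return t_ar1, t_arch1

rng = np.random.default_rng(0)
iid = rng.standard_normal(500)                # no AR or ARCH effects by design
persistent = np.empty(500)                    # strong AR(1) process, phi = 0.5
persistent[0] = 0.0
for t in range(1, 500):
    persistent[t] = 0.5 * persistent[t - 1] + rng.standard_normal()

print(ar1_and_arch1_tstats(iid))              # both small in absolute value
print(ar1_and_arch1_tstats(persistent))       # large AR(1) t-statistic
```

In the i.i.d. series both t-statistics are insignificant, while the persistent series produces a clearly significant AR(1) statistic, illustrating the kind of screening described above.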
If all ratings upgrades (downgrades) had the same effect at the same time on returns, or occurred in response to news that had the same impact on returns, one would expect to see a difference around ratings announcements in the level of returns but not in the cross-sectional standard deviation, which is shown in Figure 1. Given that these conditions are unlikely to hold, we would expect to see an increase in cross-sectional dispersion in returns around upgrades and downgrades if there is some linkage—in either direction—between ratings changes and stock market returns, albeit a smaller increase in dispersion than if all ratings changes—upgrades and downgrades—were lumped together without regard to direction. The latter case would be similar to the work of Bhattacharya et al. (1998), who test for increased return volatility around corporate news announcements without regard to whether the news should have a positive or negative effect on stock returns.
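The dispersion measure can be illustrated with a short sketch on simulated data (the event count and window length here are illustrative, not the paper's):

```python
import numpy as np

def cross_sectional_dispersion(event_returns):
    """event_returns: array of shape (n_events, n_weeks) of market-adjusted
    returns aligned in event time. Returns the cross-sectional standard
    deviation of returns at each event-time week."""
    return np.asarray(event_returns, dtype=float).std(axis=0, ddof=1)

# Simulated example: 40 events over weeks -3..+2 (columns 0..5; week 0 is
# column 3). An event-week shock that differs in size and sign across events
# raises dispersion in that week even if the average effect is near zero.
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.02, size=(40, 6))       # baseline weekly noise
returns[:, 3] += rng.normal(0.0, 0.05, size=40)     # heterogeneous event shock

disp = cross_sectional_dispersion(returns)
print(np.round(disp, 4))    # dispersion spikes at column 3 (event week)
```

This is exactly the pattern the text predicts: heterogeneous announcement effects show up in the cross-sectional standard deviation even when they wash out of the mean.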
The percentage return data cited here and elsewhere are log-differenced returns rather than exact percentage returns: the difference is minor, even for the cumulative returns in Figure 2.
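For concreteness, a small sketch of the distinction (the prices are made-up numbers):

```python
import math

prices = [100.0, 101.5, 99.8, 102.3]    # hypothetical weekly prices

# Exact percentage returns vs. log-differenced returns
exact = [p1 / p0 - 1 for p0, p1 in zip(prices, prices[1:])]
logret = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# For small weekly moves the two are nearly identical...
print([round(e - l, 5) for e, l in zip(exact, logret)])

# ...and log returns have the convenience that they cumulate by summation:
print(round(sum(logret), 6))
print(round(math.log(prices[-1] / prices[0]), 6))   # same value (telescoping)
```

The per-week gap between the two measures is a few hundredths of a percentage point for moves of this size, which is why the choice is immaterial even for the cumulative returns.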
Given that we cannot be sure that our data sources allowed us to capture all ratings changes that actually occurred, we cannot rule out the possibility that some of these supposedly “clean” downgrades were actually preceded by other downgrades in weeks -35 to -1.
When the sample is divided according to whether the downgrade occurred before or after the onset of the crisis, the smaller samples yield results that are even more difficult to interpret; in each case, however, the conclusion is that cumulative abnormal returns over the 6-week announcement window are very little different from zero.
See Billett et al. (1998) for an example of a study which uses a sample of all banks and various control variables to estimate the probability that a ratings change was expected for a particular bank that did experience a ratings change. A number of authors also split their samples into uncontaminated and contaminated subsamples based on whether there was other news about the company in question at the time of the ratings change.
One potentially important variable that was not available was the reason for the ratings change. Another variable which is not included in the regressions in Table 2 is the preannouncement runup in returns (i.e., the cumulative abnormal return for each stock in the three preannouncement weeks). This variable will have an expected positive sign if the degree of anticipation of ratings changes is the same for all banks. If this assumption is not correct, the expected sign of this variable is not clear. In results not reported here, the variable was never significant and did not substantially affect the parameter estimates for the other variables.
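As an illustration only (the data and sample size here are hypothetical, and these are not the actual Table 2 regressors), the runup variable could be constructed and added to a cross-sectional regression along these lines:

```python
import numpy as np

def ols_tstats(y, X):
    """OLS coefficients and conventional t-statistics; X includes a constant."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = (resid @ resid) / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta, beta / np.sqrt(np.diag(cov))

# Hypothetical event panel: weekly abnormal returns, weeks -3..+2 in columns
rng = np.random.default_rng(2)
ar = rng.normal(0.0, 0.02, size=(30, 6))

runup = ar[:, :3].sum(axis=1)        # cumulative AR over weeks -3..-1
window_car = ar[:, 3:].sum(axis=1)   # announcement-window cumulative AR

X = np.column_stack([np.ones(len(runup)), runup])
beta, tstats = ols_tstats(window_car, X)
print(beta, tstats)    # slope t-stat is insignificant for this random data
```

With purely random data the runup coefficient is insignificant, mirroring the reported (unpublished) result; under uniform anticipation of ratings changes, the slope would instead carry the expected positive sign.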
For some variables, most notably for the credit watch variable, we suspect that our coverage is less than complete, thus introducing the possibility of some measurement error into the explanatory variables.
The lack of success with this regression is not, however, without precedent: the regression in Schweitzer et al. (1992) to explain the magnitude of abnormal stock returns following bank downgrades contains three explanatory variables, with only one showing (borderline) significance.
This assumes that bank equityholders are not perceived to be protected by explicit or implicit government guarantees.