Interview with Steven Phillips: Study examines record of program projections: how clear is the crystal ball?

Forecasts attract a lot of attention, particularly when they go wrong. The IMF’s forecasting record has received its share of scrutiny, much of it devoted to forecasts published in the World Economic Outlook. A new IMF Working Paper, coauthored by Steven Phillips, Senior Economist in the IMF’s Western Hemisphere Department, and Alberto Musso, Researcher at the European University Institute (based on work undertaken when both were in the IMF’s Policy Development and Review Department), looks instead at the track record of projections made as part of IMF-supported programs. Phillips speaks with Prakash Loungani about their findings.

Loungani: How did you get interested in seeing whether IMF projections were any good?

Phillips: Around the time of the Asian crisis, there was much comment on the gap between original projections for the programs there and what came to pass. Alberto and I wanted to see what the track record was for recent program projections in general. We studied projections in some 70 IMF-supported programs—both Stand-By and Extended Arrangements.

Loungani: When you look at a forecasting record, what makes you say, “that’s a good record”?

Phillips: There are several dimensions to the question, gauged by a standard set of tests. One test looks for a pattern of bias—do the forecasts tend to lean in one direction or the other; are they too optimistic or too pessimistic in general? Another test concerns efficiency: if a review of past forecasting shows that errors were not just random but could have been predicted, the forecasts are not efficient; it would have been possible to do better. And the most obvious test, of course, is accuracy: how close was the projection to the outcome, in comparison to alternative forecasts?
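The three tests Phillips describes can be sketched in a few lines of Python. This is an illustration only, not the paper's methodology: the function name, the hypothetical data, and the choice of a naive "no-change" benchmark are all assumptions made here for clarity.

```python
import numpy as np

def evaluate_forecasts(projections, outcomes):
    """Illustrative versions of the three standard forecast tests:
    bias, efficiency, and accuracy."""
    proj = np.asarray(projections, dtype=float)
    out = np.asarray(outcomes, dtype=float)
    errors = out - proj

    # Bias: is the mean forecast error significantly different from zero?
    n = len(errors)
    mean_err = errors.mean()
    bias_t = mean_err / (errors.std(ddof=1) / np.sqrt(n))

    # Efficiency (Mincer-Zarnowitz style): regress outcomes on projections.
    # An efficient forecast has intercept near 0 and slope near 1, so that
    # errors cannot be predicted from the projections themselves.
    slope, intercept = np.polyfit(proj, out, 1)

    # Accuracy: root mean squared error, compared with a naive benchmark
    # that simply repeats the previous outcome.
    rmse = np.sqrt((errors ** 2).mean())
    naive_errors = out[1:] - out[:-1]
    naive_rmse = np.sqrt((naive_errors ** 2).mean())

    return {"mean_error": mean_err, "bias_t": bias_t,
            "mz_intercept": intercept, "mz_slope": slope,
            "rmse": rmse, "naive_rmse": naive_rmse}
```

A forecast record would "pass" this sketch if the bias t-statistic is small, the regression coefficients are close to (0, 1), and the RMSE beats the naive benchmark.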

We applied these standard forecast evaluation techniques to projections associated with IMF-supported programs, though we recognize that program projections may not be intended to be pure forecasts. A pure forecast would aim, to put it a bit technically, at the unconditional mean of all possible outcomes. But program projections may be the outcome of negotiations with policymakers in the program country, and they may be conditional on assumptions about the policies pursued during the program. Our paper discusses other problems in treating program projections as forecasts—the bottom line is a call for caution in interpreting our findings.

Loungani: Which variables did you use?

Phillips: We went with a handful of variables central to any program: growth, inflation, and three different balance of payments concepts. These influence the projections for many other variables in the program. Projections of tax revenues, for example, feed off the nominal income forecasts—the growth and inflation forecasts.

Loungani: How did we do? A Heritage Foundation study claimed our growth forecasts for the World Economic Outlook tended to be too optimistic.

Phillips: Well, we didn’t find any such bias in the program projections. There wasn’t a tendency to be too optimistic or too pessimistic. And we weren’t able to predict projection errors; the projections seemed efficient. The growth projections also passed our statistical tests of accuracy.

Loungani: That’s surprising; I would have expected a bit of optimism in our growth projections.

Phillips: That seems to be a common perception; maybe more attention is given to cases where growth falls short. We did find optimism in growth forecasts for “big” programs—that is, programs for larger countries or programs in which the IMF money involved was atypically large. These also tended to be programs in emerging markets, countries that had access to foreign private capital. Perhaps, in these cases, the immediate cause of the crisis tended to be a capital account problem.

Loungani: In these countries, was growth dependent on whether capital stayed in? Was the program then predicated on confidence being restored?

Phillips: Our study doesn’t get into why some projections go wrong, but, as you suggested, in capital account crises two quite different outcomes may both be plausible. Just splitting the difference might not make sense; you have to pick one scenario or the other. And in some cases, you end up making the wrong choice.

Loungani: Let’s move on to inflation.

Phillips: Here, the news wasn’t as good: inflation outcomes exceeded projections. Inflation may have fallen, but programs tended to be optimistic about how quickly this would happen. This kind of result has been published before, though maybe our tests were a bit more rigorous. But here again, it proved important to look at subsets of our sample. We found the bias was coming from projections made for countries with unusually high inflation before the program started. We couldn’t find significant bias in other programs.

Loungani: The IMF must have done well on external sector projections.

Phillips: Well, for both the current and capital accounts of the balance of payments, we didn’t find any bias. But accuracy was weak. These balances may be fundamentally difficult to predict. I should mention that we also looked at the directional accuracy of IMF projections. With one exception, we found that program projections were good at predicting which way things would move. The one exception was capital account projections.
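The directional-accuracy check Phillips mentions can be sketched as the share of cases in which the projected direction of change matched the realized one. Again, this is an illustrative sketch, not the paper's actual procedure; the function name and the example arrays are hypothetical.

```python
import numpy as np

def directional_accuracy(prev_outcomes, projections, outcomes):
    """Fraction of cases where the projected change (relative to the
    pre-program value) has the same sign as the realized change."""
    prev = np.asarray(prev_outcomes, dtype=float)
    proj_change = np.asarray(projections, dtype=float) - prev
    real_change = np.asarray(outcomes, dtype=float) - prev
    hits = np.sign(proj_change) == np.sign(real_change)
    return hits.mean()
```

A value near 1 means the projections almost always called the direction of movement correctly; a value near 0.5 means they did no better than a coin flip.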

Loungani: Is this because, as we discussed earlier, in capital account crises two quite different outcomes may be plausible, and the scenario you pick may be the wrong one?

Phillips: It’s possible. By the way, in practice, program projections are revised or updated quite often, so switching scenarios, if necessary, is always possible. But our study only looked at initial projections—that is, the projections made when the programs were first approved by the IMF’s Executive Board.

Loungani: Are you worried critics will seize upon these results and say, “there goes the IMF again, making errors”?

Phillips: We found just what a person would expect—a mixed bag of results—some favorable, some pointing to areas for improvement. If people want to focus only on the latter, they will. And they won’t need our results, since the kinds of projections we studied are routinely made public. What would be constructive is if people intrigued by our findings try to build on them and test among competing explanations of why we got the results we did. Really, interpreting our results usefully requires that kind of follow-up work.

Loungani: A last word?

Phillips: In the end, what you really want to know is how well IMF-supported programs work. In other words, how do program outcomes compare with what would have happened in the absence of a program or with the implementation of a different program? Of course, we haven’t answered that. Our study just compares program outcomes with a set of numbers called projections. Still, I hope that some of the stylized facts we’ve found may ultimately be useful, at least in providing clues that can be used to fine-tune program design.

Copies of IMF Working Paper 01/45, Comparing Projections and Outcomes of IMF-Supported Programs, by Alberto Musso and Steven Phillips, are available for $10.00 from IMF Publication Services (see page 180 for ordering details). Working Papers are also posted on the IMF’s website (www.imf.org).

IMF Survey, Volume 30, Issue 10
Author: International Monetary Fund. External Relations Dept.