Copycats and Common Swings: The Impact of the Use of Forecasts in Information Sets


Abstract

This paper presents evidence, using data from Consensus Forecasts, that there is an "attraction" to conform to the mean forecast; in other words, views expressed by other forecasters in the previous period influence individuals' current forecasts. The paper then discusses and provides further evidence on two important implications of this finding. The first is that the detected imitation behavior may severely affect the forecasting performance of these groups and lead the forecasts to converge to a value that is not the "right" target. Second, since the forecasts are not independent, the common practice of using the standard deviation of the forecasts' distribution as if it were the standard error of the estimated mean is not warranted.

The standard theory of time series forecasting involves a variety of components including the choice of an information set, the choice of a cost function, and the evaluation of forecasts in terms of the average costs of the forecast errors. It is generally acknowledged that by including more relevant information in the information set, one should be able to produce better forecasts. To get better forecasts, however, one has to learn from previous mistakes.

The so-called macroeconomic consensus forecasts (polling various individuals or agencies to express their views and later releasing their figures, taking the unweighted mean as the consensus) exclude such a possibility, since a sequence of forecasts is being made of one fixed point in the future, which, each month, moves one month closer. One difficulty is that these forecasts cannot be immediately evaluated, as it will be several months before the actual value being forecast is observed. However, the process allows for a monthly revision of a prediction by incorporating new information released during the month and by evaluating one's own forecast in comparison with those made by others.

Several authors have analyzed the performance of forecasters when they act in a group and raise several issues. To quote just a few: Zarnowitz and Lambros (1987) distinguish between "consensus," in terms of the agreement among point forecasts made by several individuals, and "uncertainty," in terms of the variability of each point prediction; Spiro (1989) studies Canadian data and finds a reluctance by forecasters to predict large changes; conservatism and group pressure are studied by Batchelor and Dua (1992); Lamont (1995) leans toward an agency explanation in wondering whether the released forecasts reflect true expectations or are used to manipulate beliefs about the forecaster's ability; Graham (1996) claims that a group of forecasters performs better than one, and that a median forecaster outperforms a naive forecasting strategy; more recently, Laster, Bennett, and Geoum (1999) develop a theoretical model in which the insertion of a bias in the forecasts arises naturally as a result of an effort by the forecasters to maximize their reward.

In this paper we concentrate on the fact that, by comparing forecasts, one may be led to wonder whether his or her own is too different from the forecasts of others or from the consensus of the group. This could suggest that the cost function being used to select one's forecast gives weight to how well the forecaster performs relative to others rather than to what is relevant to users of the forecast. The results presented here suggest that forecasters do pay a great deal of attention to the output of other forecasters and, consequently, may all produce unsatisfactory results. While the first outcome is to be expected under any model of rational information use, the second one is of major interest because it suggests that individuals may feel that other forecasters' private information is more relevant and, as a result, move to the wrong target as a group. Moreover, one cannot use the distribution of individual forecasts as if it were made of independent draws.

There are substantial differences between these forecasts and the ones expressed in financial markets (e.g., Graham, 1999), in that the latter translate into asset allocation recommendations that may affect the price of the assets; the reputation (and ultimate survival) of the forecaster rests on the outcome of his or her recommendations. In a macroeconomic framework, the link between these forecasts and the realized value of the variable of interest is fairly tenuous, and reputation plays a toned-down role. For these reasons, one of the goals of this paper is to investigate the empirical clustering exhibited by these multiperiod forecasts rather than testing a specific model of herding behavior borrowed from the finance literature.

We examine how to evaluate the multiperiod forecasts collected and published by such polling agencies, using the data from Consensus Forecast as an example. This question has many interesting aspects related to the formation of expectations about the future behavior of macroeconomic variables and to the evolution of agents’ beliefs on such behavior.

I. Copycats and Common Swings

We use a survey of forecasts, Consensus Forecasts,1 focusing on four recent years (1993-96) and three major countries: the United States, the United Kingdom, and Japan. For the case at hand, starting from January, each individual forecaster produces two point forecasts for a number of macroeconomic variables, one for the current year's annual percentage change and the other for the following year's. For the month of January, these correspond to a 12-step-ahead forecast and a 24-step-ahead forecast, respectively. The next month, February, each forecaster produces analogous values, which are now 11-step-ahead and 23-step-ahead forecasts, and so on. There is no guarantee that the forecasting group will have the same composition each month, since a few forecasters (not always the same ones) do not report to the agency. To fix ideas, note that the 24-step-ahead forecast of the annual change of a variable in 1993 is made in January 1992, and the 1-step-ahead forecast is made in December 1993. In January 1994, preliminary values of 1993 growth are usually released, with further revisions occurring over a time span that varies from country to country before the data are considered final.
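
As a small illustration of this indexing convention (the function and its interface are ours, not part of the survey documentation), the horizon j can be computed from the survey date and the target year:

```python
# A minimal sketch of the horizon convention: j counts the months remaining
# from the survey month to December of the target year.

def horizon(target_year: int, survey_year: int, survey_month: int) -> int:
    """Number of steps ahead (j) for a forecast of `target_year`
    made in month `survey_month` of `survey_year`."""
    return 12 * (target_year - survey_year) + (12 - survey_month) + 1

# Examples matching the text:
assert horizon(1993, 1992, 1) == 24   # January 1992 -> 24-step-ahead forecast of 1993
assert horizon(1993, 1993, 1) == 12   # January 1993 -> 12-step-ahead
assert horizon(1993, 1993, 12) == 1   # December 1993 -> 1-step-ahead
```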

Let us focus on GDP growth as an example. In Figures 1 through 3 we show the general appearance of the time series profile of the forecasts.2 Each figure (one per country) contains four panels (one per year) related to the forecasts of real GDP growth. In each panel the suffix refers to the year, and the prefixes refer, respectively, to the smallest forecast (LOW), the consensus forecast minus the group standard deviation within the month (DN), the consensus forecast plus the group standard deviation within the month (UP), and the largest forecast (HIGH). Although in what follows we question the statistical meaning of DN and UP, practitioners pay attention to them to get an impression of the dispersion and possible skewness of the forecasts. One immediate finding is that the spread of forecasts within the group decreases as the forecast horizon shortens; that is, the agreement among forecasters in the group increases as we get closer to the end date.
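
As a rough sketch of how the plotted series can be constructed (with hypothetical data; the table layout and random numbers are ours, not the Consensus Forecasts file format), the LOW/DN/UP/HIGH bands follow from a month-by-forecaster table as below:

```python
import numpy as np
import pandas as pd

# Hypothetical panel: rows are survey months, columns are forecasters.
rng = np.random.default_rng(0)
forecasts = pd.DataFrame(
    rng.normal(loc=2.5, scale=0.4, size=(24, 15)),
    index=pd.period_range("1992-01", periods=24, freq="M"),
)

# LOW/HIGH are the extreme forecasts; DN/UP are the group mean minus/plus one
# group standard deviation, computed month by month as in the note to Figure 1.
bands = pd.DataFrame({
    "LOW":  forecasts.min(axis=1),
    "DN":   forecasts.mean(axis=1) - forecasts.std(axis=1, ddof=1),
    "MEAN": forecasts.mean(axis=1),
    "UP":   forecasts.mean(axis=1) + forecasts.std(axis=1, ddof=1),
    "HIGH": forecasts.max(axis=1),
})
print(bands.head())
```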

Figure 1.

Multistep Forecasting of GDP Growth Rates: United States


Note: The suffixes (93, 94, 95, 96) refer to the year, LOW is the lowest, HIGH is the highest forecast in the group; DN and UP are the group mean minus, respectively, plus one group standard deviation, month by month.

The second stylized fact we want to call attention to is that at the 12-step-ahead horizon we often observe a relatively large shift in the mean of the forecasts. Since this happens in January, it may be the outcome of the release of new (preliminary and revised) figures for the previous years and, hence, of new information obtained from evaluating the previous year's forecast.

Figure 2.

Multistep Forecasting of GDP Growth Rate: United Kingdom


Note: The suffixes (93, 94, 95, 96) refer to the year, LOW is the lowest, HIGH is the highest forecast in the group; DN and UP are the group mean minus, respectively, plus one group standard deviation, month by month.
Figure 3.

Multistep Forecasting of GDP Growth Rate: Japan


Note: The suffixes (93, 94, 95, 96) refer to the year, LOW is the lowest, HIGH is the highest forecast in the group; DN and UP are the group mean minus, respectively, plus one group standard deviation, month by month.

This would mean that one substantial source of forecast revision comes from the attempt by forecasters to steer their current forecasts, learning from previous years’ mistakes (in particular the most recent one). Note that most forecasts are of annual growth rates, so that the denominator changes when a new value becomes available.3

With the exception of just this turn-of-the-year "common swing," the degree of persistence in group forecasting can be regarded as an empirical regularity. In the uncertainty of where the target will be, forecasters move together as a group. This fact has some interesting consequences. First, there seems to be little point in buying macroeconomic forecasts from many group members. In fact, even a combination of such forecasts (with positive weights on each individual forecast) does not remove the bias that exists in a single forecast. The July 12, 1992 issue of the (London) Sunday Times had an article entitled "Mavericks Win Hands Down in Forecasts Game" with a subtitle reading "Britain's Top Economists Do Battle over Their Record," which implied that economists barred from the favored inner circle give warnings that are ignored. In the event of real changes in the macroeconomic environment, an individual forecaster might be assumed to have a relatively low probability of observing the change. Among a large group of forecasters, however, it would be likely that some fraction will detect the change. The detected group "collaboration," though, may increase the likelihood that changes in the economic environment are ignored, as the group opinion tends to dominate. In some countries this group opinion is actually the result of most forecasters adapting their forecasts to those of a leader. As noted by Granger (1996), for example, there seems to be a strong tendency for the U.K. forecasting groups to cluster around the Treasury forecast, a phenomenon that will not be analyzed here, since the timing of the consensus exercise is such that individuals know only the previous month's forecasts and are not supposed to know what other forecasters' opinions are at the moment of computing their forecast.

One complementary question to this view about the formation of group forecasting is whether there is any benefit to being regarded as an outlier in formulating the forecasts. In stock markets, being a contrarian may pay off. The forecaster who, on average, produces "strange" forecasts, but is occasionally right, can garner increased attention from the market participants and consumers of forecasts. The macroeconomic forecasting framework, however, is quite different from that of financial forecasts, and thus, the strategy of being alone or an outlier may not prove to be a good one.

A final interesting point is the choice of cost function made by macroeconomic forecasters. A commonly used cost function for evaluating forecasts has the (absolute or squared) deviations of forecasts from actual values as arguments. During the forecasting period, no such evaluation can be made, and the cost function varies over the forecasting horizon. If the cost function employed by forecasters allocates a certain weight to the mean forecast of the group, this may cause group forecasts to move together. Traditional forecasting cost functions do not emphasize this effect. Heuristically, macroeconomic forecasters may place substantial weight on the forecast values that others have produced rather than full weight on newly arrived information. In the next section we suggest a simple model that may capture some of these empirical regularities.

II. The Evolution of Beliefs

There is the question of the arrival of public news that may alter individual views on the evolution of a macroeconomic variable. The published forecasts (month by month) reflect the availability of an in-house predictor (the realization of which is unknown), an implicit judgement about the published forecasts of the previous period, and the arrival of new information in the form of macroeconomic announcements (data releases or updates). In light of the empirical evidence outlined above, starting from an initial forecast, the way subsequent individual forecasts change is a mixture of the above elements, and it is, therefore, difficult for the econometrician to disentangle, in a new forecast, what is new information and what is an imitation effect.

For the case at hand, what is being predicted is a variable of interest $y_T$, measured at yearly intervals,4 for which there are J published forecasts by company i, i = 1, ..., N, starting in January of year T-1 and ending in December of year T. Carrying along these time indices may be misleading when comparing different years, and hence, although there is no natural choice of notation (cf. the setup suggested by Davies and Lahiri, 1995, for three-dimensional panel data), one possibility is $y^{i}_{T,j}$ for the forecast published by company i, i = 1, ..., $N_j$,5 with j periods to the target, j = 24, ..., 1 (i.e., the closer the forecast is to the realization, the smaller j is). Let us define the following quantities of interest:

$$\bar{y}_{T,j} = \frac{1}{N_j}\sum_{i=1}^{N_j} y^{i}_{T,j}$$

is the group average of the j-periods-ahead forecasts; and

$$\sigma^{2}_{T,j} = \frac{1}{N_j - 1}\sum_{i=1}^{N_j}\left(y^{i}_{T,j} - \bar{y}_{T,j}\right)^{2}$$

is the group variance of the j-periods-ahead forecasts.

As one has J = 24 sample points (at least potentially) for each company, we suggest a model for the evolution of the forecasting behavior of the single company built around the idea that:

  • there should be persistence in one's own most recent forecast $y^{i}_{T,j+1}$,

  • there should be an imitation effect of the average belief expressed in the previous period (but known only in the current month), $\bar{y}_{T,j+1}$, and

  • there should be an effect due to a desire to move closer together as the time horizon decreases, which can be captured by a measure of the dispersion of the individual forecasts in the previous period, $\sigma_{T,j+1}$.6

The resulting expression is

$$y^{i}_{T,j} = \alpha + w^{i}_{1}\,y^{i}_{T,j+1} + w^{i}_{2}\,\bar{y}_{T,j+1} + w^{i}_{3}\,\sigma_{T,j+1} + u^{i}_{T,j}, \qquad (1)$$

for j = 23, 22, ..., 1. The first coefficient, $w^{i}_{1}$, measures the persistence of one's own forecasts: the closer its value is to 1, the less likely it is, other things being equal, that the company changes its mind in subsequent forecasts. The sign of $w^{i}_{2}$ signals whether the movements of the subsequent individual forecasts are in agreement with the observed movements in the group average or not; in other words, a negative sign conveys the idea that the company tends to choose a direction different from the one taken by the group average. Finally, a negative sign for the coefficient of the dispersion variable, $w^{i}_{3}$, would capture the empirical regularity that the individual forecasts tend to be less dispersed around their mean the shorter the remaining forecasting horizon is.
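
As an illustration of how equation (1) can be taken to the data company by company, the sketch below runs the corresponding OLS regressions on an unbalanced panel. The data layout, column names, and use of statsmodels are our own assumptions for exposition, not the authors' actual estimation code (the panel and SUR issues are discussed below).

```python
import pandas as pd
import statsmodels.api as sm

# Sketch: estimate equation (1) by OLS for each company.
# `panel` is assumed to have columns: company, target_year, horizon (j), forecast.

def estimate_eq1(panel: pd.DataFrame) -> pd.DataFrame:
    # Group mean and standard deviation at each (target year, horizon).
    grp = panel.groupby(["target_year", "horizon"])["forecast"]
    panel = panel.assign(group_mean=grp.transform("mean"),
                         group_std=grp.transform("std"))

    # Previous-period (j+1) regressors: sort so that j runs 24, 23, ..., 1
    # within each company/target year, then lag by one survey month
    # (assumes consecutive monthly reports; gaps are simply dropped).
    panel = panel.sort_values(["company", "target_year", "horizon"],
                              ascending=[True, True, False])
    lags = (panel.groupby(["company", "target_year"])
                 [["forecast", "group_mean", "group_std"]].shift(1))
    data = panel.assign(own_lag=lags["forecast"],
                        mean_lag=lags["group_mean"],
                        std_lag=lags["group_std"]).dropna()

    estimates = {}
    for cid, df in data.groupby("company"):
        X = sm.add_constant(df[["own_lag", "mean_lag", "std_lag"]])
        estimates[cid] = sm.OLS(df["forecast"], X).fit().params  # alpha, w1, w2, w3
    return pd.DataFrame(estimates).T
```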

The following reparameterization is helpful in measuring an additional effect related to the group mean:

$$y^{i}_{T,j} - \bar{y}_{T,j+1} = \alpha + w^{i}_{1}\left(y^{i}_{T,j+1} - \bar{y}_{T,j+1}\right) + \left(w^{i}_{1} + w^{i}_{2} - 1\right)\bar{y}_{T,j+1} + w^{i}_{3}\,\sigma_{T,j+1} + u^{i}_{T,j}. \qquad (2)$$

In this context, the choice of forecast j periods ahead can be expected to lie persistently on the same side of the sample mean ($w^{i}_{1} > 0$), with a negative expected sign for the coefficient of $\bar{y}_{T,j+1}$, since the assumption that the individual forecast reverts to the group mean requires both that the coefficient of the group mean be significant and that the group mean act to decrease the distance between the current forecast and the previous period's group mean.

In the sequel, we will refer to this effect as shrinking to the mean. Accordingly, we will discuss the significance of the parameter $w^{i}_{2}$ for the average forecast, which signals the presence of an imitation effect, separately from the (negative) significance of the parameter $w^{i}_{1} + w^{i}_{2} - 1$, which signals that movements in the average forecast actually bring about a decrease in the distance between the individual forecast and the average. The latter alone, accompanied by a high persistence in one's own views, could actually signal a convergence of the group mean to the individual forecast. Consider as an illustration the following case: one company forecasts high growth and persists in that forecast (not reacting to the group average), while the other participants slowly adjust to a high growth forecast, pushing up the mean and therefore decreasing its distance from the individual forecast. The coefficient $w^{i}_{1} + w^{i}_{2} - 1$ would be negative, but the imitation behavior of that company is absent. The crucial aspect here is that the coefficient for the group average is zero.
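
To see that (2) is just an algebraic rearrangement of (1), subtract $\bar{y}_{T,j+1}$ from both sides of (1) and add and subtract $w^{i}_{1}\bar{y}_{T,j+1}$:

```latex
\begin{aligned}
y^{i}_{T,j} - \bar{y}_{T,j+1}
  &= \alpha + w^{i}_{1}\,y^{i}_{T,j+1} + \bigl(w^{i}_{2}-1\bigr)\bar{y}_{T,j+1}
     + w^{i}_{3}\,\sigma_{T,j+1} + u^{i}_{T,j} \\
  &= \alpha + w^{i}_{1}\bigl(y^{i}_{T,j+1}-\bar{y}_{T,j+1}\bigr)
     + \bigl(w^{i}_{1}+w^{i}_{2}-1\bigr)\bar{y}_{T,j+1}
     + w^{i}_{3}\,\sigma_{T,j+1} + u^{i}_{T,j}.
\end{aligned}
```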

The disturbance term $u^{i}_{T,j}$ contains all innovations to the individual predictions coming from sources other than the group mean or the forecast dispersion. The absence of serial correlation detected in the estimated residuals strengthens the idea that systematic deviations from the mean, which would signal "individualism," are not present in the data.

Estimation takes advantage of the (unbalanced) panel structure of the data. In order to avoid issues related to less-than-regular participation in the survey, all companies that reported their forecasts for 58 periods or fewer (out of the 96 surveyed, that is, 4 years times 24 periods) were excluded from the sample. This leads to 1,279 sample points for the U.S. (17 companies), 752 sample points for Japan (10 companies), and 2,429 sample points for the U.K. (31 companies). For each country the model is estimated by both the Seemingly Unrelated Regression (SUR) and the Ordinary Least Squares (OLS) estimators to detect a possible contemporaneous correlation structure across the disturbances. A good specification of our model with respect to the imitation behavior would imply that the restrictions imposed by OLS should be accepted if the relationship captures all the interaction across companies. This is indeed the case for all three countries, and thus, in what follows, we will refer to the OLS estimation results only.

The question arises naturally as to whether the estimated coefficients are constant across companies in each country or not. Here the results are quite different across countries, in that for the U.K. and Japan the null hypothesis of constancy is rejected, while for the U.S. it is not. Thus, for the first two countries, we will make reference to the estimation company by company, reporting, as a leading example, the complete details on the U.K. estimation of equation (1) in Table 1. We will briefly comment on the Japanese results and the pooled estimates for the U.S. below.
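
One way such a constancy (poolability) hypothesis can be checked is sketched below: a Chow-type F statistic comparing the pooled regression with the company-by-company regressions. The arrays, the helper name, and the use of statsmodels are our own assumptions for illustration; the paper does not spell out the exact test statistic employed.

```python
import numpy as np
import statsmodels.api as sm

# Chow-type test of equal coefficients across companies:
# H0: one common coefficient vector (pooled OLS) vs.
# H1: a separate coefficient vector for each company.

def poolability_f_test(y, X, company):
    """y: (n,) array, X: (n, k) regressor matrix including a constant,
    company: (n,) array of company labels."""
    n, k = X.shape
    rss_pooled = sm.OLS(y, X).fit().ssr

    rss_separate, n_groups = 0.0, 0
    for c in np.unique(company):
        idx = company == c
        rss_separate += sm.OLS(y[idx], X[idx]).fit().ssr
        n_groups += 1

    q = k * (n_groups - 1)          # number of equality restrictions
    df_resid = n - k * n_groups     # residual df under the unrestricted model
    f_stat = ((rss_pooled - rss_separate) / q) / (rss_separate / df_resid)
    return f_stat, q, df_resid
```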

Table 1.

Full Estimation Results for the U.K. Companies

Boldface indicates a coefficient significant at the 5 percent level. Model (1) is the model where the current forecast is regressed on a constant, the lagged forecast, the lagged group average, and the lagged group standard deviation (see equation (1)); Model (2) is the model where the deviation of the current forecast from the lagged group average is regressed on a constant, the lagged forecast, the lagged group average, and the lagged group standard deviation (see equation (2)). A shrink effect is present in Model (2) when the lagged group average has a negative impact on the dependent variable.

The table is organized by isolating the British companies in a first group, leaving the U.K. subsidiaries of foreign companies in a second group. Each company is identified by a code that does not disclose its identity, which is irrelevant for the discussion. Next to the company ID, we report the parameter estimates of the lagged forecast, the group mean, and the group standard deviation. The constant is estimated, but we do not report it, as it is significant in only three cases. In boldface are the values that are significantly different from zero at the 5 percent level. As an additional piece of information, next to the group mean column we report whether the shrinking effect discussed above is present, in the form of the coefficient for the group mean in the reparameterized Model (2) being significantly negative. The final two columns report the adjusted R2 of the two estimated expressions.

Overall, the estimation results confirm the high explanatory power of the model, with adjusted R2 ranging from 0.71 to 0.97 (median 0.91). In the reparameterized model, where the dependent variable changes, the explanatory power is somewhat smaller and ranges from 0.10 to 0.89 (median 0.58). The results show that the persistence effect of the company's own past forecast is very strong from a statistical point of view: all coefficients are significantly different from zero, albeit to varying degrees, and fairly evenly distributed between 0.41 and 1.07 (the latter not significantly different from 1.00). It is interesting to note that the five companies for which this parameter is above 0.9 have an estimated coefficient for the group mean that is not significant (in one case it is significantly negative). Notice that among these, a few exhibit a significant reduction of the distance from the group mean, which falls under the case illustrated above. For the other companies, the coefficients for the group mean are significantly positive in 19 cases and are not accompanied by a significant negative effect in the corresponding coefficient of the other parameterization.

Notice that the distinction between U.K. and non-U.K. companies provides the interesting indication that the latter have fewer significant coefficients for the group mean and none for the group standard deviation, signaling less group behavior.

For Japan, the results are similar (details are available on request), with the explanatory power of the models (measured by the adjusted R2) ranging from 0.69 to 0.89 (median 0.86). The reparameterized model has lower explanatory power, with an adjusted R2 between 0.09 and 0.75 (median 0.29). The degree of estimated persistence seems to be smaller on average than for the U.K. (ranging from 0.43 to 0.81) and is insignificant in two cases, in which the coefficient on the group average is significantly different from zero, interpretable as a sign of strong imitation behavior. For the significantly persistent forecasts (corresponding to foreign companies), the coefficient of the group mean is not significant, while a significant negative impact of the standard deviation is present.

For the U.S., since the restrictions of equal coefficients cannot be rejected, the impression is that the imitation behavior is quite strong. The estimated original Model (1) for U.S. GDP growth is as follows:

$$y^{i}_{T,j} = \underset{(8.36)}{0.38} + \underset{(47.86)}{0.85}\,y^{i}_{T,j+1} + \underset{(3.36)}{0.04}\,\bar{y}_{T,j+1} - \underset{(3.74)}{0.20}\,\sigma_{T,j+1}, \qquad (3)$$

where t-statistics appear in parentheses and the adjusted R2 is equal to 0.79. The reparameterized Model (2) is estimated as:

$$y^{i}_{T,j} - \bar{y}_{T,j+1} = \underset{(8.36)}{0.38} + \underset{(47.86)}{0.85}\left(y^{i}_{T,j+1} - \bar{y}_{T,j+1}\right) - \underset{(7.32)}{0.10}\,\bar{y}_{T,j+1} - \underset{(3.74)}{0.20}\,\sigma_{T,j+1}, \qquad (4)$$

with an adjusted R2 equal to 0.83. We note that all coefficients are significant, that the average degree of persistence is quite high (0.85), that the impact of the mean (0.04) is generally lower than that found for the U.K. and Japan, and that the impact of the dispersion measure is negative, as implied by our model.
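
Note that, up to rounding of the reported coefficients, the estimated coefficient on $\bar{y}_{T,j+1}$ in (4) is consistent with the relation implied by the reparameterization in (2):

```latex
w_{1} + w_{2} - 1 \;=\; 0.85 + 0.04 - 1 \;=\; -0.11 \;\approx\; -0.10 .
```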

III. Which Target Are They Aiming At?

As mentioned above, the second fundamental question of interest is where the forecasts converge. Here we do not take a definite stance on any a priori assumptions about what the "right" target is or should be. The approach we follow is to examine several variables surveyed among the U.S. companies.7 We report in Table 2 the range of the last-month (one-period-ahead) forecasts of the yearly data recorded by Consensus Forecast, as the lowest recorded, the highest recorded, and the consensus forecast.8 We immediately notice that there is a great deal of variety in the disagreement related to these forecasts. Not surprisingly, for some variables, such as growth rates of prices (the Consumer Price Index, the Producer Price Index, or the wage bill) or unemployment, the consensus exercise is less spread out, given the degree of persistence in the variable, the greater timeliness with which the data are available, and the smaller uncertainty reflected by negligible, if any, data revisions. For other variables (investments and corporate profits, for example), divergence of views, especially in certain years, seems to reflect the higher variability of the phenomena and the increased difficulty in measuring them. A trace of the distinction between "consensus" and uncertainty put forth by Zarnowitz and Lambros (1987) can be found here. The two variables that seem to be predicted in similar ways by the participants are GDP growth and consumption growth, for which the range of the forecasts is not very wide.

Table 2.

Range of Disagreement at the End of the Consensus Exercise by Year and Variable: United States

The table reports the range of the last-month (one-period-ahead) forecasts of the yearly data recorded by Consensus Forecast. We report the lowest, the highest, and the group average, by year and by variable.

In this light, whether there is convergence to an agreement or not, we feel that the question of where the forecasts are converging is still an open one. Runkle (1998) discusses the magnitude of the difference between the initial estimate and the final revision of the official U.S. estimate of output growth and suggests that researchers must use the data that were available when policy decisions were actually made. The difference between preliminary and revised data is crucial when evaluating forecasting accuracy (cf. Batchelor and Dua, 1998). As examples of a wide array of experiences, which vary across variables (and countries), in Figures 4 through 6 we depict the growth rates of GDP, Industrial Production, and Corporate Profits as they have become available through time (labeled DATA), from the date of first publication to March 1999.9

Figure 4.

U.S. GDP: Data Revision and One-Step-Ahead Forecast


Note: DATA refers to the growth rate data as they have become available through successive revisions, plotted against the group average one-step-ahead forecast (MEAN) and a forecast band around it (LOW and HIGH).
Figure 5.

U.S. Industrial Production: Data Revision and One-Step-Ahead Forecast


Note: DATA refers to the growth rate data as they have become available through successive revisions, plotted against the group average one-step-ahead forecast (MEAN) and a forecast band around it (LOW and HIGH).
Figure 6.

U.S. Corporate Profits: Data Revision and One-Step-Ahead Forecast


Note: DATA refers to the growth rate data as they have become available through successive revisions, plotted against the group average one-step-ahead forecast (MEAN) and a forecast band around it (LOW and HIGH).

For these variables, the forecast range for the last period of the poll (the one reported in Table 2) is plotted as a constant band (labeled LOW and HIGH) around the group mean forecast (labeled MEAN) against the actual data available month by month. The published values may vary a lot through time, as is clear from these graphs, and hence, with hindsight, a benevolent evaluation could always argue in favor of one or the other revision as the "true" target. The fact that the "hit" or "miss" pattern also changes from one year to the next may signify that certain years are plainly more difficult to forecast than others, but this happens whether the preliminary figures are accompanied by wide successive revisions (reflecting uncertainty in the data collection process, cf. GDP in 1993 in Figure 4) or not (cf. GDP again in 1995). Sometimes the forecast range is so wide that the "true" values cannot fall outside of it (e.g., corporate profits or industrial production for 1993). Other times the forecasts can be judged as being on target for the first few revisions, while later revisions take the "true" value outside what was being forecast (e.g., GDP in 1994 and 1996, industrial production for 1996). In many instances, the consensus misses altogether either preliminary or subsequent revised data (cf. 1995 as an example of such a year), even as a range: in view of the imitation results stressed in the previous section of this paper, this is not surprising and indicates that under specific (but unforeseeable ex ante) circumstances, widespread uncertainty about the growth rate of a variable may generate some "perverse" behavior that leads the group wildly off target.
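
As a small sketch of this "hit or miss" reading (with made-up numbers, not the actual releases or forecast ranges), one can check whether each successive published value falls inside the last-month forecast range:

```python
import pandas as pd

# Compare successive data releases for one target year against the
# one-step-ahead forecast range [LOW, HIGH] of the kind reported in Table 2.

def hit_or_miss(releases: pd.Series, low: float, high: float) -> pd.Series:
    """releases: published growth rates indexed by release vintage."""
    return releases.between(low, high).map({True: "hit", False: "miss"})

# Hypothetical example: a forecast range of [2.6, 3.1] percent for GDP growth.
releases = pd.Series([2.9, 3.0, 3.3],
                     index=["first release", "revision 1", "revision 2"])
print(hit_or_miss(releases, low=2.6, high=3.1))
```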

IV. Concluding Remarks

When a forecaster makes a series of forecasts of a distant but fixed future date, updating them each month as new information arises, the standard theories do not apply because evaluations of the forecasts cannot yet be used. However, forecasters can note how far their prognostications differ from those of other forecasters. We find not only evidence of this effect but suspect that too much attention is given by forecasters to activists in the group and insufficient attention to the state of the economy. It seems clear that full agreement within a group of forecasts should not be viewed as evidence of a particularly high-quality forecast, just as the variability across forecasts cannot be used as a measure of the uncertainty of the forecast.

  • Batchelor, R., and P. Dua, 1992, “Conservatism and Consensus-Seeking Among Economic Forecasters,” Journal of Forecasting, 11, pp. 169–81.

  • Batchelor, R., and P. Dua, 1998, “Improving Macro-Economic Forecasts: The Role of Consumer Confidence,” International Journal of Forecasting, 14, pp. 71–81.

  • Davies, A., and K. Lahiri, 1995, “A New Framework for Analyzing Survey Forecasts Using Three-Dimensional Panel Data,” Journal of Econometrics, 68, pp. 205–27.

  • Graham, J.R., 1996, “Is a Group of Economists Better than One? Than None?” Journal of Business, 69, pp. 193–232.

  • Graham, J.R., 1999, “Herding among Investment Newsletters: Theory and Evidence,” The Journal of Finance, 54, pp. 237–68.

  • Granger, C.W.J., 1996, “Can We Improve the Perceived Quality of Economic Forecasts?” Journal of Applied Econometrics, 11, pp. 455–73.

  • Laster, D., P. Bennett, and I.S. Geoum, 1999, “Rational Bias in Macroeconomic Forecasts,” Quarterly Journal of Economics, 114, pp. 293–318.

  • Lamont, O., 1995, “Macroeconomic Forecasts and Microeconomic Forecasters,” NBER Working Paper 5284 (Cambridge, Massachusetts: National Bureau of Economic Research).

  • Runkle, D.E., 1998, “Revisionist History: How Data Revisions Distort Economic Policy Research,” Federal Reserve Bank of Minneapolis Quarterly Review, 22, pp. 3–12.

  • Spiro, P.S., 1989, “Improving a Group Forecast by Removing the Conservative Bias in Its Components,” International Journal of Forecasting, 5, pp. 127–31.

  • Zarnowitz, V., and L.A. Lambros, 1987, “Consensus and Uncertainty in Economic Prediction,” Journal of Political Economy, 95, pp. 591–621.

*

Giampiero M. Gallo is Professor of Econometrics at the University of Firenze, Italy; Clive W. J. Granger is Professor of Economics at the University of California, San Diego; and Yongil Jeon is Assistant Professor of Economics at Central Michigan University in Mount Pleasant, Michigan. Thanks are due to Frank Diebold and to Prakash Loungani for insightful comments on the paper. We are grateful to Deutsche Bank Research for providing the Consensus Forecast dataset used here. The U.S. macroeconomic data were collated from the Economic Indicators Bulletin of the Council of Economic Advisers. The Japanese GDP growth data were kindly provided by Yuki Hirai from the Japanese Economic Planning Agency and Iichiro Uesugi. The preliminary data for the U.K. were collected by G. Colicigno from the Economic Records of H.M. Central Statistical Office. We also thank B. Paye for useful discussions. Financial support from NSF grant SER-9708615 and from the Italian MURST and CNR is appreciated.

1

Every month, a company established in the U.K., Consensus Economics Inc. (http://www.consensusecon.com/index.htm), conducts a poll among financial and economic forecasters in more than 70 countries surveying their estimates of a range of economic variables.

2

Other macroeconomic variables show similar behavior. The related figures are omitted for the sake of brevity and are available at http://weber.ucsd.edu/~yjeon or upon request.

3

Frank Diebold correctly pointed out to us that some readers may “legitimately worry about data snooping biases.” We do indeed peek at the figures and try and rationalize the behavior via our simple model. Due to the limits of the data available we are not able to perform a full-fledged analysis of the potential biases in the inference procedure.

4

GDP is available also at quarterly intervals, but the insertion of the information available at subannual intervals did not produce any fundamentally different results, as data uncertainty extends also to the seasonal adjustment procedure.

5

Recall that each month there may be a different number of firms reporting (hence Nj) the forecast for the year T.

6

Other measures, such as the group variance or the highest-lowest range, were inserted but led to worse results in terms of overall explanatory power.

7

This should serve as an illustration of the points made here; similar cases can be made for the U.K. and Japan, but would not add substantial arguments.

8

The median of the group does not differ from the mean by much.

9

The values were computed from the collation of available figures on the Economic Indicators bulletin prepared by the U.S. Council of Economic Advisers. Other figures for the remaining variables are available at http://weber.ucsd.edu/~yjeon or upon request.
