Information Rigidities: Comparing Average and Individual Forecasts for a Large International Panel
  • Jonas Dovern, Ulrich Fritsche, Prakash Loungani, and Natalia T. Tamirisa1
  • International Monetary Fund


Abstract

We study forecasts for real GDP growth using a large panel of individual forecasts from 36 advanced and emerging economies during 1989–2010. We show that the degree of information rigidity in average forecasts is substantially higher than that in individual forecasts. Individual-level forecasts are updated quite frequently, a behavior more in line with “noisy” information models (Woodford, 2002; Sims, 2003) than with the assumptions of the sticky information model (Mankiw and Reis, 2002). While there are cross-country variations in information rigidity, there is no systematic difference between advanced and emerging economies.

I. Introduction

How people form expectations is a central concern in macroeconomics. Over the past decade, two main classes of theories have emerged on the formation of expectations. The first, due to Mankiw and Reis (2002), states that forecasters update their information sets infrequently because there are fixed (“menu”) costs of acquiring information. In the second, developed in Woodford (2002) and Sims (2003), forecasters continually update their information sets but receive noisy signals about the true state of the economy. Both theories generate information rigidities, that is, departures from full information rational expectations. In an important set of papers, Coibion and Gorodnichenko (2010, 2012) showed that canonical versions of both theories—dubbed, respectively, the ‘sticky information’ and ‘imperfect information’ models—predict that average forecast errors should be correlated with the past forecast revision.

In this paper, we first show that there is an equivalent way to document information rigidities, which is to look at the correlation between the current forecast revision and the past forecast revision. Because it does not require the construction of forecast errors, our test has the advantage that the econometrician does not have to take a stand on, or to collect, various versions of ex post data (e.g. initial releases of GDP vs. final estimates). This is particularly helpful when evidence for a large number of countries is being analyzed, which is the case in this paper. We find that, for most countries, the degree of information rigidity in average forecasts using our approach is similar to that reported by Coibion and Gorodnichenko.

A second contribution of our paper is to try to distinguish between the sticky information and noisy information explanations of information rigidity. We do this by looking not just at average or consensus forecasts (i.e. forecasts aggregated across a number of forecasters) but also at individual-level forecasts. We find that, at the individual level, forecasts are updated quite frequently, which is more consistent with the noisy information class of theories than with the behavior of forecasters assumed under the sticky information model, where agents do not change their forecasts for extended periods due to menu costs.2

The broad country coverage of our study provides an opportunity also to compare the extent of smoothing in forecasts for advanced and emerging economies. We find that, for both average and individual-level forecasts, the degree of information rigidity varies across countries. However, there is heterogeneity within advanced countries as well as within emerging market economies; we do not find any systematic differences in the extent of information rigidity between the two groups of countries.

The paper is organized as follows: Section II discusses the methodology for testing for the degree of forecast smoothing using average and individual forecast data. Section III describes our data on international growth forecasts and highlights some important stylized facts. Section IV presents the empirical results. The last section concludes.

II. Methodology for Testing for Forecast Smoothing

A. Average Forecasts

The test for forecast smoothing (forecast efficiency) exploits the fact that we have a sequence of forecasts for the same event, viz., annual real GDP growth. That is, we have a sequence of (average) forecasts $\bar{F}_{i,t,h}$ for country $i$ and target year $t$ made at horizons $h = 24, 23, \dots, 1$. Furthermore, let $\bar{r}_{i,t,h} = \bar{F}_{i,t,h} - \bar{F}_{i,t,h+k^*}$ denote the revision of the average forecast computed over $k^*$ months. We set $k^* = 3$ throughout this paper; this value is a reasonable choice to balance the trade-off between losing too much of the high-frequency dynamics (for larger values of $k^*$) and sampling too many “zero revisions,” which arise because some forecasters update their forecasts only quarterly (for lower values of $k^*$).

Under the null of full information rational expectations, the sequence of forecasts for one event must follow a martingale process. Nordhaus (1987) proposed a test that is based on regressing the contemporaneous revision on lagged forecast revisions:

$$\bar{r}_{i,t,h} = \beta_i + \lambda \bar{r}_{i,t,h+k} + u_{i,t,h} \qquad (1)$$

where k ≥ k* has to hold to avoid moving average effects in the residuals of the regression.3 If λ = 0, forecasts are (weakly) efficient. Otherwise, forecast revisions are correlated, and the null hypothesis of forecast efficiency is rejected.
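
To make the test concrete, the following minimal sketch (in Python, using pandas and statsmodels) constructs 3-month revisions of the consensus forecast and estimates equation (1) with country fixed effects. The input file and column names (`country`, `target_year`, `horizon`, `avg_forecast`) are hypothetical, and the paper’s Driscoll–Kraay standard-error correction is omitted here for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per (country, target_year, horizon) with the average
# (consensus) forecast of annual real GDP growth; horizons run monthly from 24 down to 1.
df = pd.read_csv("consensus_forecasts.csv")
df = df.sort_values(["country", "target_year", "horizon"],
                    ascending=[True, True, False])

K = 3  # revisions are computed over k* = 3 months, as in the paper

# Revision at horizon h: F(h) - F(h + K), within each country/target-year block.
df["rev"] = df["avg_forecast"] - df.groupby(["country", "target_year"])["avg_forecast"].shift(K)
# Lagged revision, i.e. the revision made k = 3 months earlier (at horizon h + K).
df["rev_lag"] = df.groupby(["country", "target_year"])["rev"].shift(K)

# Equation (1): regress the current revision on its lag; country dummies play the
# role of the country fixed effects beta_i.
res = smf.ols("rev ~ rev_lag + C(country)",
              data=df.dropna(subset=["rev", "rev_lag"])).fit()
print(res.params["rev_lag"])  # estimate of lambda, the information-rigidity parameter
```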

Reis (2006) shows that under sticky information the average forecast for an event $x_{i,t}$ is a weighted average of the lagged average forecast and the current rational expectation of the event:

$$\bar{F}_{i,t,h} = \lambda \bar{F}_{i,t,h+k} + (1-\lambda)\left[x_{i,t} + v_{i,t,h}\right], \qquad (2)$$

where $v_{i,t,h}$ is the rational expectations error. It follows that

$$\bar{r}_{i,t,h} = \bar{F}_{i,t,h} - \bar{F}_{i,t,h+k} = \lambda \bar{r}_{i,t,h+k} + (1-\lambda)\left[v_{i,t,h} - v_{i,t,h+k}\right] = \lambda \bar{r}_{i,t,h+k} + u_{i,t,h}. \qquad (3)$$

Thus, the regression coefficient from equation (1) translates directly into the degree of information rigidity in the sticky information framework (e.g. Mankiw and Reis, 2002, or Reis, 2006).

Likewise, in the “noisy” information framework (e.g. Woodford, 2002, and Sims, 2003), the degree of informational rigidity can be inferred directly from the parameter estimates of equation (1). Coibion and Gorodnichenko (2012) show that, under the assumption of a standard loss function, agents optimally use the Kalman filter to update their forecasts in each period as

$$\bar{F}_{i,t,h} = (1-G)\,\bar{F}_{i,t,h+k} + G\left[x_{i,t} + \omega_{i,t,h}\right], \qquad (4)$$

where $\omega_{i,t,h}$ is the noise component of the information that agents have about the event $x_{i,t}$ at a particular point in time. Evidently, the formulation is very similar to equation (2). It follows that in the imperfect information framework, too, the parameter $\lambda$ is equal to the degree of informational rigidity, which is given by $1-G$ in the theoretical model.

An alternative test of forecast efficiency suggested by Nordhaus (1987) is to regress forecast errors—rather than contemporaneous revisions as in equation (1)—on past revisions. The two tests are equivalent, i.e. the alternative test equation also yields estimates of the degree of informational rigidities (Coibion and Gorodnichenko, 2010). The main advantage of using our specification is that it does not rely on the actual outcomes and, hence, side-steps the issue of what vintage of the actual data to use in computing the forecast error.

It is reasonable to expect that information rigidities vary over the forecast horizon. They might, for instance, be more pronounced at longer horizons because (under sticky information) agents might have fewer resources available to obtain information relevant for forecast updating4 at a high frequency and/or (under imperfect information) face noisier signals and, hence, place less weight on new information. While these two arguments suggest that the degree of information rigidity increases monotonically with the forecast horizon, there might be other effects at work that are non-monotonic functions of the forecast horizon (e.g. differences across institutions in their forecast cycles).

To examine empirically how the degree of forecast smoothing changes over the forecast horizon, we add interaction terms between forecast horizons and lagged revisions in equation (1). The resulting specification is:

$$\bar{r}_{i,t,h} = \beta_i + \lambda \bar{r}_{i,t,h+k} + \sum_m \lambda_m I(h_m)\,\bar{r}_{i,t,h+k} + u_{i,t,h}, \qquad (5)$$

where all variables are defined as above, $m$ is the index for the interaction terms of forecast revisions and horizons, and $I(h_m)$ is an indicator function that equals 1 if the horizon of an observation is equal to $h_m$ and 0 otherwise. The coefficients on the interaction terms are expected to be positive and rising with the forecast horizon.

We estimate the fixed-effect panel data model using the ordinary least squares (OLS) estimator. Since our data set potentially has a complicated correlation structure due to the three dimensions of the data, we correct standard errors by the method suggested by Driscoll and Kraay (1998), which does not require strong assumptions on the form of cross-sectional and temporal correlation in the error terms. Since the time dimension of our panel data set is large, the Nickell (1981) bias, which is of the order 1/T, is likely to be only of modest size.
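
Building on the previous sketch, the horizon-interaction specification in equation (5) can be estimated roughly as follows. The column names are again hypothetical; in place of the Driscoll–Kraay correction used in the paper, the sketch falls back on country-clustered standard errors, a weaker but readily available option.

```python
import statsmodels.formula.api as smf

HORIZONS = [1, 4, 7, 10, 13, 16]  # horizons used in the paper's estimations
est = df[df["horizon"].isin(HORIZONS)].dropna(subset=["rev", "rev_lag"]).copy()

# Interaction terms I(h = h_m) * lagged revision, with h = 1 as the base horizon.
for h in HORIZONS[1:]:
    est[f"rev_lag_h{h}"] = est["rev_lag"] * (est["horizon"] == h)

rhs = " + ".join(["rev_lag"] + [f"rev_lag_h{h}" for h in HORIZONS[1:]] + ["C(country)"])

# The paper corrects standard errors with the Driscoll-Kraay (1998) method; clustering
# by country is used here only as a simple stand-in for that correction.
res = smf.ols(f"rev ~ {rhs}", data=est).fit(
    cov_type="cluster", cov_kwds={"groups": est["country"]})
print(res.summary())
```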

B. Individual Forecasts

Testing efficiency of individual forecasts is analogous to the test for average forecasts. An individual forecast version of equation (5) is given by:

$$r_{j,i,t,h} = \beta_{j,i} + \lambda r_{j,i,t,h+k} + \sum_m \lambda_m I(h_m)\,r_{j,i,t,h+k} + u_{j,i,t,h}, \qquad (6)$$

where $r_{j,i,t,h}$ is the revision of an individual forecast by forecaster $j$ for country $i$ and target year $t$ at horizon $h$.5 Again, if $\lambda + \lambda_m = 0$, forecast revisions at horizon $h_m$ are efficient. Otherwise, this null hypothesis is rejected.

When estimated on individual forecasts, the autocorrelation coefficient λ should be interpreted as a general measure of the degree of forecast smoothing, which reflects behavioral features or deviations from efficiency. It cannot be directly linked to the parameters of the theoretical models discussed above. In the case of sticky information, there is no correlation between current forecast revisions and last period’s forecast revisions at the level of an individual agent because agents either fail to update their forecast or they update by moving directly to the full information rational expectations forecast. In the case of imperfect information, the error term in the regression of the current revision on past revisions will (most likely) be correlated with the current forecast revision (Coibion and Gorodnichenko, 2010, p. 7), and the OLS estimator will be biased in this case. An instrumental variable (IV) approach may be a solution but there are no obviously good instruments. Lagged revisions are inappropriate as instruments under the null hypothesis, which implies that individual forecast revisions are uncorrelated over time.

In any case, we also estimated model (1) using the generalized method of moments (GMM) approach suggested by Arellano and Bond (1991) and Arellano and Bover (1995) as a robustness check. We allowed standard errors to be correlated between any observations that refer to the same country and the same forecasting period.6 Overall, the results indicated that the differences between estimates based on the OLS estimator and those based on GMM are small and that the set of instruments is invalid or weak in most cases. Thus, we focus on OLS estimates in the remainder of this paper.

Although the autocorrelation coefficients λ cannot be directly linked to the degree of information rigidities, one can measure the extent of rigidities owing to sticky information non-parametrically by recovering the rate of information updating directly from the individual forecasts. An estimator for the probability of forecast updating is given by the fraction of individuals that update their forecasts (Andrade and Le Bihan, 2013). In our setting, these fractions can be calculated as the share of forecasters who revised their forecasts at least once during the 3 months prior to a given point in time. This approach makes the fractions comparable to the coefficients on the lagged revisions from equation (5) where we calculate revisions of the average forecasts over k*= 3 months.
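
A minimal sketch of this non-parametric calculation, assuming a hypothetical individual-level data frame with columns `forecaster`, `country`, `target_year`, `horizon`, and `forecast`:

```python
import pandas as pd

# Hypothetical individual-level panel: one row per (forecaster, country, target_year, horizon),
# with the reported point forecast; horizons run monthly from 24 down to 1.
dfi = pd.read_csv("individual_forecasts.csv")
dfi = dfi.sort_values(["forecaster", "country", "target_year", "horizon"],
                      ascending=[True, True, True, False])

grp = dfi.groupby(["forecaster", "country", "target_year"])["forecast"]
lagged = {lag: grp.shift(lag) for lag in (1, 2, 3)}  # forecasts made 1, 2 and 3 months earlier

# A forecaster counts as "updating" at horizon h if the current forecast differs from any of
# the three preceding monthly forecasts (equivalent to at least one revision in that window).
changed = pd.concat(
    [(dfi["forecast"] != lagged[lag]) & lagged[lag].notna() for lag in (1, 2, 3)],
    axis=1).any(axis=1)

# Classify only observations with a full 3-month history; others are excluded from the average.
dfi["updated"] = changed.astype(float).where(lagged[3].notna())

# Fraction of forecasters that revised at least once in the prior three months, by horizon.
print(dfi.groupby("horizon")["updated"].mean())
```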

III. Data and Descriptive Statistics

Our analysis is based on forecasts for annual GDP growth from a cross-country survey data set compiled by Consensus Economics Inc. This data set contains a variety of macroeconomic forecasts made by public and private economic institutions, mostly banks and research institutes. Starting in October 1989, the survey has been conducted at a monthly frequency in a growing number of countries. The survey process is the same in all countries: during the first two weeks of each month the forecasters send their responses and the data are published in the middle of each month. Thus, when making their forecasts the panelists are likely to be aware of each of their competitors’ forecasts from one month ago.

Because it covers a large number of countries (and variables) the data set has been used in a number of empirical studies, among others by Loungani (2001), Isiklar and others (2006), Batchelor (2007), Ager and others (2009), Loungani and others (2013), Gallo and others (2002), Lahiri and Sheng (2008), Dovern and Weisser (2011) and Dovern, Fritsche and Slacalek (2012). Only the last four studies, however, make use of the fact that the data set provides all individual forecasts of the panel of forecasters for each country in addition to the central forecast tendency, which has been used in the other studies.

Because Consensus Economics Inc. asks the forecasters to report their forecasts for the annual GDP growth rates of the current and the next calendar year, the data set has a three-dimensional panel structure of the kind formalized in Davies and Lahiri (1995). For each target year, the data set contains a sequence of 24 forecasts from each panelist, made between January of the year before the target year and December of the target year.

We include all countries for which Consensus Economics Inc. reports individual forecasts, and only those forecasters that reported their growth forecasts at least 10 times. The data were retrieved directly from Consensus Economics Inc. and cleaned in the following way. First, since forecasters are not identified by a unique ID in the data set but by (sometimes different versions of) their names, we concatenated the forecast series that belong to a single forecaster who showed up under different names (e.g. we treat forecasts corresponding to “Mortgage Bankers Assoc”, “Mortgage Bankers” and “Mortgage Bankers Association” as coming from the same forecaster). Second, when there were mergers or acquisitions, we kept the forecasts when it was evident which forecaster continued to produce the forecasts after the merger (e.g., we treated forecasts corresponding to “First Boston”, “CS First Boston”, “Credit Suisse First Boston” and “Credit Suisse” as coming from the same forecaster). The other forecaster involved in the merger or acquisition was assumed to leave the panel after the merger.
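
For illustration, a small sketch of the kind of name harmonization described above, using only the examples mentioned in the text (the mapping and the `forecaster_name` column are hypothetical; the actual cleaning was done case by case):

```python
import pandas as pd

# Illustrative mapping from reported name variants to a canonical forecaster identifier.
NAME_MAP = {
    "Mortgage Bankers Assoc": "Mortgage Bankers Association",
    "Mortgage Bankers": "Mortgage Bankers Association",
    "First Boston": "Credit Suisse",
    "CS First Boston": "Credit Suisse",
    "Credit Suisse First Boston": "Credit Suisse",
}

def canonical_name(raw_name: str) -> str:
    """Return a unified forecaster identifier for known name variants."""
    name = raw_name.strip()
    return NAME_MAP.get(name, name)

raw = pd.read_csv("raw_individual_forecasts.csv")  # hypothetical raw survey extract
raw["forecaster"] = raw["forecaster_name"].map(canonical_name)
```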

In total, we end up with 188,639 individual forecasts from 36 different countries, of which 104,894 are from 14 advanced economies (Table 1). The forecasts are made for target years between 1989 and 2011, with the number of observations increasing towards the end of the sample as more and more countries were covered by the survey and the average number of panelists per country increased. On average, our data set includes nearly 16 individual forecasts per period for each country. The forecasts tend to slightly overestimate growth in the emerging economies when measured against the current data vintages for GDP growth. (Real-time data vintages are not available for all countries in the sample.)

Table 1.

Basic Features of Forecast Data

Note: The advanced economies in our sample are: Australia, Canada, France, Germany, Italy, Japan, The Netherlands, New Zealand, Norway, Spain, Sweden, Switzerland, United Kingdom and United States. The emerging economies in our sample are: Argentina, Brazil, Chile, China, Colombia, Hong Kong, India, Indonesia, Malaysia, Mexico, Peru, Philippines, Singapore, South Korea, Taiwan POC, Thailand and Venezuela.
Source: Authors’ estimates.

As expected, the average root mean squared forecast error (RMSFE) declines as the forecast horizon shrinks (Figure 1). In other words, forecast errors become smaller towards the end of the target year (as the horizon, h, approaches 1). RMSFEs for emerging economies are, on average, more than twice as high as those for advanced economies at long forecast horizons and still almost 75 percent higher at the end of the target year.
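
The RMSFE profile in Figure 1 can be reproduced from such a data set along the following lines (a sketch; the `actual` and `group` columns, holding realized growth and the advanced/emerging classification, are hypothetical):

```python
# Squared forecast error per observation, using a hypothetical 'actual' column with
# realized annual GDP growth (current vintage).
dfi["sq_err"] = (dfi["forecast"] - dfi["actual"]) ** 2

# RMSFE by country group and forecast horizon (group in {"advanced", "emerging"}).
rmsfe = (dfi.groupby(["group", "horizon"])["sq_err"].mean() ** 0.5).unstack("horizon")
print(rmsfe.round(2))
```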

Figure 1.

Root Mean Squared Forecast Errors over Forecast Horizons

Note: h refers to the forecast horizon.

The size of forecast revisions evolves differently over the forecast horizon for advanced and emerging economies (Figure 2). Though the relationship is not monotonic for either country group, the patterns differ. For advanced economies, revisions are larger around the turn of the year than at very early and very late forecast horizons, and in general the average size of revisions does not vary much with the forecast horizon. For emerging economies, revisions are much smaller at very long forecast horizons and much larger during the target year (h ≤ 12). At the end of the target year (h = 1), the average revision in emerging economies is about twice as large as in advanced economies. The latter indicates that uncertainty about the actual data just before the end of the forecasting horizon is substantially higher in emerging economies than in advanced economies, possibly owing to lags in statistical data collection and the poor quality of initial data releases.

Figure 2.

Mean Absolute Revisions over Forecast Horizons

Note: h refers to the forecast horizon.

The distribution of forecast revisions shows that forecasts are frequently changed only a little or not at all, as indicated by the high density around zero (Figure 3). Apart from the large spikes at zero, the distributions at all horizons are unimodal and bell-shaped. The distribution of revisions is flatter for emerging economies than for advanced economies: large forecast revisions are more frequent—reflecting the higher volatility of the target variables and, presumably, larger revisions to preliminary official statistics—and, at the same time, forecasts initially remain unchanged more often than in advanced economies (the fraction of zero or near-zero revisions is much higher).

Figure 3.

Distribution of Forecast Revisions

Note: h refers to the forecast horizon. For each case, revisions are computed over k=3 months.

The distribution of revisions is skewed for both emerging and advanced economies: revisions are significantly negatively skewed at all horizons, i.e., negative revisions tend to be less frequent but larger than upward revisions—reflecting the asymmetric nature of business cycles.
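
The skewness statistics underlying this observation can be computed, for example, as follows (again with hypothetical `rev` and `group` columns for individual revisions and the country-group classification):

```python
# Skewness of forecast revisions by country group and horizon; negative values indicate
# that downward revisions are less frequent but larger than upward revisions.
rev_skew = dfi.groupby(["group", "horizon"])["rev"].skew()
print(rev_skew.unstack("horizon").round(2))
```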

Forecasts become more clustered as the forecast horizon shrinks (Table 2). Deviations from the average forecast follow a unimodal distribution for all forecast horizons with most of the forecasts being close to the average forecast (Figure 4). For the advanced economies only very few deviations are larger than half a percentage point. In contrast, the dispersion is considerably larger for the emerging economies, where the data show a considerable degree of disagreement across forecasters even at the end of the target year (h =1). Again, this is a reflection of the fact that uncertainty about the actual data release is substantially larger here than in advanced economies.

Table 2.

Revisions and Deviations from the Average Forecast

Source: Authors’ estimates.
Figure 4.

Distribution of Deviation of Individual Forecasts from Average (Consensus)

Note: h refers to the forecast horizon.

IV. Empirical Findings

A. Average Forecasts

The left-hand side of Table 3 provides the results of estimating equation (5) on average forecasts. As stated above, we choose k = 3 as the horizon over which revisions are calculated, in line with a quarterly frequency of forecast updating. (Using a one-month horizon would result in many zero revisions.) The horizons we pick for our estimations are h = 1, 4, 7, 10, 13 and 16.7

Table 3.

Information Rigidity and Forecast Smoothing

Source: Authors’ estimates.
Note: Numbers below the coefficients are t-statistics. Asterisks indicate the degree of significance of coefficients: *** 1 percent, ** 5 percent, and * 10 percent. Regressions include a fixed effect for each country for average forecasts and a fixed effect for each forecaster for individual forecasts; the constants are identified by restricting the sum of all fixed effects to equal 0. The ratio of coefficients on past revisions is defined as the quotient of the baseline rigidity parameter for individual revisions and the equivalent for the revisions of the average forecast. Results are obtained by excluding the forecasts made in December 2008 for the growth rate of 2009, which are heavily driven by the adjustment of forecasts to the progression of the Great Recession. Including these observations increases the effect of “Past revision*Horizon 13” to about 0.84 (for both estimators). This would imply a total rigidity parameter above 1, which is not consistent with any theory of informational rigidities.

We find strong and consistent evidence of information rigidities in consensus forecasts. There is a strong positive correlation between the current forecast revision and its first lag for all country groups and for both estimation methods. Coefficients on lagged revisions are highly statistically significant in all cases.

The extent of information rigidities appears to be broadly similar in forecasts for advanced and emerging economies. The coefficient on lagged revisions for emerging economies at very short forecast horizons is 0.41 compared to 0.37 for advanced economies.

Information rigidities tend to be larger around the turn of the year, i.e., at forecast horizons between 13 and 10 (Figure 5). One possible explanation for this pattern is that the quarter-on-quarter growth rates for the last quarter of a year have a particularly large effect on the annual growth rate of the following year. There may also be institutional or behavioral explanations whereby forecasters switch focus from current-year forecasts to next-year forecasts around the turn of the year. Coefficients on interaction terms between lagged revisions and the horizon-indicator function are positive and statistically significant, however, only for horizons 10 (emerging economies) and 13 (advanced economies), respectively.8 For other horizons the additional effects are much smaller and not significantly different from zero.

Figure 5.

Informational Rigidities at Different Forecast Horizons (Consensus)

Note: h refers to the forecast horizon.

When we do not condition on the length of the forecast horizon, the degree of informational rigidity estimated with our specification based on average forecast revisions is equal to 0.5 for both advanced and emerging economies. Given that we measure revisions at a quarterly frequency, in the sticky information framework these estimates imply that forecasters update their forecasts about every six months on average. This is a higher updating frequency than found in other papers that estimate sticky information models based on aggregate expectations data for smaller sets of countries (e.g. Mankiw and Reis, 2002, Khan and Zhu, 2006, Döpke and others, 2008). Analogously, for the imperfect information framework the estimates imply a weight of about 0.5 assigned to past forecasts in the construction of current forecasts (see equation (4)). This is considerably higher than the estimate of 0.14 presented in Coibion and Gorodnichenko (2012, p. 143) for the United States.
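
As a back-of-the-envelope check on this mapping, suppose that under sticky information a forecaster updates each quarter with probability $1-\lambda$ and that revisions are measured over $k^* = 3$ months. The expected time between updates is then

$$\frac{k^*}{1-\lambda} = \frac{3 \text{ months}}{1-0.5} = 6 \text{ months},$$

which is the six-month updating interval referred to above.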

B. Individual Forecasts

Information Stickiness

Next, we measure the frequency of forecast updating from individual data. The share of forecasters who chose to update their forecasts at least once in the three months prior to a given forecast horizon ranges between 0.8 and 0.9 over the forecast horizons (Figure 6). This shows that most forecasters choose to update their forecasts quite frequently.9 These estimates are close to those obtained by Andrade and Le Bihan (2013) for the European Survey of Professional Forecasters. Average fractions for advanced economies tend to be higher than those for emerging economies, suggesting that forecasts for advanced economies are revised more frequently than those for emerging economies.

Figure 6.

Fractions of Revised Individual Forecasts

Source: Authors’ estimates.
Note: Fractions show how many forecasters, on average, revised their forecasts at least once during the three months prior to the forecast horizon indicated on the horizontal axis.

For both country groups, the share of forecasters updating their forecasts tends to increase slightly as the forecast horizon shrinks. In addition, there is a hump around the turn of the year, i.e., at about h = 13, for most countries. This is consistent with the basic statistical evidence on the size of revisions over forecast horizons shown above.

The fractions obtained from individual forecast data are considerably higher than the implied estimates of the share of forecasters that update their forecasts each quarter shown in the previous section. The coefficients on lagged revisions estimated using average forecast data range from 0.37 − 0.12 = 0.25 (advanced economies, h = 4) to 0.37 + 0.57 = 0.94 (advanced economies, h = 13); following equation (3), these imply estimates of the probability of updating a forecast in a given quarter of between only 1 − 0.94 = 0.06 and 1 − 0.25 = 0.75—compared with 0.8–0.9 based on the individual data. Clearly, the latter results imply a higher frequency of updating than suggested by the regressions based on average forecast data shown before—and hence a smaller role for sticky information in explaining the overall degree of information rigidity in economic forecasts.

In contrast, a high share of forecasters who update their forecasts is perfectly consistent with the theory of imperfect information. In its pure form, the theory actually predicts that all forecasters continuously update their forecasts. But Andrade and Le Bihan (2013) demonstrate that the friction introduced by the usual convention of rounding published forecasts to the first digit results in a plausible estimate of this share of about 0.8 to 0.9. Their simulations also predict that the share should be smaller for long forecast horizons than for short-term forecasts, which is in line with our estimates. Thus, our findings are broadly consistent with the theory of imperfect information but provide evidence against the theory of sticky information.

Forecast Smoothing

Regression analysis shows strong evidence of forecast smoothing in individual forecasts. The right-hand side of Table 3 reports results of estimating equation (6) using the individual forecast data. The coefficient on the lagged revision (which, as discussed in Section II, provides a measure of general forecast smoothing rather than an exact mapping to the existing information theories) is positive and statistically significant in all specifications.

Thus, while the results of the previous section suggest that informational stickiness is not a big issue in our data set, these estimates imply that individual forecasters smooth their forecasts due to other factors. The degree of smoothing is estimated, however, to be smaller than that for the consensus forecasts. The magnitude of the difference is given in the row labeled “Ratio of Coefficients on Past Revisions”, which shows the ratio of the coefficient on lagged revisions estimated on individual forecasts to that estimated on average forecasts; the estimate of persistence in forecast revisions is about halved. This suggests that the process of averaging forecasts induces additional stickiness.

As with consensus forecasts, we find differences in the extent of smoothing in forecasts for advanced and emerging economies. Coefficients on lagged revisions are higher for emerging economies (0.23 versus 0.13). These results are consistent with the graphical evidence discussed in Section III (Figure 4) and suggest that information rigidities are more pronounced in forecasts for emerging economies, possibly owing to greater lags in data releases, weaker quality of economic statistics, and the fact that fewer resources are probably spent on producing these forecasts than on forecasts for advanced economies.

Also similar to the regressions based on average forecasts, those based on individual forecast data suggest that forecast smoothing is non-monotonic over forecast horizons. For advanced economies, coefficients on interaction terms between lagged revisions and horizon variables are strongly positive for horizons 7, 10 and 13, with the largest coefficient obtained at h = 13. The Driscoll–Kraay standard errors show, however, that these effects are not significantly different from zero in most cases. For emerging economies, the results pertaining to the interaction terms are even weaker; no significant effects are found. Overall, the conclusion is that while forecast smoothing at the individual level increases somewhat at medium-range forecast horizons, the evidence for the horizon effect is even weaker than in the regressions based on aggregate forecast data.

Looking more closely at the distribution of forecast persistence across countries reveals substantial variation. Table 4 shows estimates from country-specific estimations of equation (1) based on revisions of average forecasts, together with summary statistics (for each country) from forecaster-specific estimations of the same model based on individual forecast revisions.10 The results in this table strongly confirm our previous findings. For 29 out of 31 countries, the smoothing parameter from the average forecasts is higher than that from the individual forecasts, in most cases by a substantial margin.11 In some rare cases, the variation of individual forecast revisions is explained to a substantial degree by lagged revisions (based on the average R2 from the regressions, e.g., in Germany (.35) or Italy (.44)); but in general, revisions seem to be quite unpredictable based on past revisions at the individual level.
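
A sketch of how the country- and forecaster-specific estimates summarized in Table 4 might be produced (no horizon controls, as in the paper; the data frames `df` and `dfi` with `rev` and `rev_lag` columns are as in the earlier sketches, and the minimum-observation screen shown here is only illustrative):

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_lambda(block: pd.DataFrame) -> float:
    """Estimate equation (1) on one block of data (one country, or one forecaster)."""
    sub = block.dropna(subset=["rev", "rev_lag"])
    if len(sub) < 10:                 # illustrative screen against very short samples
        return float("nan")
    return smf.ols("rev ~ rev_lag", data=sub).fit().params["rev_lag"]

# Country-specific estimates based on revisions of the average (consensus) forecast.
lambda_avg = df.groupby("country").apply(estimate_lambda)

# Forecaster-specific estimates based on individual revisions, averaged by country.
lambda_ind = (dfi.groupby(["country", "forecaster"]).apply(estimate_lambda)
                 .groupby("country").mean())

print(pd.DataFrame({"lambda_consensus": lambda_avg,
                    "avg_lambda_individual": lambda_ind}))
```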

Table 4.

Country-Specific Estimates

Source: Authors’ estimates.
Note: Estimates in the left part of the table refer to country-specific estimations based on the revision of the average forecast for each country. Estimates in the right part of the table are average figures based on separate estimations for each individual forecaster in each country. All country- and forecaster-specific models were estimated without any forecast-horizon controls. λ denotes the estimated coefficient for the first lag of the 3-month revision (Avg. λ refers to the average across all individual estimates for each country). Sd (Avg. sd) is the corresponding (average) standard deviation. N (Avg. N) shows the (average) number of observations for each estimation. K denotes the number of different forecasters in each country for which an estimate is obtained. Avg. Frac. displays the fraction of forecasters that, on average, adjust their forecasts at least once during a three-month period (computed across all forecast horizons).

Figure 7 shows the distribution of the estimated parameters across countries. Two main conclusions can be drawn from this graph, both of which confirm the previous panel-based findings. First, the degree of rigidity is less pronounced in the individual data than in the consensus data; in both advanced and emerging economies the average smoothing parameter based on average revisions is about three times as large as the average smoothing parameter based on individual revisions. Second, there is no substantial difference in average rigidity between advanced and emerging economies. The only difference is that estimates are somewhat more dispersed within the group of emerging economies than within the group of advanced economies.12 A similar conclusion can be drawn from the country-specific estimates of the fraction of forecasters that, on average, update their growth forecasts at least once during a three-month period: the average estimate for advanced economies (0.85) is somewhat higher than the corresponding estimate for emerging economies (0.79), but given the standard deviation across countries (approximately 0.06 in both cases) this difference is statistically insignificant.

Figure 7.

Distribution of Information Rigidity Coefficients across Countries

Source: Authors’ estimates.
Note: Density estimates based on the different estimates for all countries in the sample using a Gaussian kernel and a bandwidth of 0.1.

V. Conclusion

This paper has provided evidence on the dynamics of forecast revisions of real GDP growth using a large panel data set of individual forecasters in 36 advanced and emerging market economies for the period 1989 to 2011. The data set used in the paper is far larger than any panel of individual forecasts used in the previous literature, and it covers a wide range of different countries.

Previous work has documented that forecasts are characterized by a significant degree of smoothing or rigidity (Nordhaus, 1987; Coibion and Gorodnichenko, 2010), and a number of theories have been offered for explaining this phenomenon. In particular, Coibion and Gorodnichenko show that finding a correlation between forecast errors and past forecast revisions is consistent with two of the leading explanations for forecast smoothing, viz., the sticky information model and the “noisy” information model.

Using an equivalent test that regresses forecast revisions on past forecast revisions, we confirm the finding of persistence in average forecast revisions. We also contribute novel perspectives on forecasters’ behavior, drawing on our large set of individual forecasts for advanced and emerging countries.

In particular, we provide evidence against the usefulness of the sticky information model for describing the dynamics of growth forecasts. We show that estimates of informational rigidity based on consensus (average) forecasts overstate the true degree of forecasters’ inattentiveness. When consensus forecasts are used, which has been the common practice in previous studies, estimates suggest that forecasts are updated on average every 6 months. Our analysis of the fractions of forecasters who update their forecasts, however, points to a higher frequency of updating. The evidence based on these fractions hence suggests only a small role for sticky information in explaining the overall degree of information rigidity in economic forecasts. The predictability of individual forecast revisions also casts some doubt on the validity of the sticky information theory.

Many interesting issues are left for future research. In particular, herding, another prominent feature of forecasters’ behavior (Gallo and others, 2002), and its interaction with forecast smoothing deserve a closer look. In addition, there is evidence of nonlinearities in forecast smoothing in our sample.13 Further topics worth exploring include the implications of uncertainty for the dynamics of macroeconomic forecasting and the evolution of forecast rigidities over the business cycle.

References

  • Ager, P., M. Kappler, and S. Osterloh, 2009, “The Accuracy and Efficiency of the Consensus Forecasts: A Further Application and Extension of the Pooled Approach,” International Journal of Forecasting, Vol. 25, No. 1, pp. 167–81.
  • Andrade, P., and H. Le Bihan, 2013, “Inattentive Professional Forecasters,” Journal of Monetary Economics, Vol. 60, No. 8, pp. 967–82.
  • Arellano, M., and S. Bond, 1991, “Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations,” Review of Economic Studies, Vol. 58 (April), pp. 277–97.
  • Arellano, M., and O. Bover, 1995, “Another Look at the Instrumental Variable Estimation of Error-Components Models,” Journal of Econometrics, Vol. 68 (July), pp. 29–51.
  • Batchelor, R., 2007, “Bias in Macroeconomic Forecasts,” International Journal of Forecasting, Vol. 23 (April–June), pp. 189–203.
  • Benabou, R., 2009, “Groupthink: Collective Delusions in Organizations and Markets,” NBER Working Paper No. 14764 (Cambridge, Massachusetts: National Bureau of Economic Research).
  • Coibion, O., and Y. Gorodnichenko, 2010, “Information Rigidity and the Expectations Formation Process: A Simple Framework and New Facts,” NBER Working Paper No. 16537 (Cambridge, Massachusetts: National Bureau of Economic Research).
  • Coibion, O., and Y. Gorodnichenko, 2012, “What Can Survey Forecasts Tell Us about Information Rigidities?” Journal of Political Economy, Vol. 120, No. 1, pp. 116–59.
  • Crowe, C., 2010, “Consensus Forecasts and Inefficient Information Aggregation,” IMF Working Paper 10/178 (Washington: International Monetary Fund).
  • Davies, A., and K. Lahiri, 1995, “A New Framework for Analysing Survey Forecasts Using Three-Dimensional Panel Data,” Journal of Econometrics, Vol. 68 (July), pp. 205–27.
  • Döpke, J., J. Dovern, U. Fritsche, and J. Slacalek, 2008, “Sticky Information Phillips Curves: European Evidence,” Journal of Money, Credit, and Banking, Vol. 40 (October), pp. 1513–19.
  • Dovern, J., and J. Weisser, 2011, “Accuracy, Unbiasedness and Efficiency of Professional Macroeconomic Forecasts: An Empirical Comparison for the G7,” International Journal of Forecasting, Vol. 27 (April), pp. 452–65.
  • Dovern, J., U. Fritsche, and J. Slacalek, 2012, “Disagreement among Forecasters in G7 Countries,” Review of Economics and Statistics, Vol. 94 (November), pp. 1081–96.
  • Dovern, J., 2013, “When Are GDP Forecasts Updated? Evidence from a Large International Panel,” Economics Letters, Vol. 120 (September), pp. 521–23.
  • Dovern, J., U. Fritsche, P. Loungani, and N. Tamirisa, forthcoming, “Are Informational Rigidities of Macroeconomic Forecasts Influenced by the Business Cycle?” unpublished.
  • Driscoll, J.C., and A.C. Kraay, 1998, “Consistent Covariance Matrix Estimation with Spatially Dependent Panel Data,” Review of Economics and Statistics, Vol. 80 (November), pp. 549–60.
  • Gallo, G.M., C.W.J. Granger, and Y. Jeon, 2002, “Copycats and Common Swings: The Impact of the Use of Forecasts in Information Sets,” IMF Staff Papers, Vol. 49, No. 1, pp. 4–21.
  • Isiklar, G., K. Lahiri, and P. Loungani, 2006, “How Quickly Do Forecasters Incorporate News?” Journal of Applied Econometrics, Vol. 21 (September/October), pp. 703–25.
  • Khan, H., and Z. Zhu, 2006, “Estimates of the Sticky-Information Phillips Curve for the United States,” Journal of Money, Credit and Banking, Vol. 38, No. 1, pp. 195–207.
  • Lahiri, K., and X. Sheng, 2008, “Evolution of Forecast Disagreement in a Bayesian Learning Model,” Journal of Econometrics, Vol. 144 (June), pp. 325–40.
  • Loungani, P., 2001, “How Accurate Are Private Sector Forecasts? Cross-Country Evidence from Consensus Forecasts of Output Growth,” International Journal of Forecasting, Vol. 17 (July/September), pp. 419–32.
  • Loungani, P., H. Stekler, and N. Tamirisa, 2013, “Information Rigidity in Growth Forecasts: Some Cross-Country Evidence,” International Journal of Forecasting, Vol. 29 (October/December), pp. 605–21.
  • Mankiw, G., and R. Reis, 2002, “Sticky Information Versus Sticky Prices: A Proposal to Replace the New Keynesian Phillips Curve,” Quarterly Journal of Economics, Vol. 117 (November), pp. 1295–328.
  • Nickell, S., 1981, “Biases in Dynamic Models with Fixed Effects,” Econometrica, Vol. 49 (November), pp. 1417–26.
  • Nordhaus, W., 1987, “Forecasting Efficiency: Concepts and Applications,” Review of Economics and Statistics, Vol. 69 (November), pp. 667–74.
  • Reis, R., 2006, “Inattentive Producers,” Review of Economic Studies, Vol. 73, No. 3, pp. 793–821.
  • Sims, C., 2003, “Implications of Rational Inattention,” Journal of Monetary Economics, Vol. 50 (April), pp. 665–90.
  • Windmeijer, F., 2005, “A Finite Sample Correction for the Variance of Linear Efficient Two-step GMM Estimators,” Journal of Econometrics, Vol. 126 (May), pp. 25–51.
  • Woodford, M., 2002, “Imperfect Common Knowledge and the Effects of Monetary Policy,” in Knowledge, Information, and Expectations in Modern Macroeconomics: In Honor of Edmund S. Phelps, ed. by P. Aghion, R. Frydman, J. Stiglitz, and M. Woodford (Princeton: Princeton University Press).
1

Heidelberg University, Universität Hamburg, and International Monetary Fund, respectively. We thank Angela Espiritu and Jair Rodriguez for excellent research assistance. We are also grateful to Oli Coibion, Yuriy Gorodnichenko, Tara Sinclair and an anonymous referee for valuable suggestions on an earlier draft, as well as participants in the 2014 AEA meetings, 2012 George-Washington-University-IMF Forecasting Forum, the 2011 International Symposium on Forecasting, the 2012 Econometric Society’s Australasian Meeting, and seminars at the George Washington University and Hamburg University for helpful comments.

2

Many of the underlying theories for forecast smoothing are formulated at the level of the individual forecasters. Although their aggregate implications are often drawn based on averaging across individual forecasters, the mean estimate of forecast smoothing based on individual data need not be the same as the estimate of forecast smoothing based on the consensus data. The bias induced by aggregation has been well recognized in the literature (Crowe, 2010); such bias can be avoided by using individual data (Andrade and Le Bihan, 2013).

3

We assume k = 3 (= k*) throughout this paper.

4

Note that in the original version of the sticky information framework (Mankiw and Reis, 2002) the degree of forecast rigidity is assumed to be an exogenously given constant.

5

As for average forecasts, we set k* = k = 3 also for the analysis of individual forecasts.

6

We use both the first lag of the revision and the first lag of underlying forecast as instruments in the two-step system GMM with Windmeijer (2005) corrected standard errors – taking into account a possible downward bias in two-step GMM estimations. The maximum lag length for the transformed model was set to 2.

7

Since all specifications include lagged revisions as an explanatory variable, we “lose” one observation per target year (h = 19) for the estimation. Our results are robust to the choice of horizons; they are similar, for example, if we pick h = 2, 5, 8, etc.

8

As noted in the footnote to Table 3, if we do not exclude forecasts made in December 2008 for the growth rate of 2009, which are heavily driven by the adjustment of forecasts in the aftermath of the Lehman collapse, we obtain an estimate of the total degree of informational rigidity (for advanced economies and h = 13) larger than 1. This is not consistent with informational rigidity theories and is an extreme demonstration of the fact that the predictability of aggregate revisions tends to increase during recessions. The topic of information rigidities and uncertainty is explored in more detail in a companion paper (Dovern and others, forthcoming).

9

Often, though, they change them only a little, as shown in Section III.

10

We neglect any horizon-specific effects at this point, since the results from the panel regressions above indicate that most of these effects are not statistically significant and since the small number of available observations for some of the individual forecasters calls for a parsimonious specification.

11

The country-specific estimates do reveal some negative values for the smoothing parameter for a few countries (e.g., India). While negative estimates for λ are not consistent with any of the theoretical explanations for rigidity considered in this paper, they are consistent with behavior in which forecasters react too strongly to new information—so that some of the revision has to be reversed during the next period.

12

In general, the observation that the degree of forecast smoothing differs widely across individuals and countries is complementary to the finding in Dovern (2013), who shows that the frequency of forecast updating differs substantially across individuals, countries and time.

13

Smoothing is less pronounced in the tails of the distribution of individual forecast revisions than in the main body of the distribution (see working paper version of this paper for preliminary evidence). It remains an open question, however, how these nonlinearities can be linked to the different theories of forecast generation mentioned in this paper.
