Information Rigidities in Economic Growth Forecasts
Evidence from a Large International Panel
Jonas Dovern, Ulrich Fritsche, Prakash Loungani, and Natalia Tamirisa[1]


Abstract

We examine the behavior of forecasts for real GDP growth using a large panel of individual forecasts from 30 advanced and emerging economies during 1989–2010. Our main findings are as follows. First, our evidence does not support the validity of the sticky information model (Mankiw and Reis, 2002) for describing the dynamics of professional growth forecasts. Instead, the empirical evidence is more in line with implications of "noisy" information models (Woodford, 2002; Sims, 2003). Second, we find that information rigidities are more pronounced in emerging economies than advanced economies. Third, there is evidence of nonlinearities in forecast smoothing. It is less pronounced in the tails of the distribution of individual forecast revisions than in the central part of the distribution.

I. Introduction

Expectations—and their reflection in forecasts—play a central role in macroeconomics. The development of the concept of rational expectations in the 1960s and 1970s was mirrored in the development of tests of forecast efficiency. The natural analog to rational expectations is the concept of forecast efficiency, which, in its strong form, states that forecast errors should be orthogonal to all relevant available information.

In practice, there are limitations on testing strong form efficiency, for example, because the information set used by forecasters may not be publicly known or available. Hence, Nordhaus (1987) and others developed the concept of weak efficiency, which states that forecast errors should be orthogonal to the information contained in the forecaster’s set of past forecasts. Nordhaus states that: “weak efficiency is an attractive concept, first, because past forecasts are likely to play a very important role in determining current forecasts. Forecasters tend to have a certain consistency (stickiness?) in their views of the world, so that recent forecasts will go far in explaining current forecasts. Second, of all variables that seem plausible candidates for inclusion in a forecaster’s information set, surely the forecaster’s own views must rate quite high.”

In the case of fixed event forecasts (i.e. a sequence of forecasts made about a given event, such as real GDP growth for a given year), Nordhaus develops two “simple and powerful tests” of weak efficiency. The first test is that the forecast error should be independent of past forecast revisions, and the second test is that today’s forecast revision should be independent of past forecast revisions.

Over the years, a number of explanations have been offered for why forecast revisions may be correlated. One theory, due to Mankiw and Reis (2002), states that forecasters update their information sets infrequently because there are fixed costs of acquiring information. In a second theory, developed in Woodford (2002) and Sims (2003), forecasters continually update their information sets but, because they receive noisy signals about the true state of the economy, their forecast revisions are correlated. In an important set of papers, Coibion and Gorodnichenko (2010, 2012) showed that canonical versions of both classes of models—dubbed respectively as the ‘sticky information’ and ‘imperfect information’ models—have the feature that the forecast error should be correlated with the forecast revision, which is the first of the two tests proposed by Nordhaus.

A third class of theories suggests behavioral explanations for forecast rigidity, and these are mentioned by Nordhaus as an explanation for his findings. Citing Tversky and Kahneman (1981), he states that “we tend to break the good or bad news to ourselves slowly, taking too long to allow surprises to be incorporated into our forecasts.” A fourth category of explanations relies on forecasters having non-standard loss functions, which deviate from the benchmark case of a loss function that is symmetric and depends only on the forecast errors. For instance, Nordhaus (1987) conjectures that forecast smoothing might arise because “a more accurate but jumpy” forecast may be difficult to explain to the users of the forecasts. Alternatively, loss functions could be asymmetric: a “hawkish” central bank could, for instance, weight negative inflation forecast errors more heavily than positive ones, i.e. making an inflation forecast that is too low could be subjectively more costly for this central bank than overestimating inflation.

To summarize, smoothing appears to be a feature of forecasts and there are four classes of explanations, which are not mutually exclusive, for why this feature might arise, viz. sticky information, imperfect information, behavioral reasons, and the existence of asymmetric loss functions.

Against this background, this paper provides evidence on correlation in forecast revisions in individual forecasts of real GDP growth for a large number of countries. Our use of individual forecast data from 30 countries offers two main advantages.

First, many of the underlying theories for forecast smoothing are formulated at the level of the individual forecasters. Although their aggregate implications are often drawn based on averaging across individual forecasters, the mean estimate of forecast smoothing based on individual data need not be the same as the estimate of forecast smoothing based on the consensus data. The bias induced by aggregation has been well recognized in the literature (Crowe, 2010); such bias can be avoided by using individual data (Andrade and Le Bihan, 2010).

Second, the broad country coverage of our study provides an opportunity to compare the extent of smoothing in forecasts for advanced and emerging economies. We find that correlations of forecast revisions for emerging economies are stronger than those for advanced economies. Forecasts for emerging economies may be less efficient than those for advanced economies because of poorer data quality and faster structural change of these economies.

The evidence that we provide on positive correlation in forecast revisions confirms findings of several previous studies regarding the ubiquity of forecast smoothing. The value-added of our study is to show that the degree of smoothing estimated from individual forecast data is far lower than estimates obtained from average forecasts on which most of the previous studies have been based. Using the individual data we show that the empirical evidence on forecast smoothing lends more validity to the models of “noisy” information than to the models of sticky information.

In addition, we show that non-linearities play an important role in the dynamics of growth forecasts. Forecast smoothing is less pronounced in the tails of the distribution than in its central portion, i.e., large negative and positive revisions are less correlated with subsequent revisions than average revisions.

The paper is organized as follows: Section II discusses the methodology for testing for the degree of forecast smoothing using average and individual forecast data. Section III describes our data on forecasts. Section IV presents the empirical results. The last section concludes.

II. Methodology for Testing for Forecast Smoothing

A. Average Forecasts

The test for forecast smoothing (forecast efficiency) proposed by Nordhaus (1987) exploits the fact that we have a sequence of forecasts for the same event, viz., annual real GDP growth. Under the null of full information rational expectations, this sequence of forecasts must follow a martingale process. To implement the test on average forecasts, the contemporaneous revision of an average forecast is regressed on past forecast revisions:

\[
\bar{r}_{i,t,h} = \beta_i + \lambda \bar{r}_{i,t,h+k} + u_{i,t,h} \tag{1}
\]

where $\bar{r}_{i,t,h}$ is the revision of the average forecast for country i and target year t made at horizon h, and $k \geq 1$. Revisions are computed over $k^*$ months, i.e. $\bar{r}_{i,t,h} = \bar{F}_{i,t,h} - \bar{F}_{i,t,h+k^*}$, and $k^* \leq k$ has to hold to avoid moving average effects in the residuals of the regression; we assume $k^* = k = 3$ throughout this paper.[2] If λ = 0, forecasts are efficient. Otherwise, forecast revisions are correlated, and the null hypothesis of forecast efficiency is rejected.
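For concreteness, the sketch below shows one way to run the test in equation (1) in Python with pandas and statsmodels; it is our own illustrative implementation, not the authors' code, and the file and column names (consensus_forecasts.csv, country, target_year, horizon, consensus) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

k = 3  # revisions computed over k = 3 months, as in the paper

# hypothetical input: one row per country, target year, and monthly horizon,
# with the consensus (average) forecast in column "consensus"
df = pd.read_csv("consensus_forecasts.csv")  # hypothetical file name

# sort so that shifting within a (country, target year) block moves k months back
df = df.sort_values(["country", "target_year", "horizon"],
                    ascending=[True, True, False])
grp = ["country", "target_year"]
df["revision"] = df["consensus"] - df.groupby(grp)["consensus"].shift(k)  # F(h) - F(h+k)
df["revision_lag"] = df.groupby(grp)["revision"].shift(k)                 # r(i,t,h+k)

# equation (1) with country fixed effects as dummies; lambda = 0 under the null
eq1 = smf.ols("revision ~ revision_lag + C(country)", data=df.dropna()).fit()
print(eq1.params["revision_lag"])
```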

Reis (2006) shows that under sticky information the average forecast $\bar{F}_{i,t,h}$ for an event $x_{i,t}$ is a weighted average of the lagged average forecast and the current rational expectation of the event:

\[
\bar{F}_{i,t,h} = \lambda \bar{F}_{i,t,h+k} + (1-\lambda)\left[x_{i,t} + \nu_{i,t,h}\right], \tag{2}
\]

where $\nu_{i,t,h}$ is the rational expectations error. It follows that

\[
\bar{r}_{i,t,h} = \bar{F}_{i,t,h} - \bar{F}_{i,t,h+k} = \lambda \bar{r}_{i,t,h+k} + (1-\lambda)\left[\nu_{i,t,h} - \nu_{i,t,h+k}\right] = \lambda \bar{r}_{i,t,h+k} + u_{i,t,h}. \tag{3}
\]

Thus, the regression coefficient from equation (1) translates directly into the degree of information rigidity in the sticky information framework (e.g. Mankiw and Reis, 2002, or Reis, 2006).
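To see why the slope coefficient recovers λ under sticky information, the following simulation sketch generates consensus forecasts from the Mankiw-Reis updating mechanism and regresses the consensus revision on its lag; all parameter values are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.5                      # per-period probability of NOT updating (illustrative)
n_fcsters, n_periods, n_events = 500, 24, 500

num, den = 0.0, 0.0
for _ in range(n_events):
    full_info = np.cumsum(rng.normal(size=n_periods))   # rational forecast as news arrives
    forecasts = np.full(n_fcsters, full_info[0])         # everyone starts fully informed
    consensus = np.empty(n_periods)
    consensus[0] = forecasts.mean()
    for t in range(1, n_periods):
        updates = rng.random(n_fcsters) < (1.0 - lam)     # who re-optimizes this period
        forecasts = np.where(updates, full_info[t], forecasts)
        consensus[t] = forecasts.mean()
    rev = np.diff(consensus)                              # consensus revisions
    num += np.dot(rev[1:], rev[:-1])
    den += np.dot(rev[:-1], rev[:-1])

print("slope of revision on lagged revision:", num / den)  # approximately lam = 0.5
```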

Likewise, in the “noisy” information framework (e.g., Woodford, 2002; Sims, 2003), the degree of informational rigidity can be inferred directly from the parameter estimates of equation (1). Coibion and Gorodnichenko (2012) show that, under the assumption of a standard loss function, agents optimally use the Kalman filter to update their forecasts in each period as

\[
\bar{F}_{i,t,h} = (1-G)\,\bar{F}_{i,t,h+k} + G\left[x_{i,t} + \omega_{i,t,h}\right], \tag{4}
\]

where $\omega_{i,t,h}$ is the noise component of the information that agents have about the event $x_{i,t}$ at a particular point in time. The formulation is evidently very similar to equation (2): in the imperfect information framework, too, the parameter λ equals the degree of informational rigidity, which is given by $1-G$ in the theoretical model.
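The link between signal noise and the rigidity parameter $1-G$ can be illustrated with a textbook local-level Kalman filter; the variances below are illustrative assumptions, and the model is a stylized stand-in for the richer noisy-information setups in Woodford (2002) and Sims (2003).

```python
import numpy as np

def steady_state_gain(q, r, iters=500):
    """Iterate the scalar Riccati recursion to the steady-state Kalman gain."""
    p = q  # initial filtered state variance (any positive start converges)
    for _ in range(iters):
        p_pred = p + q               # predict step: add state innovation variance q
        g = p_pred / (p_pred + r)    # Kalman gain on the new noisy signal
        p = (1.0 - g) * p_pred       # update step: filtered variance
    return g

for noise_var in [0.5, 1.0, 2.0, 5.0]:
    g = steady_state_gain(q=1.0, r=noise_var)
    print(f"signal noise variance {noise_var}: G = {g:.2f}, rigidity 1-G = {1 - g:.2f}")
```

Noisier signals (larger r) yield a smaller gain G and hence a larger rigidity $1-G$, which is the mechanism invoked below when discussing horizon effects.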

An alternative test of forecast efficiency suggested by Nordhaus (1987) is to regress forecast errors, rather than contemporaneous revisions as in equation (1), on past revisions. The two tests are equivalent in the sense that the alternative specification also yields estimates of the degree of informational rigidity (Coibion and Gorodnichenko, 2010). The main advantage of our specification is that it does not rely on actual outcomes and hence sidesteps the question of which vintage of the actual data to use in computing the forecast error.

It is reasonable to expect that information rigidities vary over the forecast horizon. They might, for instance, be more pronounced at longer horizons because (under sticky information) agents might have fewer resources available to obtain information relevant for forecast updating[3] at a high frequency and/or (under imperfect information) face noisier signals and, hence, place less weight on new information. While these two arguments suggest that the degree of information rigidity increases monotonically with the forecast horizon, there might be other (institutional) effects at work that are non-monotonic functions of the forecast horizon.

To examine empirically how the degree of forecast smoothing changes over the forecast horizon, we add interaction terms between forecast horizons and lagged revisions to equation (1). The resulting specification is:

\[
\bar{r}_{i,t,h} = \beta_i + \lambda \bar{r}_{i,t,h+k} + \sum_{m} \lambda_m I(h_m)\,\bar{r}_{i,t,h+k} + u_{i,t,h}, \tag{5}
\]

where all variables are defined as above, m indexes the interaction terms between forecast revisions and horizons, and $I(h_m)$ is an indicator function that equals 1 if the horizon of an observation equals $h_m$ and 0 otherwise. The coefficients on the interaction terms are expected to be positive and to rise with the forecast horizon.
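A minimal sketch of how the horizon interactions in equation (5) could be built, continuing the hypothetical DataFrame from the sketch after equation (1); column names are again assumptions.

```python
import statsmodels.formula.api as smf

horizons = [4, 7, 10, 13, 16]                       # h = 1 serves as the baseline
for h_m in horizons:
    df[f"rev_lag_h{h_m}"] = df["revision_lag"] * (df["horizon"] == h_m).astype(float)

interactions = " + ".join(f"rev_lag_h{h}" for h in horizons)
eq5 = smf.ols(f"revision ~ revision_lag + {interactions} + C(country)",
              data=df.dropna()).fit()
print(eq5.params.filter(like="rev_lag"))            # baseline lambda plus horizon effects
```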

We estimate the fixed-effects panel data model using the ordinary least squares (OLS) estimator. Since our data set potentially has a complicated correlation structure owing to its three dimensions, we correct standard errors using the method suggested by Driscoll and Kraay (1998), which does not require strong assumptions on the form of cross-sectional and temporal correlation in the error terms. Since the time dimension of our panel data set is large, the Nickell bias, which is of order 1/T, is likely to be only of modest size (Nickell, 1981).
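The sketch below shows one way to obtain Driscoll-Kraay standard errors in Python using the linearmodels package (to our understanding, cov_type="kernel" implements the Driscoll-Kraay covariance for panel models). The survey_date column and the restriction to current-year forecasts, which makes the entity-time index unique, are simplifying assumptions of this sketch rather than features of the paper's estimation.

```python
from linearmodels.panel import PanelOLS

# keep current-year forecasts only so that (country, survey_date) is a unique
# entity-time index; survey_date is an assumed datetime column (survey month)
panel = (df.loc[df["horizon"] <= 12]
           .dropna(subset=["revision", "revision_lag"])
           .set_index(["country", "survey_date"]))

res = PanelOLS.from_formula("revision ~ revision_lag + EntityEffects",
                            data=panel).fit(cov_type="kernel", kernel="bartlett")
print(res.std_errors)   # Driscoll-Kraay standard errors
```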

Nevertheless, as a robustness check, and because the history of forecasts for some forecasters is considerably shorter than our full sample period, we also estimate the model using the generalized method of moments (GMM) approach suggested by Arellano and Bond (1991) in the version of Arellano and Bover (1995) (henceforth the ABB estimator). We correct standard errors by allowing for correlation between any observations that refer to the same country and the same forecasting period. The use of the ABB estimator is subject to a caveat, however, when the model is used to test for information rigidities. Under the null of full information, current and past revisions of forecasts are expected to be uncorrelated, implying that instruments based on revisions are invalid or at least very weak. Still, since the null hypothesis may not hold in the data and/or revisions of forecasts may be autocorrelated for reasons other than informational rigidities, using the ABB estimator as a robustness check seems appropriate.

B. Individual Forecasts

Testing efficiency of individual forecasts is analogous to the test for average forecasts. An individual forecast version of equation (5) is given by:

\[
r_{j,i,t,h} = \beta_{j,i} + \lambda r_{j,i,t,h+k} + \sum_{m} \lambda_m I(h_m)\,r_{j,i,t,h+k} + u_{j,i,t,h}, \tag{6}
\]

where $r_{j,i,t,h}$ is the revision of an individual forecast by forecaster j for country i and target year t at horizon h, and $k \geq 1$. As for average forecasts, we compute revisions over $k^* = k = 3$ months. Again, if λ = 0, forecasts are efficient; otherwise, this null hypothesis is rejected.

When estimated on individual forecasts, the autocorrelation coefficient λ should be interpreted as a general measure of the degree of forecast smoothing, reflecting behavioral features or deviations from efficiency; it cannot be mapped directly into the parameters of the theoretical models discussed above. In the case of sticky information, there is no correlation between current forecast revisions and last period’s forecast revisions at the level of an individual agent, because agents either fail to update their forecast or update by moving directly to the full information rational expectations forecast. In the case of imperfect information, the error term in the regression of the current revision on past revisions will (most likely) be correlated with the current forecast revision (Coibion and Gorodnichenko, 2010, p. 7), and the OLS estimator will be biased. An instrumental variable (IV) approach might be a solution, but there are no obviously good instruments: using lagged revisions as an instrument may be inappropriate if the null hypothesis is not rejected and individual forecast revisions turn out to be uncorrelated.

Still, one can measure the extent of information rigidities owing to sticky information non-parametrically by recovering the rate of information updating directly from the individual forecasts. An estimator for the probability of forecast updating is given by the fraction of individuals that update their forecasts (Andrade and Le Bihan, 2010). In our setting, these fractions can be calculated as the share of forecasters who revised their forecasts at least once during the 3 months prior to a given point in time. This makes the fractions comparable to the coefficients on the lagged revisions from equation (5), where we calculate revisions of the average forecasts over $k^* = 3$ months.
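A sketch of this non-parametric measure, assuming a hypothetical long-format DataFrame of individual forecasts with columns forecaster, country, target_year, horizon, and forecast; the exact treatment of gaps in a forecaster's reporting history is our own simplification.

```python
import pandas as pd

def updating_fractions(ind: pd.DataFrame, k: int = 3) -> pd.Series:
    """Share of forecasters, by horizon, who revised at least once in the k
    months before that horizon (our reading of the Andrade-Le Bihan measure)."""
    ind = ind.sort_values(["forecaster", "country", "target_year", "horizon"],
                          ascending=[True, True, True, False])
    g = ind.groupby(["forecaster", "country", "target_year"])["forecast"]
    prev = [g.shift(j) for j in range(1, k + 1)]          # forecasts 1..k months earlier
    has_history = pd.concat([p.notna() for p in prev], axis=1).any(axis=1)
    revised = pd.concat([(ind["forecast"] != p) & p.notna() for p in prev],
                        axis=1).any(axis=1)
    # mark rows without any prior forecast as missing rather than "not revised"
    ind = ind.assign(updated=revised.astype(float).where(has_history))
    return ind.groupby("horizon")["updated"].mean()

individual_df = pd.read_csv("individual_forecasts.csv")   # hypothetical file
print(updating_fractions(individual_df))
```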

C. Nonlinearity

Our last set of regressions explores the extent of nonlinearity in individual forecast smoothing. There are many potential reasons why the dynamics of forecast revisions could be nonlinear. For example, forecast smoothing could be less pronounced than normal following large revisions, because a forecaster decided to incorporate a new piece of information fully into the next forecast rather than adjusting her forecast gradually through a sequence of small revisions in one direction. Likewise, the relative position of a forecast in the distribution of available forecasts could influence a forecaster’s smoothing behavior. To address these questions, we use threshold models to examine differences in forecast smoothing in the tails and the central portion of the distribution of revisions.
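One plausible way to implement such a threshold specification is sketched below: the lagged individual revision is interacted with indicators for its own 10th and 90th percentiles (computed by horizon). The paper's exact threshold definitions may differ, and all names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical individual-level DataFrame with columns revision, revision_lag,
# horizon, and forecaster_id
ind = pd.read_csv("individual_revisions.csv")   # hypothetical file

q10 = ind.groupby("horizon")["revision_lag"].transform(lambda s: s.quantile(0.10))
q90 = ind.groupby("horizon")["revision_lag"].transform(lambda s: s.quantile(0.90))
ind["low_tail"] = (ind["revision_lag"] <= q10).astype(float)
ind["high_tail"] = (ind["revision_lag"] >= q90).astype(float)

# smoothing equals lambda in the body of the distribution and lambda plus the
# interaction coefficient in the respective tail; forecaster fixed effects are
# sketched as dummies (in practice one would demean instead)
thr = smf.ols("revision ~ revision_lag + revision_lag:low_tail"
              " + revision_lag:high_tail + C(forecaster_id)",
              data=ind.dropna()).fit()
print(thr.params.filter(like="revision_lag"))
```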

III. Data and Descriptive Statistics

Our analysis is based on forecasts for annual GDP growth from a cross-country survey data set compiled by Consensus Economics Inc. This data set contains a variety of macroeconomic forecasts made by public and private economic institutions, mostly banks and research institutes. Starting in October 1989, the survey has been conducted at a monthly frequency in a growing number of countries. The survey process is the same in all countries: during the first two weeks of each month the forecasters send their responses and the data are published in the middle of each month. Thus, when making their forecasts the panelists are likely to be aware of each of their competitors’ forecasts from one month ago.

Because it covers a large number of countries (and variables) the data set has been used in a number of empirical studies, among others by Loungani (2001), Isiklar et al. (2006), Batchelor (2007), Ager et al. (2009), Loungani, Stekler, and Tamirisa (2011), Gallo et al. (2002), Lahiri and Sheng (2008), Dovern and Weisser (2011) and Dovern, Fritsche and Slacalek (2012). Only the last four studies, however, make use of the fact that the data set provides all individual forecasts of the panel of forecasters for each country in addition to the central forecast tendency, which has been used in the other studies.

Because Consensus Economics Inc. asks the forecasters to report their forecasts for the annual GDP growth rates of the current and the next calendar year, the data set has a three-dimensional panel structure of the kind formalized in Davies and Lahiri (1995). For each target year, the data set contains a sequence of 24 forecasts by each panelist, made between January of the year before the target year and December of the target year.
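Under the convention we infer from the text (h = 1 in December of the target year and h = 24 in January of the preceding year), the horizon index can be recovered from the survey date and the target year as in the following sketch.

```python
from datetime import date

def forecast_horizon(survey: date, target_year: int) -> int:
    """Months remaining until the end of the target year (h = 1, ..., 24)."""
    return 12 * (target_year - survey.year) + (13 - survey.month)

assert forecast_horizon(date(2009, 12, 1), 2009) == 1    # end of target year
assert forecast_horizon(date(2008, 12, 1), 2009) == 13   # turn of the year
assert forecast_horizon(date(2008, 1, 1), 2009) == 24    # first forecast for 2009
```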

We include all countries for which Consensus Economics Inc. reports individual forecasts, and only those forecasters that reported their growth forecasts at least 10 times. The data were retrieved directly from Consensus Economics Inc. and cleaned in the following way. First, since forecasters are identified in the data set not by a unique ID but by (sometimes differing versions of) their names, we concatenated forecast series that belong to a single forecaster appearing under different names (e.g., we treat forecasts corresponding to “Mortgage Bankers Assoc”, “Mortgage Bankers” and “Mortgage Bankers Association” as coming from the same forecaster). Second, we also preserved the continuity of forecast series when a forecaster was subject to a merger or acquisition and it is evident from the forecasts which of the forecasters involved continued to produce the forecasts after the merger (e.g., we treated forecasts corresponding to “First Boston”, “CS First Boston”, “Credit Suisse First Boston” and “Credit Suisse” as coming from the same forecaster). The other forecaster involved in the merger or acquisition is assumed to leave the panel after the merger.

In total, we end up with 188,639 individual forecasts from 30 different countries, of which 104,894 are from 14 advanced economies (Table 1). The forecasts are made for target years between 1989 and 2011, with the number of observations increasing towards the end of the sample as more countries were covered by the survey and the average number of panelists per country increased. On average, our data set includes nearly 16 individual forecasts per period for each country. The forecasts tend to slightly overestimate growth in the emerging economies when measured against the current data vintages for GDP growth. (Real-time data vintages are not available for all countries in the sample.)

Table 1. Basic Features of Forecast Data

Source: Authors’ estimates.

As expected, the average root mean squared forecast error (RMSFE) declines as the forecast horizon shortens (Figure 1). In other words, forecast errors become smaller towards the end of the target year (as the horizon, h, approaches 1). RMSFEs for emerging economies are, on average, more than twice as high as those for advanced economies at long forecast horizons and still almost 75 percent higher at the end of the target year.
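A sketch of the RMSFE-by-horizon calculation behind Figure 1, assuming a hypothetical DataFrame with columns forecast, actual (the current-vintage outcome), horizon, and a country-group label; it is meant only to make the definition concrete.

```python
import pandas as pd

fc = pd.read_csv("forecasts_with_actuals.csv")   # hypothetical file

rmsfe = (fc.assign(sq_err=(fc["forecast"] - fc["actual"]) ** 2)
           .groupby(["group", "horizon"])["sq_err"]   # group: advanced / emerging
           .mean()
           .pow(0.5)
           .rename("rmsfe"))
print(rmsfe.unstack("group"))                         # one column per country group
```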

Figure 1. Root Mean Squared Forecast Errors over Forecast Horizons

Note: h refers to the forecast horizon.

The size of forecast revisions evolves differently over the forecast horizons for advanced and emerging economies (Figure 2). Though the relationship is not monotonic for either country group, the patterns differ. For advanced economies, revisions are larger around the turn of the year than at very early and very late forecast horizons, and in general the average size of revisions does not vary much with the forecast horizon. For emerging economies, revisions are much smaller at very long forecast horizons and much larger during the target year (h<=12). At the end of the target year (h=1) the average revision in emerging economies is about twice as large as in advanced economies. The latter indicates that in emerging economies uncertainty about the actual data is substantially higher than in advanced economies just before the end of the forecasting horizon, possibly owing to lags in statistical data collection and the poor quality of initial data releases.

Figure 2. Mean Absolute Revisions over Forecast Horizons

Note: h refers to the forecast horizon.

The distribution of forecast revisions shows that forecasts are frequently changed only a little (as indicated by the high density around zero, Figure 3). The distributions at all horizons are unimodal and bell-shaped. The distribution of revisions is flatter for emerging economies than for advanced economies; forecasts for emerging economies seem to be revised less frequently, but the revisions are larger.

Figure 3. Distribution of Forecast Revisions

Note: h refers to the forecast horizon. For each case, revisions are computed over k=3 months.

The data show a surprising difference in the skewness of the revision distributions for emerging and advanced economies. Whereas the distributions have a significantly negative skew at most horizons for advanced economies, for emerging economies they are symmetric at long forecast horizons and positively skewed at shorter ones. In other words, for emerging economies negative revisions tend to be more frequent but smaller than upward revisions, while for advanced economies the pattern is reversed.

Forecasts become more clustered as the forecast horizon shrinks (Table 2). Deviations from the average forecast follow a unimodal distribution at all forecast horizons, with most forecasts close to the average forecast (Figure 4). For advanced economies only very few deviations are larger than half a percentage point. In contrast, dispersion is considerably larger for emerging economies, where the data show a considerable degree of disagreement across forecasters even at the end of the target year (h=1). Again, this reflects the fact that uncertainty about the actual data release is substantially larger in these economies than in advanced economies.

Table 2. Revisions and Deviations from the Average Forecast

Source: Authors’ estimates.

Figure 4. Distribution of Deviation of Individual Forecasts from Average (Consensus)

Note: h refers to the forecast horizon.

IV. Empirical Findings

A. Average Forecasts

The left-hand side of Table 3 provides the results of estimating equation (5) on average forecasts. As stated above, we choose k=3 as the horizon over which revisions are calculated, in line with a quarterly frequency of updating forecasts. (Using a one-month horizon would result in many zero revisions.) The horizons we pick for our estimations are h=1, 4, 7, 10, 13 and 16. (Our results are robust to this choice; for example, they are similar if we pick h=2, 5, 8, and so on.)

Table 3. Information Rigidity and Forecast Smoothing

Source: Authors’ estimates.
Note: Numbers below the coefficients are t-statistics. Asterisks indicate the degree of significance of coefficients: *** 1 percent, ** 5 percent, and * 10 percent. Regressions include a fixed effect for each country for average forecasts and a fixed effect for each forecaster for individual forecasts; the constants are identified by restricting the sum of all fixed effects to equal 0. The ratio of coefficients on past revisions is defined as the quotient of the baseline rigidity parameter for individual revisions and the equivalent for the revisions of the average forecast. Results are obtained by excluding the forecasts made in December 2008 for the growth rate of 2009, which are heavily driven by the adjustment of forecasts to the progression of the Great Recession. Including these observations raises the effect of "Past revision*Horizon 13" to about 0.84 (for both estimators), which would imply a total rigidity parameter above 1, inconsistent with any theory of informational rigidities.

We find strong and consistent evidence of information rigidities in consensus forecasts. There is a strong positive correlation between the current forecast revision and its first lag for all country groups and for both estimation methods. Coefficients on lagged revisions are highly statistically significant in all cases.

The extent of information rigidities appears to be broadly similar in forecasts for advanced and emerging economies. The coefficient on lagged revisions for emerging economies at very short forecast horizons is 0.41 compared to 0.37 for advanced economies.

Information rigidities tend to be larger around the turn of the year, i.e., at forecast horizons between 13 and 10 (Figure 5). Coefficients on interaction terms between lagged revisions and the horizon-indicator function are positive and statistically significant for horizons 10 (emerging economies) and 13 (advanced economies), respectively.[4] For other horizons the additional effects are much smaller and not significantly different from zero. These results are broadly robust to the estimation method, not only in the sign and statistical significance of the coefficients but also in their size.

Figure 5. Informational Rigidities at Different Forecast Horizons (Consensus)

Note: h refers to the forecast horizon.

Possible explanations for this pattern could relate to the fact that the quarter-on-quarter growth rates for the last quarter of a year have a particularly large effect on the annual growth rate of the following year. There may also be institutional or behavioral explanations where forecasters switch focus from the current year forecasts to the next year forecasts around the turn of the year.

When we do not condition on the length of the forecast horizon, the degree of informational rigidity estimated with our specification based on average forecast revisions is equal to 0.5 for both advanced and emerging economies. Given that we measure revisions at a quarterly frequency, these estimates imply in the sticky information framework that forecasters update their forecasts about every six months on average. This is a higher updating frequency than found in other papers estimating sticky information models based on aggregate expectations data for smaller sets of countries (e.g. Mankiw and Reis, 2002; Khan and Zhu, 2006; Döpke et al., 2008). Analogously, for the imperfect information framework the estimates imply a weight of about 0.5 assigned to past forecasts in the construction of current forecasts (see equation (4)). This is considerably higher than the estimate of 0.14 presented in Coibion and Gorodnichenko (2012, p. 143) for the United States.
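To make the mapping explicit (a back-of-the-envelope reading rather than a calculation reported in the paper): with revisions measured at a quarterly frequency, λ is the per-quarter probability of not updating, so the expected time between updates is

\[
\frac{3\ \text{months}}{1-\lambda} \;=\; \frac{3}{1-0.5} \;=\; 6\ \text{months}.
\]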

B. Individual Forecasts

Information Stickiness

Next, we measure the frequency of forecast updating from individual data. The share of forecasters who chose to update their forecasts at least once in the three months prior to a given forecast horizon ranges between 0.8 and 0.9 across forecast horizons (Figure 6). This shows that most forecasters choose to update their forecasts quite frequently.[5] These estimates are close to those obtained by Andrade and Le Bihan (2010) for the European Survey of Professional Forecasters.

Figure 6. Fractions of Revised Individual Forecasts

Source: Authors’ estimates.
Note: Fractions show how many forecasters on average revised their forecasts at least once during the three months prior to the forecast horizon indicated on the horizontal axis.

Average fractions for advanced economies tend to be higher than those for emerging economies, suggesting that forecasts for advanced economies are revised more frequently than those for emerging economies. This finding of less prevalent updating of emerging economy forecasts is consistent with a larger fraction of zero updates and flatter distributions of revisions for forecasts for emerging economies (Figure 3).

For both country groups, the share of forecasters who update their forecasts tends to increase slightly as the forecast horizon shrinks. In addition, there is a hump around the turn of the year, i.e., at about h=13 for most countries. This is consistent with the basic statistical evidence on the size of revisions over forecast horizons shown above.

These results imply a higher frequency of updating than suggested by the regressions based on average forecast data shown before, and hence a smaller role of sticky information in explaining the overall degree of information rigidity in economic forecasts. Fractions obtained from individual forecast data are considerably higher than the implied estimates from the previous section of the share of forecasters that update their forecasts each quarter. The coefficients on lagged revisions estimated using average forecast data yield an estimate of the probability of updating a forecast in a given quarter of only about 0.06 to 0.75 (depending on the forecast horizon), compared with 0.8 to 0.9 based on the individual data.

In contrast, a high share of forecasters who update their forecasts is perfectly consistent with the theory of imperfect information. In fact, in its pure form the theory predicts that all forecasters continuously update their forecasts. Andrade and Le Bihan (2010) demonstrate, however, that the friction introduced by the usual convention of rounding published forecasts to the first digit results in a plausible estimate of this share of about 0.8 to 0.9. Their simulations also predict that the share should be smaller for long forecast horizons than for short-term forecasts, which is in line with our estimates. Thus, our findings are broadly consistent with the theory of imperfect information, but they provide evidence against the theory of sticky information.

Forecast Smoothing

Regression analysis shows strong evidence of forecast smoothing in individual forecasts. The right-hand side of Table 3 reports results of estimating equation (6) using the individual forecast data. The coefficient on the lagged revision (which, as discussed in Section II, provides a measure of general forecast smoothing rather than an exact mapping to the existing information theories) is positive and statistically significant in all specifications.

Thus, while the results of the previous section suggest that informational stickiness is not a big issue in our data set, these estimates imply that individual forecasters smooth their forecasts due to other factors. The degree of rigidity is estimated, however, to be smaller than that for the consensus forecasts. The magnitude of the difference is given in the row labeled “Ratio of Coefficients on Past Revisions”, which shows the ratio of the coefficient on lagged revisions estimated on individual forecasts to that estimated on average forecasts; the estimate of persistence in forecast revisions is about halved. This suggests that the process of averaging forecasts induces additional stickiness.

As with consensus forecasts, we find differences in the extent of smoothing in forecasts for advanced and emerging economies. Coefficients on lagged revisions are higher for emerging economies (0.23 versus 0.13 in the OLS regressions, for example). These results are consistent with the graphical evidence discussed in Section III (Figure 4) and suggest that information rigidities are more pronounced in forecasts for emerging economies, possibly owing to greater lags in data releases and weaker quality of economic statistics.

Also similar to the regressions based on average forecasts, those based on individual forecast data suggest that forecast smoothing is non-monotonic over forecast horizons. For advanced economies, coefficients on interaction terms between lagged revisions and horizon variables are strongly positive for horizons 7, 10 and 13, with the largest coefficient at h=13. ABB standard errors indicate that these effects are statistically different from zero; the more conservative Driscoll-Kraay standard errors of the OLS regression, however, lead to the opposite conclusion in most cases. For emerging economies, the results pertaining to the interaction terms are even weaker; only the coefficient on the interaction term for horizon 10 is strongly positive and statistically significant based on the ABB standard errors. Overall, while forecast smoothing at the individual level increases somewhat at medium-range forecast horizons, the evidence for the horizon effect is much weaker than in the regressions based on aggregate forecast data.

A closer look at the distribution of forecast persistence across countries reveals substantial variation. Figure 7 shows the distribution of the estimated parameters across countries (based on separate regressions for each country) for consensus forecasts and for the individual data (neglecting any horizon effects). Consistent with the panel-based finding that the degree of rigidity is less pronounced in the individual data than in the consensus data, the mean of the distribution is lower for the individual data. Also, the mean for emerging economies is higher than that for advanced economies, confirming that forecast smoothing is on average stronger in emerging economies.

Figure 7. Distribution of Information Rigidity Coefficients across Countries

Note: Density estimates based on the different estimates for all countries in the sample using a Gaussian kernel and a bandwidth of 0.08.

C. Nonlinearity

Our estimations of the threshold models show the presence of nonlinearities in forecast smoothing. Looking at individual forecast revisions, forecast smoothing is less pronounced in the tails of the distribution of revisions than in the main body of the distribution (Table 4). Coefficients on past revisions in the 90th and 10th percentiles of the distribution have negative signs and are statistically significant for both country groups. For emerging economies, the effect in the 10th percentile is much larger than in the 90th percentile. This means that in these countries forecast smoothing declines by more after large downward revisions than after large upward revisions, i.e., positive news shocks are incorporated into forecasts more slowly than negative news. In contrast, the effects related to nonlinearities that depend on the relative position of a forecast in the distribution of all available forecasts turn out to be insignificant; we find no additional effect either for forecasts that are far out in the tails of the distribution of all forecasts or for forecasts close to the average forecast.

At the aggregate level, the estimates show that the degree of stickiness is significantly lower after large downward revisions of the average forecast. No such effect is found for the other side of the distribution of revisions. This suggests that forecast updating is more synchronized during downturns than in “normal” times, so that in these periods less of the persistence of aggregate revisions is caused by staggered processing of news across forecasters. All results of this section are broadly robust when the regressions are estimated using the ABB method.

V. Conclusion

This paper has provided evidence on the dynamics of forecasts of real GDP growth using a large panel data set of individual forecasters in 30 advanced and emerging market economies for target years between 1989 and 2011. The data set used in the paper is far larger than any panel of individual forecasts used in the previous literature, and it covers a wide range of different countries.

Previous work has documented that forecasts are characterized by a significant degree of smoothing or rigidity (Nordhaus, 1987; Coibion and Gorodnichenko, 2010), and a number of theories have been offered for explaining this phenomenon. In particular, Coibion and Gorodnichenko show that finding a correlation between forecast errors and past forecast revisions is consistent with two of the leading explanations for forecast smoothing, viz., the sticky information model and the “noisy” information model.

Using an equivalent test that regresses forecast revisions on past forecast revisions, we confirm the finding of persistence in average forecast revisions. We also contribute three novel perspectives on forecasters’ behavior, drawing on our large set of individual forecasts for advanced and emerging economies:

First, we provide evidence against the usefulness of the sticky information model for describing the dynamics of professional growth forecasts. We show that estimates of informational rigidity based on consensus (average) forecasts overstate the true degree of forecasters’ inattentiveness. When consensus forecasts are used, as has been common practice in previous studies, estimates suggest that forecasts are updated on average every 6 months. Our analysis of the fractions of forecasters who update their forecasts, however, points to a higher frequency of updating. The evidence based on fractions hence suggests a small role for sticky information in explaining the overall degree of information rigidity in economic forecasts. Likewise, the predictability of individual forecast revisions casts doubt on the validity of the sticky information theory, which implies that revisions are unpredictable (though infrequent).

Second, we find that information rigidities are more pronounced in emerging economies than advanced economies. Possible explanations for this finding are greater uncertainty about their cyclical positions and transmission of shocks, a weaker quality of economic and financial statistics, or fewer resources devoted to monitoring these economies and producing up-to-date forecasts for them.

Finally, there is evidence of nonlinearities in forecast smoothing. It is less pronounced in the tails of the distribution of individual forecast revisions than in the main body of the distribution.

Many interesting issues are left for future research. In particular, herding (another prominent feature of forecasters’ behavior) and its interaction with forecast smoothing deserve a closer look. Further topics worth exploring are the implications of uncertainty for the dynamics of macroeconomic forecasting and the evolution of forecast rigidities over the business cycle.

Table 4. Nonlinear Effects in Forecast Rigidities

Source: Authors’ estimates.
Note: The model is estimated using OLS. T-statistics are based on Driscoll-Kraay standard errors. Numbers below the coefficients are standard errors. Asterisks indicate the degree of significance of coefficients: *** 1 percent, ** 5 percent, and * 10 percent. The model includes a fixed effect for each individual panelist; the constant is identified by restricting the sum of all fixed effects to equal 0. Estimates of horizon-specific effects (analogous to those in Table 3) are suppressed.

References

  • Ager, P., M. Kappler, and S. Osterloh, 2009, “The Accuracy and Efficiency of the Consensus Forecasts: A Further Application and Extension of the Pooled Approach,” International Journal of Forecasting, 25(1), 167–181.

  • Andrade, P., and H. Le Bihan, 2010, “Inattentive Professional Forecasters,” Banque de France Working Paper No. 307 (December).

  • Arellano, M., and S. Bond, 1991, “Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations,” Review of Economic Studies, 58, 277–297.

  • Arellano, M., and O. Bover, 1995, “Another Look at the Instrumental Variable Estimation of Error-Components Models,” Journal of Econometrics, 68, 29–51.

  • Batchelor, R., 2007, “Bias in Macroeconomic Forecasts,” International Journal of Forecasting, 23(2), 189–203.

  • Benabou, R., 2009, “Groupthink: Collective Delusions in Organizations and Markets,” NBER Working Paper No. 14764 (March).

  • Coibion, O., and Y. Gorodnichenko, 2010, “Information Rigidity and the Expectations Formation Process: A Simple Framework and New Facts,” NBER Working Paper No. 16537, National Bureau of Economic Research.

  • Coibion, O., and Y. Gorodnichenko, 2012, “What Can Survey Forecasts Tell Us about Information Rigidities?,” Journal of Political Economy, 120(1), 116–159.

  • Crowe, C., 2010, “Consensus Forecasts and Inefficient Information Aggregation,” IMF Working Paper WP/10/178 (July).

  • Davies, A., and K. Lahiri, 1995, “A New Framework for Analysing Survey Forecasts Using Three-Dimensional Panel Data,” Journal of Econometrics, 68, 205–227.

  • Döpke, J., J. Dovern, U. Fritsche, and J. Slacalek, 2008, “Sticky Information Phillips Curves: European Evidence,” Journal of Money, Credit, and Banking, 40(7), 1513–1519.

  • Dovern, J., and J. Weisser, 2011, “Accuracy, Unbiasedness and Efficiency of Professional Macroeconomic Forecasts: An Empirical Comparison for the G7,” International Journal of Forecasting, 27(2), 452–465.

  • Dovern, J., U. Fritsche, and J. Slacalek, 2012, “Disagreement among Forecasters in G7 Countries,” Review of Economics and Statistics, 94(4), 1081–1096.

  • Dovern, J., U. Fritsche, P. Loungani, and N. Tamirisa, forthcoming, “Are Informational Rigidities of Macroeconomic Forecasts Influenced by the Business Cycle?,” unpublished manuscript.

  • Driscoll, J.C., and A.C. Kraay, 1998, “Consistent Covariance Matrix Estimation with Spatially Dependent Panel Data,” Review of Economics and Statistics, 80, 549–560.

  • Gallo, G.M., C.W.J. Granger, and Y. Jeon, 2002, “Copycats and Common Swings: The Impact of the Use of Forecasts in Information Sets,” IMF Staff Papers, 49(1), 4–21.

  • Isiklar, G., K. Lahiri, and P. Loungani, 2006, “How Quickly Do Forecasters Incorporate News?” Journal of Applied Econometrics, 21, 703–725.

  • Isiklar, G., and K. Lahiri, 2007, “How Far Ahead Can We Forecast? Evidence from Cross-Country Surveys,” International Journal of Forecasting, 23(2), 167–187.

  • Khan, H., and Z. Zhu, 2006, “Estimates of the Sticky-Information Phillips Curve for the United States,” Journal of Money, Credit & Banking, 38(1), 195–207.

  • Lahiri, K., and X. Sheng, 2008, “Evolution of Forecast Disagreement in a Bayesian Learning Model,” Journal of Econometrics, 144(2), 325–340.

  • Loungani, P., 2001, “How Accurate Are Private Sector Forecasts? Cross-Country Evidence from Consensus Forecasts of Output Growth,” International Journal of Forecasting, 17, 419–432.

  • Loungani, P., H. Stekler, and N. Tamirisa, 2011, “Information Rigidity in Growth Forecasts: Some Cross-Country Evidence,” IMF Working Paper No. 11/125, International Monetary Fund, Washington, DC.

  • Mankiw, N.G., and R. Reis, 2002, “Sticky Information versus Sticky Prices: A Proposal to Replace the New Keynesian Phillips Curve,” Quarterly Journal of Economics, 117(4), 1295–1328.

  • Nickell, S., 1981, “Biases in Dynamic Models with Fixed Effects,” Econometrica, 49(6), 1417–1426.

  • Nordhaus, W., 1987, “Forecasting Efficiency: Concepts and Applications,” Review of Economics and Statistics, 69, 667–674.

  • Reis, R., 2006, “Inattentive Producers,” Review of Economic Studies, 73(3), 793–821.

  • Sims, C., 2003, “Implications of Rational Inattention,” Journal of Monetary Economics, 50(3), 665–690.

  • Tversky, A., and D. Kahneman, 1981, “The Framing of Decisions and the Psychology of Choice,” Science, 211(4481), 453–458.

  • Woodford, M., 2002, “Imperfect Common Knowledge and the Effects of Monetary Policy,” in P. Aghion, R. Frydman, J. Stiglitz, and M. Woodford, eds., Knowledge, Information, and Expectations in Modern Macroeconomics: In Honor of Edmund S. Phelps, Princeton University Press.
[1] Jonas Dovern is at Kiel Economics Research & Forecasting GmbH & Co. KG, Ulrich Fritsche is at Hamburg University, and Prakash Loungani and Natalia Tamirisa are at the International Monetary Fund. The authors thank Angela Espiritu and Jair Rodriguez for excellent research assistance. They are also grateful to Oli Coibion and Yuriy Gorodnichenko for insightful suggestions on an earlier draft, as well as participants in the 2012 George-Washington-University-IMF Forecasting Forum, the 2011 NBER Summer Institute, the 2011 International Symposium on Forecasting, the 2012 Econometric Society’s Australasian Meeting, and seminars at George Washington University and Hamburg University for helpful comments. This paper should not be reported as representing the views of the IMF. The views expressed in this paper are those of the authors and do not necessarily represent those of the IMF or IMF policy.

[2] This value for k is a reasonable choice that balances the trade-off between losing too much of the high-frequency dynamics (for larger values of k) and sampling too many “zero revisions” arising because some forecasters update their forecasts only quarterly (for lower values of k).

[3] Note that in the original version of the sticky information framework (Mankiw and Reis, 2002) the degree of forecast rigidity is assumed to be an exogenously given constant.

[4] As noted in the footnote to Table 3, if we do not exclude the forecasts made in December 2008 for the growth rate of 2009, which are heavily driven by the adjustment of forecasts in the aftermath of the Lehman collapse, we obtain an estimate of the total degree of informational rigidity (for advanced economies and h=13) larger than 1. This is not consistent with theories of informational rigidity and is an extreme demonstration of the fact that the predictability of aggregate revisions tends to increase during recessions. The topic of information rigidities and uncertainty is explored in more detail in a companion paper (Dovern et al., forthcoming).

[5] Though they often change their forecasts only very little, as shown in Section III.
