Two groups of methods may be used to estimate potential output. One group is based on statistical techniques that attempt to decompose a time series into permanent and cyclical components; the other is based on estimating structural relationships, which are usually derived from economic theory. Potential output is estimated for Malaysia using the cubic spline-smoothing method; the results indicate that the output gap would close toward end-2000 or early 2001. Thereafter, inflation is forecast using the estimated output gap as one of its determinants.
Introduction
Potential output is an important input to macroeconomic policy design. For practical purposes, potential output can be defined as the maximum output an economy can sustain without generating an increase in inflation. The output gap is simply the difference between actual and potential output. In the short run, estimates of the output gap provide a key indicator of inflationary pressures. This section estimates potential output for Malaysia using various approaches applied to data through end-1999; inflation is then forecast using the estimated output gap as one of its determinants. The different approaches used are briefly outlined,1 an estimate of potential output in Malaysia and the implied output gap are provided, and an inflation equation estimated using the output gap follows.
To summarize, of the several approaches available, the potential output estimated using the cubic spline-smoothing method appears to provide the most plausible result, ex post, in terms of the implied output gap. Estimates suggest the gap is expected to close in late 2000 or early 2001, implying that inflationary pressures could start building up in the near future. The estimated inflation equation confirms that inflation would rise above 3 percent (year-on-year) during 2000 if the gap were to close.2
Measuring Potential Output
The concept of potential output is not well defined and is difficult to measure. The literature broadly suggests two definitions (see Scacciavillani and Swagel, 1999). The first arises from the assumption that the business cycle results mainly from movements in aggregate demand in relation to a slowly moving level of aggregate supply. This occasions (potentially substantial) swings during which there are overutilized/underutilized resources.
The second definition follows the neoclassical tradition, where potential output is assumed to be driven by exogenous productivity shocks to aggregate supply that determine long-run growth and, to a large extent, short-term fluctuation in output over the business cycle.3 According to the latter approach, output fluctuates around its potential level but generally without wide or prolonged divergence.
Potential output is an unobservable variable, making it difficult to measure. In practice, there are a plethora of techniques, each with advantages and drawbacks, and no single methodology dominates. These techniques could be classified broadly into two groups.
The first group of methods is purely statistical and attempts to decompose a time series into permanent and cyclical components. This category would include filtering methods (moving averages, the Hodrick-Prescott (HP) filter, the Beveridge-Nelson method, Kernel estimators, and spline-smoothing techniques) and unobservable component methods (both univariate and multivariate approaches). Most of these approaches derive the permanent component as a trend by minimizing the distance between the points on the trend and the actual value at prespecified intervals, while penalizing frequent changes of the second moments of the estimated trend in order to ensure some degree of smoothness of the estimated trend. Although these approaches are attractive, in that they require considerably less data than other methods, they become ill-defined at the beginning and end of samples, or if there is a structural break in the data.
The second group of methods is based on estimating structural relationships, including production functions, multivariate systems of equations, structural vector autoregressions (VARs), and “demand-side” models. The production function methodology represents the middle ground between a full-scale structural model and the various univariate approaches (such as filters and unobservable components). Even so, this approach is considerably more demanding in terms of data requirements than filtering approaches, and the nature of the inputs data (particularly the capital stock calculations) implies significant potential measurement errors in inputs.4 The demand-side approach relates output directly to measures of spare capacity in the economy or supply-side measures. The structural VAR approach associates potential output with aggregate supply (and supply shocks) and cyclical fluctuations with changes in aggregate demand, using a structural VAR with restrictions imposed on the long-run effects of shocks on output and unemployment. These long-run restrictions are used to identify structural supply and demand shocks by allowing supply shocks to have a permanent effect on output, while demand shocks are assumed to have only a temporary effect on output.5
Estimating the Output Gap
Of the approaches outlined above (and described more fully in the Appendix), a univariate filtering technique, namely the cubic spline-smoothing method (CSSM), was chosen to estimate the potential output for Malaysia. The main reason for selecting this approach was its simplicity, especially relating to data requirements. A production function approach was also tried, but it did not indicate an improvement over results obtained from filtering techniques. The CSSM was preferred over other filtering techniques as it rendered the most plausible potential output path in terms of the implied output gap, especially in relation to actual inflation.6
Two weaknesses need to be addressed when using any of the smoothing methods: the structural break in the GDP series, due to the sharp decline in output in 1998, and the end-period problem. Given the structural break in 1998, smoothing methods tend to underweight GDP growth in 1997, implying a positive output gap even though there was no evidence of overheating (e.g., no inflationary pressures were noted in 1997). Furthermore, the end-point of the actual data, i.e., end-1999, which is still a point on the recovery path, tends to pull down the estimated potential output, so that the estimated output gaps in 1998–99 are small. In view of the magnitude of the output decline in 1998, with no apparent physical damage or dramatic shift in the structure of demand that would make existing capital redundant on a permanent basis, the low output gap during the postcrisis period does not appear to be consistent with the actual unutilized productive capacity of the economy.
To address these issues, an interpolated point was used for 1998 (instead of the actual value), and the sample period was extended to 2002 using the 1998 IMF World Economic Outlook (WEO) forecast.7 Annual data were used for the period 1970–2002. The estimated output gap, adjusted for these two weaknesses (Adj. SP), is compared with the gap estimated without the interpolated point in 1998 (Figure 3.1), using the CSSM (SP), the HP filter (with values of λ=7 (HP7) and λ=100 (HP100)), the Kernel smoothing method (KN), and the production function (PF). As shown in Table 3.1, except for the adjusted CSSM, the other smoothing methods tend to overstate the degree of overheating during the precrisis period while understating the output gap during the postcrisis period. The degree of over- and underestimation is made worse by the extension of the end-period through 2003.
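To make the adjustment concrete, the following is a minimal Python sketch of the procedure described above and in footnote 7: the 1998 observation is treated as missing and interpolated with a cubic smoothing spline, the series is extended with forecast values, and the spline is refit to obtain potential output. The GDP figures, the forecast path, and the smoothing factor are placeholders, not the actual data or the GCV-chosen λ.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical annual log real GDP, 1970-1999 (placeholder numbers, not actual data).
years = np.arange(1970, 2000)
log_gdp = 10.0 + 0.07 * (years - 1970) + 0.02 * np.random.default_rng(0).standard_normal(years.size)

# Step 1: treat 1998 as missing and interpolate it with a cubic smoothing spline
# fitted to the remaining observations (the "interpolated point" in the text).
mask = years != 1998
interp_spline = UnivariateSpline(years[mask], log_gdp[mask], k=3, s=0.05)
log_gdp_adj = log_gdp.copy()
log_gdp_adj[years == 1998] = interp_spline(1998)

# Step 2: extend the sample through 2002 with placeholder WEO-style forecasts
# to mitigate the end-point problem.
forecast_years = np.arange(2000, 2003)
forecast = log_gdp_adj[-1] + 0.055 * np.arange(1, 4)   # assumed growth path
full_years = np.concatenate([years, forecast_years])
full_series = np.concatenate([log_gdp_adj, forecast])

# Step 3: fit the cubic smoothing spline on the adjusted, extended series;
# the smoothing factor s plays the role of lambda (chosen by GCV in the text).
potential = UnivariateSpline(full_years, full_series, k=3, s=0.05)(full_years)
output_gap = full_series - potential   # log difference, roughly percent gap
print(np.round(100 * output_gap[-6:], 2))
```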
Figure 3.1. Comparing Estimates of the Output Gap With (Adj. SP) and Without (SP) the Interpolated Point in 1998 (Changes in Log (Index))
Table 3.1. Estimates of Output Gap, 1996–2001

| | Adj. SP | SP | HP7 | HP100 | KN | PF |
|---|---|---|---|---|---|---|
| 1996 | 0.04 | 1.62 | 4.17 | 7.45 | 0.47 | 5.47 |
| 1997 | 0.97 | 4.93 | 6.69 | 9.10 | 4.45 | 5.49 |
| 1998 | -9.39 | -4.25 | -4.70 | -3.87 | -3.98 | -6.21 |
| 1999 | -1.94 | -1.32 | -3.06 | -3.47 | -0.26 | -4.31 |
| 2000 | -0.86 | 0.23 | -1.36 | -2.31 | 0.05 | … |
| 2001 | 0.15 | 0.20 | -0.59 | -1.45 | -0.18 | … |
The output gap estimated using the production function does not indicate an improvement over results obtained from filtering techniques. This reflects in part the fact that smoothing techniques were used to derive the potential labor force (i.e., at a level consistent with the nonaccelerating inflation rate of unemployment (NAIRU)) and the trend total factor productivity (TFP) growth. This simply shifts the problems associated with estimating trend GDP one layer down, to the estimation of the trend labor force and trend total factor productivity growth.8
Determinants of Inflation
Studies have used various approaches to modeling inflation, depending on the structure of the country and the objectives of the analysis. Approaches from the supply side include markup models, where the general domestic price level is estimated as a markup over total unit costs, including labor costs, import prices, and energy prices (see de Brouwer and Ericsson, 1998). Another approach centers on a money market equation, sometimes together with an explicit equation for purchasing power parity (PPP) (e.g., Jonsson, 1999). In other studies, external sector disequilibrium pressure is measured using the difference between actual and estimated equilibrium exchange rates (Lim and Papi, 1997). Single-equation models include some form of expectations-augmented Phillips curve (Razzak, 1995; and Stock and Watson, 1999). A more formal structural approach involves various combinations of the above; for example, Chhibber and others (1989) introduce PPP (for traded goods) and markup pricing (for nontraded goods), with allowances made for controlled prices.
The approach adopted in this section is eclectic and tries to capture the key sources of inflationary pressures in Malaysia. The pressure exerted on prices by excess domestic demand or supply shocks (through the output gap) is dampened to the extent that some of the pressure is released through adjustments in imports. Changes in imports as a percent of GDP, therefore, would play a role in determining inflation. Furthermore, the PPP condition is introduced to accommodate the impact of foreign prices and the exchange rate on domestic inflation. Specifically:
where p = domestic price; gap = output gap; img = imports as a share of GDP; ex = nominal effective exchange rate; and wp = foreign price.
The estimation methodology is the familiar maximum likelihood cointegration procedure of Johansen. The long-run relationship is estimated using a VAR on p, img, ex, and wp. The short-run dynamics are estimated using an error-correction mechanism (ECM), in which the output gap is introduced, and inflation is forecast 12 months ahead (through 2001Q1). Inflation is projected to reach 3½ percent to 4 percent (year-on-year) by the first quarter of 2001, although, given the short observation period, the result needs to be interpreted with caution.
Data
Data are quarterly, covering 1991 (Q1) through 2000 (Q1), although the ECM is estimated for 1991–99.
- p: domestic price, defined as the logarithm of the consumer price index. As there are no detailed breakdowns of the consumer price index for the specified period, no adjustments are made for transitory fluctuations or for rigidities in prices that are controlled or influenced by the government.
- img: the ratio of merchandise imports to GDP.
- ex: the nominal effective exchange rate index, defined as the log of the nominal effective exchange rate derived from trading partners weighted by their relative trade shares (IMF, IFS).
- wp: foreign price, defined as the log of an index derived as a composite of the price indices of the United States (30 percent), Japan (20 percent), Singapore (20 percent), Germany (15 percent), and the United Kingdom (15 percent) (IMF, IFS).
- gap: the output gap, defined as actual output divided by potential output. Potential output is obtained using spline smoothing, allowing for the structural break during parts of 1998 and 1999 and using the same approach as elaborated above, but with quarterly data. Hence, gap < 1 implies excess capacity.
Integration and Cointegration
Statistical tests indicate that all variables are integrated of order two or lower. In particular, according to the Augmented Dickey-Fuller (ADF) statistic, p appears to be I(1), and the other three variables appear to be I(2). However, their estimated coefficients are numerically much less than unity, i.e., the coefficients of Δimg, Δex, and Δwp are (-0.14 = 1 − 1.14), (-0.04 = 1 − 0.96), and (0.33 = 1 − 0.67), respectively. Thus, all four variables are treated below as if they were I(1).9
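For illustration, a minimal sketch of how such ADF tests could be run with statsmodels is shown below; the file name and column names are placeholders for the quarterly series defined above, not the actual dataset.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical quarterly data file holding the series p, img, ex, and wp.
df = pd.read_csv("malaysia_quarterly.csv", index_col=0)

for name in ["p", "img", "ex", "wp"]:
    level_stat, level_pval = adfuller(df[name].dropna(), autolag="AIC")[:2]
    diff_stat, diff_pval = adfuller(df[name].diff().dropna(), autolag="AIC")[:2]
    print(f"{name}: ADF p-value in levels {level_pval:.3f}, "
          f"in first differences {diff_pval:.3f}")
```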
The Johansen maximum likelihood estimation procedure for finite-order VARs is used to obtain the long-run relationship between integrated variables. As there was no a priori information on the lag order of the VAR, a fourth-order VAR was used and simplified to a first-order VAR on the basis of the results presented in Table 3.2, below.
Table 3.2. F and SC Statistics for Sequential Reduction from VAR(4) to VAR(1)

| Null Hypothesis (System) | k² | Log Likelihood | SC³ | VAR(4)¹ | VAR(3)¹ | VAR(2)¹ |
|---|---|---|---|---|---|---|
| VAR(4) | 68 | 639.5 | -32.60 | | | |
| VAR(3) | 52 | 622.2 | -33.26 | 0.988 (0.489) | | |
| VAR(2) | 36 | 599.1 | -33.54 | 1.409 (0.143) | 1.873 (0.047) | |
| VAR(1) | 20 | 583.5 | -34.30 | 1.488 (0.086) | 1.755 (0.030) | 1.445 (0.151) |

¹ The numbers are F-statistics for testing the null hypothesis against the maintained hypothesis shown in the column header, with the tail probability associated with that value of the F-statistic in parentheses.
² Number of unrestricted parameters.
³ Schwarz criterion.
Table 3.3 reports the results of the VAR(1) on p, img, ex, and wp. As indicated by the Johansen maximal eigenvalue (λmax) and trace (λtrace) statistics, the result rejects the null hypothesis of no cointegration in favor of at least one cointegrating relationship. The rows of the β matrix in Table 3.3 can be interpreted as long-run parameters and the elements of the α matrix as adjustment coefficients (see Charemza and Deadman, 1992).
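For readers who wish to reproduce this kind of rank test, a sketch using the Johansen procedure in statsmodels follows; the deterministic-term and lag settings are illustrative and not necessarily those used for Table 3.3, which was produced with PcFiml.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Hypothetical quarterly data file holding the levels of p, img, ex, and wp.
df = pd.read_csv("malaysia_quarterly.csv", index_col=0)
endog = df[["p", "img", "ex", "wp"]].dropna()

# det_order=0 includes a constant; k_ar_diff=0 corresponds to a VAR(1) in levels
# (illustrative choices, not necessarily those behind Table 3.3).
result = coint_johansen(endog, det_order=0, k_ar_diff=0)

print("trace statistics:    ", result.lr1)   # compare with critical values in result.cvt
print("max-eigen statistics:", result.lr2)   # compare with result.cvm
print("cointegrating vectors (columns of beta):")
print(result.evec)
```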
Table 3.3. Cointegration Analysis of Price Data, 1991Q2 to 2000Q1¹

| H₀: rank = p | λmax | 95% | λtrace | 95% |
|---|---|---|---|---|
| p = 0 | 30.02 | 27.1 | 54.93 | 47.2 |
| p ≤ 1 | 15.8 | 21.0 | 24.9 | 29.7 |
| p ≤ 2 | 6.3 | 14.1 | 9.1 | 15.4 |
| p ≤ 3 | 2.9 | 3.8 | 2.9 | 3.8 |

Standardized β coefficients

| p | img | ex | wp |
|---|---|---|---|
| 1.000 | 0.021 | 0.102 | -1.829 |
| -1.443 | 1.000 | -1.044 | -1.713 |
| -42.747 | -10.621 | 1.000 | 78.069 |
| -0.739 | 0.569 | 0.217 | 1.000 |

Standardized α coefficients

| | | | | |
|---|---|---|---|---|
| p | -0.236 | 0.016 | 0.001 | -0.009 |
| img | -0.554 | -0.051 | 0.011 | -0.186 |
| ex | 1.609 | 0.164 | -0.008 | -0.108 |
| wp | 0.166 | 0.001 | 0.001 | 0.004 |

¹ PcFiml 9.0 for Windows was used to estimate the VAR.
Significant at the 95 percent level.
Significant at the 99 percent level.
The first row of β, which is the estimated cointegrating vector, represents a long-run relationship between the variables and can be expressed as:
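The equation itself is not reproduced here; reading it off the first (normalized) row of β in Table 3.3, the long-run relationship would be approximately:

$$
p \;=\; -0.021\,\mathit{img} \;-\; 0.102\,\mathit{ex} \;+\; 1.829\,\mathit{wp} \tag{2}
$$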
All of the estimated coefficients have the expected signs, indicating that Malaysia’s price level declines with larger imports as a percent of GDP and with an appreciation of the nominal effective exchange rate, but increases with higher world prices. Having established a long-run relationship, a single-equation model is used to assess the short-term behavior of the price level.
The Short-Run Dynamics of Inflation
An unrestricted error-correction model is used to examine the dynamics of inflation in the short run. It incorporates the long-run relationship obtained above as follows:
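The equation is not reproduced in the text; a plausible general form, consistent with the lag indices defined below and the variables ultimately retained, would be:

$$
\Delta p_t = c + \sum_{i} \alpha_i \Delta p_{t-i} + \sum_{i} \beta_i \Delta \mathit{img}_{t-i} + \sum_{i} \gamma_i \Delta \mathit{ex}_{t-i} + \sum_{i} \delta_i \Delta \mathit{wp}_{t-i} + \sum_{i} \phi_i \Delta \mathit{gap}_{t-i} + \sum_{j} \theta_j\, \mathit{gap}_{t-j} + \rho\, \mathit{res}_{t-1} + \sum_{s} \psi_s S_{s,t} + \kappa D_t + \varepsilon_t
$$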
where i = (0, 1, 2, 3, 4) and j = (1, 2, 3); “res” is obtained as p − p(est), where p(est) is obtained from (2) above; S denotes the seasonal dummies; and D is a dummy variable to capture the structural break during 1998. Again, using OLS, the order of lags is reduced sequentially, and the following variables were retained:10
Δpt–2, Δimgt–3, Δext–2, Δwpt–3, Δgapt, gapt–1, and rest–1.
Using the estimated coefficients reported in Table 3.4, inflation is forecast for the 12 months through 2001Q1. The projection indicates an upward trend in inflation, reaching 3½ percent to 4 percent (year-on-year) by 2001Q1. The results are presented in Figures 3.2 and 3.3, below.
Table 3.4. Result of Inflation Estimation by OLS, 1992Q2–1999Q4¹

The present sample is: 1992 (2) to 1999 (4)

| Variable | Coefficient | Std. error | t-value | t-prob | Part R² |
|---|---|---|---|---|---|
| Constant | 0.046055 | 0.027674 | 1.664 | 0.1096 | 0.1075 |
| Dp t-2 | -0.26606 | 0.14359 | -1.853 | 0.0768 | 0.1299 |
| Dimg t-3 | 0.028043 | 0.016735 | 1.676 | 0.1073 | 0.1088 |
| Dex t-2 | -0.018984 | 0.015673 | -1.211 | 0.2381 | 0.0600 |
| Dwp t-3 | 0.77974 | 0.24717 | 3.155 | 0.0044 | 0.3020 |
| Dgap t | -0.11003 | 0.047982 | -2.293 | 0.0313 | 0.1861 |
| Gap t-1 | -0.039224 | 0.027506 | -1.426 | 0.1673 | 0.0812 |
| Res t-1 | -0.016748 | 0.083422 | -0.201 | 0.8426 | 0.0017 |

R² = 0.690712; F(7,23) = 7.3378 [0.0001]; σ = 0.00348159; DW = 2.53
RSS = 0.0002787939991 for 8 variables and 31 observations

¹ PcGive was used to estimate the equation; “D” represents first difference.
Inflation, 1995–2001 (Log (Index), changes)
Appendix. Approaches to Measuring Potential Output
Potential output is defined as the maximum output an economy can sustain without generating an increase in inflation. Two groups of methods may be used to estimate potential output. One group is based on statistical techniques that attempt to decompose a time series into permanent and cyclical components; the other is based on estimating structural relationships derived from economic theory. Below, the various methodologies are outlined and some of their main attributes and drawbacks are highlighted. Detailed descriptions of the methodologies can be found in the studies cited.
Statistical Filters and Smoothing Methods
Simple Trends
The simplest method of estimating potential output is the use of a linear time trend. This can be refined by using spline trends to allow for structural breaks or by including a polynomial in the trend term to allow for nonlinearity (for an example, see Bayoumi and others, 1999).
The main attraction of using a simple time trend or variants thereof is its simplicity. Since the publication of an influential paper by Nelson and Plosser (1982), however, which suggested that output series are best characterized as integrated series, there has been increasing recognition that measuring the permanent component of output, i.e., potential output, with any degree of accuracy is a difficult task. In particular, the existence of a stochastic permanent component implies that potential output cannot be treated as a deterministic trend (Dupasquier, Guay, and St. Amant, 1999).11
The Hodrick-Prescott (HP) and Other Filters
Filters and smoothing methodologies can be as simple as a moving average. More complex smoothing techniques include the HP filter, which chooses trend output, y*, such that, for a given parameter λ (which determines the degree of smoothness), the sum of squared deviations of y from y* plus λ times the sum of squared changes in the rate of change of y* is minimized. Mathematically, trend output y* is derived for a given λ by:
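The minimization problem is not reproduced in the text; the standard HP objective it describes is:

$$
\min_{\{y^*_t\}} \; \sum_{t=1}^{T} \left(y_t - y^*_t\right)^2 \;+\; \lambda \sum_{t=3}^{T} \left(\Delta^2 y^*_t\right)^2
$$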
where Δ2 indicates twice differencing. The larger the parameter λ, the more weight is given to smoothness (determined by the second term) relative to fit (determined by the first term). For a discussion of the statistical properties of the HP filter, see Cogley and Nason (1995), Harvey and Jaeger (1993), and Söderlind (1994).
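As an illustration, the HP trend can be computed with statsmodels; this is a sketch in which the series is a placeholder, not the annual GDP data behind Table 3.1.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Hypothetical annual log real GDP series (placeholder, not the actual data).
years = pd.period_range("1970", "2002", freq="Y")
log_gdp = pd.Series(10.0 + 0.07 * np.arange(len(years)), index=years)

# lamb=100 corresponds to the HP100 variant in Table 3.1; lamb=7 to HP7.
cycle, trend = hpfilter(log_gdp, lamb=100)
output_gap = 100 * cycle          # cycle = series - trend, roughly percent in logs
print(output_gap.tail())
```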
- The HP and other such filters are attractive because considerably less data are required than for other methods such as the production function, unobservable components, or structural VARs, discussed below. Use of these filters, however, has certain disadvantages.
- The resulting measure of potential output depends critically on the choice of the smoothing parameter, λ.12 The smoothing parameter, in effect, determines the “window” of data used to calculate the trend; the larger the smoothing parameter, the broader this window and the smoother the trend.
- The HP filter can induce spurious cyclicality in the smoothed series when the series are integrated or nearly integrated13 (see Harvey and Jaeger, 1993; and Cogley and Nason, 1995).
- Probably the most serious drawback of filtering methods for policymaking purposes is that they become ill defined at the beginning and end of samples; see, for example, Baxter and King (1995), who recommend discarding three years of quarterly data at both ends of a sample. This is a significant drawback for policymakers wishing to use a measure of the current output gap, which focuses on the most recent observations.14
- The HP and other such filtering methods also tend to neglect structural breaks and regime shifts (Scacciavillani and Swagel, 1999).
The Coe-McDermott Method
Coe and McDermott (1997) use a nonparametric (Kernel) smoothing technique, essentially similar to the HP filter, but where the degree of smoothness (in the case of the Kernel estimator, determined by the bandwidth, h, which plays a role similar to the HP λ in determining the “window” of data used for the smoothing) is determined by the data, hence addressing one of the major problems associated with the HP filter. Details of the methodology can be found in the appendix to Coe and McDermott (1997), and a general discussion of nonparametric techniques is provided by Härdle (1990). The method, however, is still likely to share the other problems of the HP filter.
Running Median Smoothing (RMS)
A simple form of the RMS filter uses a running window on the data, the smoothed value in each period being set equal to the median of the values in the window. The method can be extended to apply multiple passes and to use different sizes of data windows and observation weights. See Scacciavillani and Swagel (1999) for more details and an application of this filter.
The RMS has the advantage of removing the effects of outliers that are not close to the particular smoothed value. It also allows for the possibility of structural change because the window of data that is being smoothed shifts.
Cubic Spline-Smoothing Method (CSSM)
Cubic spline smoothing is popular in curve-fitting applications and for interpolating data (e.g., deriving quarterly data from observed annual data). The cubic spline function partitions the data into N “knots.” In this case, the knots are simply periods. The knots may all be the same length or may differ (e.g., at the beginning and/or end of the period). A cubic spline function, g(Z), is then defined for the ith knot, and each cubic spline function is constrained so that where the knots meet, the function values, slopes, and second derivatives are equal (to ensure a smooth and continuous curve). As in the HP filter, the cubic spline measure of potential output requires the specification of λ, which provides the trade-off between smoothness and fit. The potential output measure, g(Z), is then derived from the following optimization problem:
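The optimization problem is not reproduced in the text; the standard cubic smoothing-spline criterion it refers to trades off fit against smoothness in the same way as the HP filter:

$$
\min_{g} \; \sum_{t=1}^{T} \bigl(y_t - g(Z_t)\bigr)^2 \;+\; \lambda \int \bigl(g''(z)\bigr)^2\, dz
$$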
The choice of λ determines the polynomial order of the general solution. The choice of λ can be determined by the data in a similar way to Coe and McDermott’s approach, by choosing λ according to the generalized cross-validation (GCV) criterion, which essentially chooses λ to minimize the average out-of-sample forecast error.15
The cubic spline approach is a smoothing methodology which, like the other filtering methods, has the advantage of a minimal data requirement. The knots provide a means to allow for structural breaks and minimize the impact of outliers on the estimate of potential output. The variable knot width allows for the possibility of increasing or decreasing the number of observations in the first and last knots. Thus, reducing the size of the last knot might improve the measure of potential output and the output gap for the most recent period. The spline-smoothing estimator can be interpreted as a variable bandwidth kernel estimator.
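A minimal sketch of a cubic smoothing spline with a data-determined smoothing parameter follows, assuming SciPy 1.10 or later (where `make_smoothing_spline` selects the parameter by GCV when `lam` is not supplied); the series is a placeholder, not the actual data.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline   # assumes SciPy >= 1.10

# Placeholder annual log GDP (not the actual Malaysian data).
t = np.arange(1970, 2003, dtype=float)
y = 10.0 + 0.07 * (t - 1970) + 0.02 * np.sin(0.8 * (t - 1970))

# lam=None lets SciPy pick the smoothing parameter by generalized
# cross-validation (GCV), mirroring the data-determined choice of lambda.
spline = make_smoothing_spline(t, y, lam=None)
potential = spline(t)
gap = y - potential           # log difference, roughly percent gap
print(np.round(100 * gap[-5:], 2))
```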
Wavelets Filters
The wavelets methodology provides a “de-noising” approach to extracting a series for potential output that does not rely on arbitrary assumptions regarding the regularity of fluctuations. Rather, the approach maps the observed data into more general functional spaces, the orthogonal bases of which are called “wavelets.” The appendix to Scacciavillani and Swagel (1999) provides an overview of wavelets theory and a technical discussion of how the filter works, while the main paper provides an application of the wavelets filter to deriving potential output, which is compared with potential output from a number of other approaches.
Unobservable Components Methods
The unobservable components methods provide a means of estimating unobservable variables, such as the NAIRU and potential output, from observed data on output, inflation, and unemployment. The explicit relationships between the observed and unobserved variables are specified in “state space” form in the measurement or observation equation, which is a general way of representing dynamic systems in which the observed variables are specified in terms of the unobservable (state) variables. A separate system of equations, the transition equation, specifies the autoregressive processes assumed to generate the state variables. The unobservable variables can then be estimated using the Kalman filter.16 The unobservable components model has been developed for a univariate case (decomposing output into a stochastic trend and a cyclical component) and a multivariate case (relating inflation, the output gap, and the unemployment gap to derive the NAIRU and a measure of potential output). For multivariate applications and further details of the methodology, see Apel and Jansson (1999) and Cerra and Saxena (2000). The latter study discusses extensions, such as the inclusion of common permanent components and the possibility of asymmetric growth rates via the use of a latent Markov-switching state variable.
This method has the very attractive feature of allowing the explicit specification of the relationships between output, inflation, and unemployment, thus providing theory-consistent estimates of potential output and the output gap.17 The requirement of an explicit specification of these relationships, however, means that the measures of potential output and output gap are contingent on this specification.
Also, the method requires explicit assumptions on the form of the data-generating process for the observable variables. For the univariate approach, Quah (1992) has shown that “without additional ad hoc restrictions, those characterizations are completely uninformative for the relative importance of the underlying permanent and transitory components.” The multivariate representations have followed, partly in response to this criticism, but they still maintain the assumption that the permanent component of output behaves like a random walk, while in reality the dynamics of output are likely to be much more complex (see Dupasquier, Guay, and St. Amant, 1999). Furthermore, the results of the unobservable components method are often sensitive to the initial “guesses” for the parameters.
Methods Employing Structural Relationships
The Production Function Approach
The production function methodology represents the middle ground between a full-scale structural model and the various univariate approaches, such as filters and unobservable components (De Masi, 1997). The methodology in its simplest form involves the estimation of a production function, most often using the Cobb-Douglas form with two inputs, capital (K) and labor (L), and constant returns to scale. Thus, the production function is specified as:
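The equation itself is not reproduced; the Cobb-Douglas form with constant returns to scale described here is:

$$
Y_t = A_t\, K_t^{\alpha}\, L_t^{1-\alpha} \tag{6}
$$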
where At is the level of total factor productivity (TFP), and α and (1 − α) are the output shares of capital and labor, respectively. The production function can either be estimated (e.g., as a cointegrating equation), or the value of α can be imposed (guided by historical data on capital’s or labor’s share in output).18 The level of TFP can then be derived residually (the Solow residual) from (6).
To derive potential output from the production function specification, the inputs and TFP have to be set at their trend levels, consistent with full employment and full capacity utilization. Labor input consistent with NAIRU (L*) could be derived by multiplying the labor force by (1-NAIRU). The trend component of TFP is also required and can be derived in a number of ways, including using smoothing techniques described above.
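A sketch of the resulting potential output calculation, under the assumptions in the text, is shown below; the data, α, the NAIRU, and the trend TFP series are placeholders, not estimates for Malaysia.

```python
import numpy as np

alpha = 0.35                      # assumed capital share (illustrative)
nairu = 0.04                      # assumed NAIRU (illustrative)

# Placeholder input series, not actual Malaysian data.
capital = np.array([100.0, 108.0, 117.0, 126.0])       # capital stock
labor_force = np.array([8.0, 8.2, 8.4, 8.6])           # labor force
tfp_trend = np.array([1.00, 1.02, 1.05, 1.07])         # smoothed TFP level

# Labor input consistent with the NAIRU, and potential output from (6)
# with inputs and TFP at their trend / full-employment levels.
labor_star = labor_force * (1.0 - nairu)
potential_output = tfp_trend * capital**alpha * labor_star**(1.0 - alpha)

actual_output = np.array([95.0, 101.0, 104.0, 103.0])  # placeholder actual GDP
output_gap = 100 * (np.log(actual_output) - np.log(potential_output))
print(np.round(output_gap, 1))
```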
Extensions to the basic methodology include the refinement of inputs through quality-adjusting capital and labor; the use of more flexible functional forms for specifying the production function; and the explicit modeling of productivity by including technology-determining variables such as research and development, education, and spillovers (see Adams and Coe, 1990).
The production function approach has the attractive feature of determining potential output in a framework that can explicitly account for the contributions of capital, labor, and TFP to output growth. The production function, however, runs into a number of difficult conceptual and data problems, well documented in the literature (see Griliches and Mairesse 1995, and references therein).
- The most basic criticism stems from the fact that the production function may not be identified because of simultaneity problems.19
- The production function specification typically relies on an overly simplistic and probably restrictive representation of the technology.
- The method requires the estimation of the NAIRU or some NAIRU-consistent level of labor (L*). If smoothing techniques are used to derive L* and the trend TFP growth, then the problems of trend estimation for GDP are simply shifted to trend estimation of the inputs (and TFP).
- Smoothing techniques for L* and TFP also have the problem of unreliable endpoints, which will thus affect the reliability of production function-based estimates of potential output at the endpoints and, importantly, for the most recent observation.
- The production function approach is considerably more demanding in terms of data than filtering approaches, and the nature of the inputs data, particularly the capital stock calculations, implies significant potential for measurement errors in inputs.
- The production function is also likely to suffer from omitted variable bias.
Demand-Side Models
Bayoumi and others (1999) suggest that there are two basic methods of estimating the output gap: demand-side methods, which relate the output gap directly to measures of spare capacity in the economy, and supply-side measures, such as the production function approach. Some researchers have combined these methods in simultaneous equation models, such as Adams and Coe (1990) or Blanchard and Quah (1993). Bayoumi and others (1999) estimate demand-side measures of the output gap by using a series of measures of slack in the economy, namely, the unemployment rate, the ratio of job seekers to job offers, capacity utilization rates, a combination of these measures, and an inverted Phillips curve. Measures of potential output and the output gap are derived from regressions of the log of real GDP on the slack variable (or variables) together with a polynomial in the time trend, and from a regression of a standard Phillips curve, again including a polynomial time trend, that is inverted so that output is on the right-hand side of the equation.
The proposed demand-side method is straightforward and intuitively appealing. However, it does not take into account the time series properties of the variables. In particular, as mentioned above, output series tend to be integrated, difference stationary, and not trend stationary. Thus the use of a time trend, whether linear or polynomial, is not an appropriate means to isolate the stochastic permanent component of output (i.e., potential output) or to detrend output to derive the output gap. Furthermore, while the output series is likely to be integrated, economic theory would tend to suggest that the series for unemployment, capacity utilization rates, and the vacancy ratio are unlikely to be integrated in the long run. Thus, a simple linear regression may be spurious, in the sense of Granger and Newbold (1974).
Structural Vector Autoregressions (VARs)
The structural VAR approaches that have been employed recently to estimate potential output (see Scacciavillani and Swagel, 1999; Dupasquier, Guay, and St. Amant, 1999; and Cerra and Saxena, 2000) follow from Blanchard and Quah (1993). The general method combines aspects of both the Keynesian and neoclassical traditions in that it associates potential output with aggregate supply and supply shocks, and cyclical/transitory fluctuations with changes in aggregate demand. The Blanchard and Quah method employs a structural VAR in the sense that identifying restrictions are imposed on the long-run effects of shocks on output and unemployment. These long-run restrictions are used to identify structural supply and demand shocks by allowing supply shocks to have a permanent effect on output, while demand shocks have only a temporary effect on output. The method has been extended, using the same idea of imposing long-run restrictions for the purpose of identification, to multivariate VARs and to the use of alternative variables (see King and others, 1991; Bayoumi and Eichengreen, 1992; Scacciavillani and Swagel, 1999; and Cerra and Saxena, 2000).
The approach is appealing, as it derives an estimate of potential output that employs a clear theoretical basis for the restrictions that identify permanent and transitory shocks. This method has several advantages.
- It is not unduly restrictive in the dynamics imposed on the permanent shocks that affect potential output. Dupasquier, Guay, and St. Amant (1999) compare the structural VAR approach with two alternative multivariate methodologies, namely the Cochrane and multivariate Beveridge-Nelson approaches to deriving potential output, and find that the dynamics of permanent shocks are more complex than the random walk assumed by the other approaches.
- The approach allows the dynamics of permanent shocks to be included in potential output. This is particularly appealing because one perverse implication of defining potential output as a random walk with drift is that, when the immediate effect of a permanent positive shock is smaller than the long-run effect, the output gap (observed output minus potential output) is negative until the full effect of the positive shock has fed through.
- The potential output and output gap measures derived are not subject to end-sample biases or increased end-sample uncertainty.
The structural VAR method also has certain disadvantages.
- The approach is limited in its ability to identify different types of shocks (at most, there can be the same number of types as variables used in the VAR).
- In most applications, the method assumes uncorrelated supply and demand shocks. Theory provides numerous instances where shocks have varying demand and supply characteristics (Cerra and Saxena, 2000).20
- Finally, although the approach is not demanding in terms of data, the method is less straightforward to apply than many of the other approaches, as it requires some nontrivial programming.
References
Adams, Charles, and David Coe, 1990, “A Systems Approach to Estimating the Natural Rate of Unemployment and Potential Output for the United States,” Staff Papers, International Monetary Fund, Vol. 37 (February), pp. 232–93.

Apel, Michael, and Per Jansson, 1999, “A Theory-Consistent System Approach for Estimating Potential Output and the NAIRU,” Economics Letters, Vol. 64 (September), pp. 271–75.

Bank Negara Malaysia, 1999, Annual Report (Kuala Lumpur).

Baxter, Marianne, and Robert G. King, 1995, “Measuring Business Cycles: Approximate Band-Pass Filters for Economic Time Series,” NBER Working Paper No. 5022 (Cambridge, Massachusetts: National Bureau of Economic Research).

Bayoumi, Tamim, and Barry Eichengreen, 1992, “Is There a Conflict Between EC Enlargement and European Monetary Unification?” NBER Working Paper No. 3950 (Cambridge, Massachusetts: National Bureau of Economic Research).

Bayoumi, Tamim, and others, 1999, Japan–Selected Issues (Washington: International Monetary Fund).

Blanchard, Olivier, and Danny Quah, 1993, “The Dynamic Effects of Aggregate Demand and Supply Disturbances,” American Economic Review, Vol. 83 (June), pp. 653–58.

Cerra, Valerie, and Sweta C. Saxena, 2000, “Alternative Methods of Estimating Potential Output and the Output Gap: An Application to Sweden,” IMF Working Paper 00/59 (Washington: International Monetary Fund).

Charemza, Wojciech W., and Derek F. Deadman, 1992, New Directions in Econometric Practice: General to Specific Modeling, Cointegration and Vector Autoregression (Brookfield, Vermont: Edward Elgar).

Chhibber, Ajay, and others, 1989, “Inflation, Price Controls, and Fiscal Adjustment in Zimbabwe,” World Bank Policy, Planning, and Research Working Paper No. 192 (Washington: World Bank).

Coe, David, and John McDermott, 1997, “Does the Gap Model Work in Asia?” Staff Papers, International Monetary Fund, Vol. 44 (March), pp. 59–80.

Cogley, Timothy, and James Nason, 1995, “Effects of the Hodrick-Prescott Filter on Trend and Difference Stationary Time Series: Implications for Business Cycle Research,” Journal of Economic Dynamics and Control, Vol. 19 (January–February), pp. 253–78.

de Brouwer, Gordon, and Neil Ericsson, 1998, “Modeling Inflation in Australia,” Journal of Business and Economic Statistics, Vol. 16 (October), pp. 433–49.

De Masi, Paula, 1997, “IMF Estimates of Potential Output: Theory and Practice,” IMF Working Paper 97/177 (Washington: International Monetary Fund).

Dupasquier, Chantal, Alain Guay, and Pierre St. Amant, 1999, “A Survey of Alternative Methodologies for Estimating Potential Output and the Output Gap,” Journal of Macroeconomics, Vol. 21, pp. 577–95.

Granger, Clive, and Paul Newbold, 1974, “Spurious Regressions in Econometrics,” Journal of Econometrics, Vol. 2 (July), pp. 111–20.

Griliches, Zvi, and Jacques Mairesse, 1995, “Production Functions: The Search for Identification,” NBER Working Paper No. 5067 (Cambridge, Massachusetts: National Bureau of Economic Research).

Guay, Alain, and Pierre St. Amant, 1996, “Do Mechanical Filters Provide a Good Approximation of Business Cycles?” Technical Report No. 78 (Ottawa: Bank of Canada).

Härdle, Wolfgang, 1990, Applied Nonparametric Regression (New York: Cambridge University Press).

Harvey, Andrew C., and Albert Jaeger, 1993, “Detrending, Stylized Facts and the Business Cycle,” Journal of Applied Econometrics, Vol. 8 (July–September), pp. 231–47.

Hodrick, Robert J., and Edward C. Prescott, 1997, “Postwar U.S. Business Cycles: An Empirical Investigation,” Journal of Money, Credit, and Banking, Vol. 29 (February), pp. 1–16.

Jonsson, Gunnar, 1999, “Inflation, Money Demand, and Purchasing Power Parity in South Africa,” IMF Working Paper 99/122 (Washington: International Monetary Fund).

Khatri, Yougesh, and Solomos Solomou, 1996, “Climate and Fluctuations in Agricultural Output, 1867–1913,” University of Cambridge, DAE Working Paper No. 9617 (Cambridge, England).

King, Robert, and others, 1991, “Stochastic Trends and Economic Fluctuations,” American Economic Review, Vol. 81 (September), pp. 819–40.

Lim, Cheng H., and Laura Papi, 1997, “An Econometric Analysis of the Determinants of Inflation in Turkey,” IMF Working Paper 97/170 (Washington: International Monetary Fund).

Nelson, Charles, and Charles Plosser, 1982, “Trends and Random Walks in Macroeconomic Time Series: Some Evidence and Implications,” Journal of Monetary Economics, Vol. 10 (September), pp. 139–62.

Quah, Danny, 1992, “The Relative Importance of Permanent and Transitory Components: Identification and Some Theoretical Bounds,” Econometrica, Vol. 60 (January), pp. 107–18.

Razzak, Weshah, 1995, “The Inflation-Output Trade-Off: Is the Phillips Curve Symmetric? Evidence from New Zealand,” Reserve Bank of New Zealand Discussion Paper G95/7 (January), pp. 1–17.

Scacciavillani, Fabio, and Phillip Swagel, 1999, “Measures of Potential Output: An Application to Israel,” IMF Working Paper 99/96 (Washington: International Monetary Fund).

Söderlind, Paul, 1994, “Cyclical Properties of a Real Business Cycle Model,” Journal of Applied Econometrics, Vol. 9 (December), pp. 113–22.

Stock, James, and Mark Watson, 1999, “Forecasting Inflation,” Journal of Monetary Economics, Vol. 44 (October), pp. 293–335.
A more technical exposition is contained in the Appendix.
Discussions below on the problems of the structural break in 1998 and the use of end-period data in 1999 (which is still a point in the recovery path) highlight the tentative nature of this estimation, carried out in April 2000. The results should thus be viewed as indicative. The latest data together with the expected slowdown in 2001 suggest that the gap may now persist into 2002.
In such a case, business cycles are not necessarily driven by shortfalls or excesses of aggregate demand, but rather by rational agents reacting to productivity shocks by writing off old investments and reinvesting in new opportunities.
At the IMF, the production function approach has been applied mainly to industrial countries (see De Masi, 1997). Relatively few empirical studies attempt to estimate potential output for developing countries, mainly owing to a lack of reliable data. Also, the concept of potential output may be less relevant when a large fraction of output relates to primary commodities, whose production is supply-determined, or where there are large migration-related flows of labor and ongoing structural change (associated with the “catch-up” phase of development).
In multivariate unobserved component models or structural VARs, the relationship between inflation and output implicit in the definition of potential output can be imposed during the estimation of potential output (Dupasquier, Guay, and St. Amant, 1999; Apel and Jansson, 1999; Scacciavillani and Swagel, 1999; and Cerra and Saxena, 2000).
The λ was determined according to the generalized cross-validation criteria.
The CSSM was first used to estimate the 1998 value, which was omitted from the series as missing and interpolated; the CSSM was then applied to the new series, including the interpolated 1998 value, to obtain potential output.
Bank Negara Malaysia’s estimate of potential output in early 2000, based on a production function, found that the output gap was narrowing and potential output was back on its growth path (Bank Negara Malaysia, 1999).
See de Brouwer and Ericsson (1998) for a similar approach.
Although not reported here, the F and SC statistics were used to test the null hypothesis during the sequence of eliminating variables. The variables to be eliminated were selected according to the lowest t-values at each stage of the OLS estimations.
The time trend was traditionally included in equations explaining output as a measure of exogenous technical change. With growing awareness of the importance of the time series properties of the variables being modeled (and particularly of the regression residuals in the case of cointegrating regressions), the use of the time trend has diminished. Some studies use direct measures of factors affecting technical change, such as research and development or human capital proxies, to account for technical change (see, for example, Adams and Coe, 1990).
The choice of λ depends on the frequency of the data. For quarterly data, Hodrick and Prescott set λ = 1600. Correspondingly, for annual and monthly data, λ might be set to 7 and …, respectively (see the Microfit 4 manual). For annual data, setting λ = 100 has the effect of removing output cycles from the data with frequencies of less than eight years (Scacciavillani and Swagel, 1999).
Guay and St. Amant (1996) find that the HP filter performs poorly in identifying the cyclical components of time series that have a spectrum or pseudo-spectrum with Granger’s typical shape, which is that of most macroeconomic time series.
In practice, forecasts for a number of periods past the end of the sample can be used, which should improve the reliability of the potential output measure toward the end of the sample period.
For a fuller discussion of the use of cubic splines, the data-determined choice of λ, and an empirical application, see Khatri and Solomou (1996).
For a given set of starting values and model parameters, the Kalman filter generates a sequence of optimal conditional predictions of the observable variables. The prediction errors are then used in a maximum likelihood routine to find the optimal set of parameters and corresponding estimates of the unobservable components.
Apel and Jansson (1999) argue that, although most economists would agree that there is a close relationship between the output gap and the unemployment gap and between these gaps and the development of inflation, most studies disregard at least one of these two conditions. They propose a multivariate unobservable component model that includes the first (Okun’s law) and the second (Phillips curve) relationship in the estimation of potential output and NAIRU.
If the production function is specified as a cointegrating regression, and output, labor, and capital have unit roots, then a nonstationary residual-based TFP implies that no cointegrating relationship will be found using the Engle-Granger test, and possibly not even using a system cointegration test (see Scacciavillani and Swagel, 1999). Estimates of α might instead be obtained by estimating the production function in first differences.
If producers choose inputs to maximize profits after observing output and input prices, then the production function disturbances will feed through into the choice of variable inputs; thus the exogeneity assumption required for estimating the production function will not hold, and OLS estimates will be biased.
For example, a technology shock may affect supply but may simultaneously increase demand through wealth effects.