Practical Model-Based Monetary Policy Analysis
A How-To Guide

Contributor Notes

Author(s) E-Mail Address: aberg@imf.org, dlaxton@imf.org, pkaram@imf.org

Abstract

This paper provides a how-to guide to model-based forecasting and monetary policy analysis. It describes a simple structural model, along the lines of those in use in a number of central banks. This workhorse model consists of an aggregate demand (or IS) curve, a price-setting (or Phillips) curve, a version of the uncovered interest parity condition, and a monetary policy reaction function. The paper discusses how to parameterize the model and use it for forecasting and policy analysis, illustrating with an application to Canada. It also introduces a set of useful software tools for conducting a model-consistent forecast.

“All models are wrong! Some are useful” –George Box

I. Introduction

The goal of this paper is to describe a macro-model-based approach to forecasting and monetary policy analysis and to introduce a set of tools that are designed to facilitate this approach. We hope that many economists engaged in applied policy analysis at the Fund will adopt and eventually develop and enrich these tools, moving on to richer models as necessary. We also expect that insights these economists gain will in turn be useful to the broader academic community. A companion paper (Berg, Karam, and Laxton, 2006) provides a broader overview of this approach and contrasts it both with the financial programming approach, which emphasizes the role of monetary aggregates in analyzing the monetary sector, and with more econometrically driven analyses.1

The model presented here ignores a number of critical issues. It abstracts from issues related to aggregate supply and fiscal solvency and does not explore the determinants of the current account, the equilibrium levels of real GDP, the real exchange rate, or the real interest rate. Its advantage is that it can be relatively transparent and simple and still allow consideration of the key features of the economy for monetary policy analysis. Broader considerations are of course essential to policy analysis and forecasting. The framework described below allows judgment about them to be incorporated into the model’s forecasts and the implications of different assumptions to be explored. But the model itself provides little direct help in articulating their determinants.

Because the model abstracts from so many issues, there may be a tendency to add various features to the model. One attractive feature of the general approach we suggest is that such models are, in somewhat more complicated form, at the heart of both central bank practice and current research efforts worldwide. Thus, there is enormous scope to develop the model further. It is important, though, that additions not detract from the clarity of the model. Where the skills and resources are available, the model may be extended to include full microeconomic foundations, stock-flow dynamics, nontraded sectors, and other features that allow explicit treatment of a number of additional issues.2 The model should not be allowed to become a “black box” however. One intermediate approach may be to build an auxiliary model for, say, fiscal policy, which considers the expenditure component of government spending, the implications for households’ intertemporal savings decisions, and so on. The overall implications of policy for aggregate demand could then be integrated into the core model.3

We present here a simple model that blends the New Keynesian emphasis on nominal and real rigidities and a role of aggregate demand in output determination, with the real business cycle tradition methods of dynamic stochastic general equilibrium (DSGE) modeling with rational expectations. The model is structural because each of its equations has an economic interpretation. Causality and identification are not in question. Policy interventions have counterparts in changes in parameters or shocks, and their influence can be analyzed by studying the resulting changes in the model’s outcomes. It is general equilibrium because the main variables of interest are endogenous and depend on each other. It is stochastic in that random shocks affect each endogenous variable and it is possible to use the model to derive measures of uncertainty in the underlying baseline forecasts. It incorporates rational expectations because expectations depend on the model’s own forecasts, so that there is no way to consistently fool economic agents.4

There is a significant literature emerging where models are derived explicitly from microeconomic foundations.5 The core elements include consumers who maximize expected utility and firms that are subject to monopolistic competition and adjust prices only periodically. Micro-foundations are ultimately critical and should play a central role in determining the model’s specification. More generally, the purpose of the model is to organize thoughts and data coherently, and so the model itself must be coherent and embody the economic ideas the modeler seeks to emphasize. However, we do not derive our baseline model below from microeconomic foundations. We allow for adaptive as well as rational expectations and substantial inertia in the equations. We could appeal to theory to justify these features, but we do not believe that, in practice, appealing to specific micro foundations serves to tie down magnitudes. We believe that this pragmatic approach to modeling represents the current state of the art in many policymaking institutions, where “modelers embrace theory but do not feel compelled to marry it.”6

We take an eclectic approach to parameterizing the model. In our view, and as reflected in the practice of most central banks, a purely econometric approach is not generally feasible. The economy is characterized by a high degree of simultaneity and forward-looking behavior, while data series are generally short and subject to structural change, for example in the monetary policy regime. In this context, it is not possible to reliably estimate parameters and infer cause and effect. More fundamentally, purely empirically based models do not permit the analysis of changes in policy rules or changes in the assumptions about how the economy works. For purposes of policy analysis, the model must first be identified, meaning that each of the equations must have a clear economic interpretation. Thus, rather than estimate a reduced-form, empirically based model whose structural interpretation is doubtful, we suggest using a broader collection of information to help parameterize a structural macroeconomic model.

This eclectic approach dominates the practice of most central banks engaged in inflation targeting. Perhaps the only novelty in what we suggest, from this point of view, is an emphasis on Bayesian estimation techniques, which help to formalize the estimation/calibration process that takes place in central banks. New tools are only just appearing that allow the analyst to ask easily to what extent the available data are consistent with her priors about the parameters of the model. For our purposes, this is the right question to ask, rather than allowing the data alone to identify or parameterize the model, or to test classical hypotheses such as whether interest rates affect the economy or not.7

The next section introduces and discusses a simple benchmark model. Section III discusses how to match the model with reality. Section IV describes how to conduct a forecasting and monetary policy analysis with the model. Section V puts the model through its paces, parameterizing it for Canada and carrying out a forecasting and risk assessment exercise. Section VI concludes. Appendix I describes tools available from the authors that implement the procedures described in the paper. Appendix II lists the main behavioral equations, identities, and statistical assumptions of the core model. Appendix III discusses the rudimentary supply-side features of the model, and Appendix IV describes the filtering methodologies in calculating the long-run or equilibrium notions of some main macroeconomic variables.

II. The Model

At its core, the model has four equations: (1) an aggregate demand or IS curve that relates the level of real activity to expected and past real activity, the real interest rate, and the real exchange rate; (2) a price-setting or Phillips curve that relates inflation to past and expected inflation, the output gap, and the exchange rate; (3) an uncovered interest parity condition for the exchange rate, with some allowance for backward-looking expectations; and (4) a rule for setting the policy interest rate as a function of the output gap and expected inflation.8 The model expresses each variable in terms of its deviation from equilibrium, in other words in “gap” terms. The model itself does not attempt to explain movements in equilibrium real output, the real exchange rate, the real interest rate, or the inflation target. Rather, these are taken as given. In all, it is an “aggregated” model that accentuates flow dynamics and leaves the full integration of stocks and flows for future research or a possible extension of this paper.

A. Output Gap Equation

Output depends on the real interest rate and the real exchange rate, as well as past and future output itself.

ygap_t = βld ygap_{t+1} + βlag ygap_{t-1} - βRRgap (RR_{t-1} - RR*_{t-1}) + βzgap (z_{t-1} - z*_{t-1}) + ε^y_t    (1)

where ygap is the output gap, RR is the real interest rate, z is the real exchange rate, and the ‘*’ denotes an equilibrium measure of a variable. The output gap is measured as ygap_t = 100 log(Y_t / Y*_t), where Y_t is the level of real GDP and Y*_t is an estimate of the trend level of GDP.

Significant lags in the transmission of monetary policy imply that, for most economies, we would expect the sum of βRRgap and βzgap to be small relative to the parameter on the lagged gap in the equation. In particular, experience suggests that for most economies the sum of βRRgap and βzgap would lie between 0.10 and 0.20, and the parameter on the lagged gap term, βlag, would lie between 0.50 and 0.90.9 We would expect the coefficient on the lead of the output gap, βld, to be small, perhaps in the range of 0.05 to 0.15. For industrial economies, we would expect that βzgap would typically be smaller than βRRgap and would depend on the degree of openness, with the ratio of βzgap to βRRgap converging toward zero for fairly closed economies.
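As a concrete illustration, equation (1) can be coded as a one-step update. The sketch below is not taken from any particular central bank model; the coefficient values are hypothetical, chosen from the middle of the ranges discussed above.

```python
def output_gap(ygap_lead, ygap_lag, rr_gap_lag, z_gap_lag, eps_y=0.0,
               b_ld=0.10, b_lag=0.70, b_rrgap=0.10, b_zgap=0.05):
    """One step of equation (1); coefficient values are hypothetical."""
    return (b_ld * ygap_lead + b_lag * ygap_lag
            - b_rrgap * rr_gap_lag + b_zgap * z_gap_lag + eps_y)

# A real interest rate 1 point above equilibrium last quarter lowers the
# current output gap by b_rrgap = 0.10, other things equal.
print(output_gap(0.0, 0.0, 1.0, 0.0))  # prints -0.1
```

Note the signs: a positive real-interest-rate gap depresses demand, while a real exchange rate above equilibrium (a depreciation) stimulates it.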

B. Phillips Curve

Inflation depends on expected and lagged inflation, the output gap, and the exchange rate gap.10

π_t = απld π4_{t+4} + (1 - απld) π4_{t-1} + αygap ygap_{t-1} + αz (z_t - z_{t-1}) + ε^π_t    (2)

This equation embodies the important idea that the fundamental role of monetary policy is to provide a nominal anchor for inflation and that placing weights on other objectives such as output must not be inconsistent with this fundamental role. The important restrictions are that the coefficients on expected and lagged inflation sum to one and that the coefficients on the level of the output gap and on the forward-looking inflation term be greater than zero. These restrictions ensure that monetary policy must be committed to adjusting the policy rate sufficiently aggressively in response to a nominal variable to provide an anchor to the system. There is an incipient unit root in the inflation equation, and stationarity of inflation emerges from elsewhere in the model, in particular from the monetary policy reaction function.11

A standard derivation starts with the assumption that firms adjust prices only at some given frequency. When firms do adjust prices, they optimally take into account expected inflation as well as the current markup of prices over marginal cost. With the output gap as a proxy for the markup (because high aggregate demand implies a large markup), this results in something like the above equation, but with απld = 1. A value less than 1 can be rationalized as resulting from the idea that there is a component of backward-looking expectations or some form of indexation.12 The parameter απld decreases with the importance of backward-looking expectations, which in some papers has been related to the (lack of) credibility of the central bank. αygap would also tend to be lower when adaptive expectations are more important, because the output gap works in part through its influence on expected future price changes.13

The behavior of the economy depends critically on the value of απld. If απld is equal to 1, inflation is equal to the sum of all future gaps. A small but persistent increase in interest rates will have a large and immediate effect on current inflation. In this “speedboat” economy, small recalibrations of the monetary-policy wheel, if perceived to be persistent, will cause large jumps in inflation through forward-looking inflation expectations effects. If απld is close to 0, on the other hand, current inflation is a function of all lagged values of the gaps, and only an accumulation of many periods of interest rate adjustments can move current inflation toward some desired path. In this “aircraft carrier” economy, the wheel must be turned well in advance of the date at which inflation will begin to change substantially.14 Where price-setting is flexible and the monetary authorities are fully credible, high values of απld might be reasonable, but for most countries, values of απld significantly below 0.50 seem to produce results that are usually considered to be more consistent with the data.
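The “speedboat” case can be made concrete with a short sketch. Setting απld = 1 and ignoring the four-quarter averaging and the one-period gap lag for simplicity, iterating equation (2) forward with a zero terminal condition makes current inflation the cumulated sum of current and expected future output-gap terms. The gap path and slope value below are hypothetical.

```python
def inflation_forward(gap_path, alpha_ygap=0.3):
    """alpha_pild = 1 ('speedboat') case: pi_t = pi_{t+1} + alpha_ygap * gap_t,
    solved backward from a zero terminal condition. The four-quarter
    averaging and the gap lag in equation (2) are ignored for simplicity."""
    pi = 0.0
    for gap in reversed(gap_path):
        pi += alpha_ygap * gap
    return pi  # inflation today

# Hypothetical path: a 1 percent output gap for four quarters, zero after.
print(f"{inflation_forward([1.0, 1.0, 1.0, 1.0]):.1f}")  # prints 1.2
```

A credible announcement that shifts the expected gap path thus moves inflation today, with no lag; in the “aircraft carrier” case (απld near 0) the same algebra runs off lagged gaps instead, and inflation responds only cumulatively.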

The value of αz determines the effects of exchange rate changes on inflation. It should be expected to be larger in economies that are very open and in cases when there is a high proportion of imported goods (either final or intermediate goods) that are eventually consumed after processing and then distributed as final consumption goods. Higher pass-through is generally observed in countries where monetary policy credibility is low and where the value-added of the distribution sector is low. There is also significant evidence of pricing-to-market behavior in many economies, suggesting that αz would be considerably smaller than the import weight in the CPI basket.15

C. Exchange Rate

We assume interest parity (IP) holds, so:

z_t = z^e_{t+1} - (RR_t - RR^US_t - ρ*_t)/4 + ε^z_t    (3)

where RR_t is the policy real interest rate, RR^US_t is the real U.S. interest rate, and ρ*_t is the equilibrium risk premium. The real exchange rate z_t is defined so that an increase is a depreciation, in percentage points, and so z_t = 100 log(S_t P^F_t / P_t), where P^F_t is the foreign price level, P_t is the domestic price level, and S_t is the nominal exchange rate expressed as domestic currency per unit of foreign currency. The interest rate term is divided by 4 because the interest rates and the risk premium are measured at annual rates.16

We assume a coefficient of one on the interest rate differential, as implied by IP. This result has been frequently challenged empirically. In defense of this assumption, the simultaneity involving interest rates and exchange rates makes any effort to estimate this coefficient particularly difficult. The estimated coefficient on the interest rate differential will be biased downward to the extent that the monetary authorities “lean against the wind” of exchange rate movements.17

We allow but do not impose rational expectations for the exchange rate:

z^e_{t+1} = δz z_{t+1} + (1 - δz) z_{t-1}    (4)

When δz = 1, we recover Dornbusch overshooting dynamics. In practice, overshooting often seems to take place in slower motion, and a value of δz somewhat less than 1 may provide more realistic dynamics. Unfortunately, there is little consensus across countries or observers on a reasonable value for δz.18
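Equations (3) and (4) combine into a single update for the real exchange rate. The sketch below is illustrative; δz = 0.5 is a hypothetical value, reflecting the lack of consensus just noted.

```python
def real_exchange_rate(z_lead, z_lag, rr, rr_us, rho_eq, delta_z=0.5, eps_z=0.0):
    """Equations (3)-(4): UIP with mixed expectations.
    delta_z = 0.5 is a hypothetical weight on the model-consistent lead."""
    z_exp = delta_z * z_lead + (1.0 - delta_z) * z_lag
    return z_exp - (rr - rr_us - rho_eq) / 4.0 + eps_z

# With expectations held at equilibrium, a 1-point rise in the domestic
# real rate appreciates the currency (z falls) by 0.25 on impact.
print(real_exchange_rate(0.0, 0.0, 1.0, 0.0, 0.0))  # prints -0.25
```

The division by 4 reflects the annualized interest rates; the full overshooting dynamics only emerge when the lead term z_{t+1} is solved jointly with the rest of the model.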

D. Monetary Policy Rule

We assume that the monetary policy instrument is based on some short-term nominal interest rate, and that the central bank sets this instrument in order to achieve a target level for inflation, π*. It may also react to deviations of output from equilibrium:

RS_t = γRSlag RS_{t-1} + (1 - γRSlag)(RR*_t + π4_t + γπ [π4_{t+4} - π*_{t+4}] + γygap ygap_t) + ε^RS_t    (5)

The structure and parameters of this equation have a variety of implications, which have been explored in a large number of papers.19 An important conclusion from assessments of monetary policy in the 1970s, and one embedded in the structure of this model, is that a stable inflation rate implies a positive γπ.20 Beyond this, our framework does not allow explicit discussion of optimality, because of the absence of microeconomic foundations.21 But it may be useful to note that how strongly the authorities should react depends on the other features of the economy. If the economy is very forward-looking, for example, as implied by the “speedboat” version of the Phillips curve above, then only mild but persistent reactions to expected inflation should be enough to keep inflation close to target. If, on the other hand, the Phillips curve is of the “aircraft carrier” type, then a more aggressive reaction in the short run may be appropriate.
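For concreteness, equation (5) can be written as a small function. The coefficients below (γRSlag = 0.7, γπ = 1.5, γygap = 0.5) are hypothetical values in the ranges typical of the literature, not estimates for any country.

```python
def policy_rate(rs_lag, rr_eq, pi4, pi4_gap_exp, ygap, eps_rs=0.0,
                g_rslag=0.7, g_pi=1.5, g_ygap=0.5):
    """Equation (5): interest rate rule with smoothing. pi4_gap_exp is the
    expected deviation of year-on-year inflation from target four quarters
    ahead; coefficient values are hypothetical."""
    target = rr_eq + pi4 + g_pi * pi4_gap_exp + g_ygap * ygap
    return g_rslag * rs_lag + (1.0 - g_rslag) * target + eps_rs

# Inflation expected 1 point above target: the desired rate is 5.5 percent,
# but smoothing moves the actual rate only part of the way from 4 percent.
print(f"{policy_rate(4.0, 2.0, 2.0, 1.0, 0.0):.2f}")  # prints 4.45
```

With γπ above zero, the real rate rises when expected inflation rises, which is what anchors the incipient unit root in the Phillips curve.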

In setting policy, it is important to consider the potential losses stemming from making the wrong set of assumptions about the nature of the economy. If the economy is a speedboat (high απld), and the authorities assume otherwise and react aggressively to inflation deviations, the costs may not be too high. If the economy is an aircraft carrier (low απld) and the policymaker assumes it is a speedboat, small interventions (low γπ) that are expected to be adequate turn out to be inadequate in keeping the inflation rate from moving well away from the target. The implication is that it may be better to assume that the economy behaves a bit more like an aircraft carrier, just to be safe.

We assume that the central bank smoothes interest rates, adjusting them fairly slowly to the desired value based on deviations from equilibrium of inflation and output. This feature is not easily rationalized, though it is widely observed (a typical smoothing value for γRSlag in empirically based reaction functions generally falls between 0.50 and 1.0). Woodford (2003b) emphasizes that high weights on the lagged interest rate in the reaction function may represent optimal inertia in the policy rate, as it allows smaller current changes in short-term rates to have larger effects on rates of returns in financial markets and the real economy, if they generate expectations that the changes will persist. It is unclear whether uncertainty also contributes to inertia in the policy rate, with some approaches suggesting it does and other approaches suggesting that concerns about uncertainty should make policy actions more aggressive in some circumstances and less constrained by past policy rates.22

Instruments other than the interest rate can be accommodated, and arguments other than inflation and output may belong in the reaction function. A variety of papers have explored in particular the question of whether the exchange rate belongs in the reaction function.23 The exchange rate does not belong directly in the reaction function as long as the exchange rate is a purely forward-looking variable, as it then contains no unique information about the state of the economy. In a broader formulation, though, this may not be the case.24

E. Solving the Model

Software tools now exist that largely automate solving the types of models that we are considering.25 It may nonetheless be useful to provide a brief overview of what is involved. The first step is to find the steady-state solution of the model. This can be found by replacing all leads and lags of each endogenous variable with the contemporaneous values, and then solving this transformed system of equations (in other words, replace ygap_t, ygap_{t+1}, and ygap_{t-1} with ygap, and so on). In our case, we find that ygap = 0, πgap = 0, RRgap = 0, and zgap = 0.
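To illustrate this first step, the sketch below writes the steady-state version of the four equations in gap form as a linear system and solves it; because the matrix is nonsingular, the only solution is the zero-gap steady state. All coefficient values are hypothetical.

```python
import numpy as np

# Steady-state system in gap form, M @ x = 0, with x = (ygap, pigap, rrgap, zgap):
# each equation has leads and lags replaced by contemporaneous values.
# Rows: IS curve, Phillips curve, UIP, policy rule; coefficients hypothetical.
b_ld, b_lag, b_rrgap, b_zgap = 0.10, 0.70, 0.10, 0.05
a_ygap, g_pi, g_ygap = 0.3, 1.5, 0.5
M = np.array([
    [1.0 - b_ld - b_lag, 0.0,   b_rrgap, -b_zgap],  # (1) with leads/lags = current
    [a_ygap,             0.0,   0.0,      0.0],     # (2): 0 = a_ygap * ygap
    [0.0,                0.0,   0.25,     0.0],     # (3): 0 = -rrgap / 4
    [-g_ygap,           -g_pi,  1.0,      0.0],     # (5): rrgap = g_pi*pigap + g_ygap*ygap
])
assert abs(np.linalg.det(M)) > 1e-9   # nonsingular, so the solution is unique
x = np.linalg.solve(M, np.zeros(4))
print(np.allclose(x, 0.0))  # True: all gaps are zero in steady state
```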

The second step is to unravel the dynamic properties of the model. Since the equations have leads and lags, it is not obvious where to start in tracing out the effects of a change in, say, ε^y. An approach that does not rely on linearity or other features of this particular model is to solve the model as a two-point boundary problem. The model is assumed to start in long-run equilibrium at some initial date and to end in equilibrium at some future date. A solution is a trajectory for the endogenous variables, given a set of values for the parameters and the exogenous variables (here, the shocks), that achieves these two end-point conditions.

A useful diagnostic device for understanding the properties of a model is to examine how the endogenous variables respond to a particular shock. For purposes of deriving such an impulse response, the shocks are set to zero except for one contemporaneous shock, and the model is solved as described above.
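To show what such a diagnostic looks like, the sketch below traces the response to a one-time unit demand shock in a purely backward-looking variant of the model (βld = απld = 0, backward-looking exchange rate expectations), which can be solved by simple recursion rather than as a two-point boundary problem. All coefficients are hypothetical, the four-quarter inflation averaging is ignored, and the real rate is computed ex post as the nominal rate less current inflation, so this is a caricature of the full model, not a substitute for it.

```python
# Impulse response to a one-time unit demand shock in a purely backward-looking
# variant of the model; everything is expressed in gap (deviation) form.
b_lag, b_rrgap, b_zgap, a_ygap = 0.7, 0.10, 0.05, 0.3
g_rslag, g_pi, g_ygap = 0.7, 1.5, 0.5

T = 20
ygap, pi, rs, rr, z = ([0.0] * T for _ in range(5))
for t in range(1, T):
    shock = 1.0 if t == 1 else 0.0
    ygap[t] = b_lag * ygap[t-1] - b_rrgap * rr[t-1] + b_zgap * z[t-1] + shock
    pi[t] = pi[t-1] + a_ygap * ygap[t-1]      # alpha_pild = 0, no 4-qtr averaging
    rs[t] = g_rslag * rs[t-1] + (1 - g_rslag) * (pi[t] + g_pi * pi[t] + g_ygap * ygap[t])
    rr[t] = rs[t] - pi[t]                     # ex post real-rate gap
    z[t] = z[t-1] - rr[t] / 4                 # backward-looking UIP

for t in range(1, 9):
    print(f"t={t}: ygap={ygap[t]:6.3f}  pi={pi[t]:6.3f}")
```

The printed path shows the familiar pattern: the gap decays as policy tightens, while inflation builds gradually and peaks with a lag.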

It may also be interesting to carry out a stochastic simulation, in which the shocks are drawn many times from assumed distributions and the model solved each time. The resulting variability in the endogenous variables may shed light on various properties of the model. For example, as described above, the implications of different monetary policy rules for output variability can be explored.
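A stochastic simulation can be sketched in the same spirit. The fragment below simulates a purely backward-looking, closed-economy caricature of the model under repeated normal shocks and reports the resulting standard deviations of the output gap and inflation; all coefficients and shock standard deviations are hypothetical.

```python
import random

def simulate(T=400, seed=1, sd_y=1.0, sd_pi=0.5):
    """Stochastic simulation of a purely backward-looking, closed-economy
    variant of the model; coefficients are hypothetical, not estimates."""
    random.seed(seed)
    b_lag, b_rrgap, a_ygap = 0.7, 0.10, 0.3
    g_rslag, g_pi, g_ygap = 0.7, 1.5, 0.5
    ygap = pi = rs = rr = 0.0
    ygaps, pis = [], []
    for _ in range(T):
        ygap_lag, pi_lag, rr_lag, rs_lag = ygap, pi, rr, rs
        ygap = b_lag * ygap_lag - b_rrgap * rr_lag + random.gauss(0.0, sd_y)
        pi = pi_lag + a_ygap * ygap_lag + random.gauss(0.0, sd_pi)
        rs = g_rslag * rs_lag + (1 - g_rslag) * (pi + g_pi * pi + g_ygap * ygap)
        rr = rs - pi
        ygaps.append(ygap)
        pis.append(pi)
    return ygaps, pis

def sd(x):
    m = sum(x) / len(x)
    return (sum((v - m) ** 2 for v in x) / len(x)) ** 0.5

ygaps, pis = simulate()
print(f"sd(ygap) = {sd(ygaps):.2f}, sd(pi) = {sd(pis):.2f}")
```

Rerunning such an exercise under alternative values of γπ or γygap is the simplest way to compare the output and inflation variability delivered by different policy rules.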

III. Building the Model

We have argued that a simple New Keynesian monetary model of the sort we outlined above can be useful for answering some important policy questions and can represent a useful initial building block for an institution striving to develop a better set of tools to support more rigorous, consistent, and logical policy analysis. But the answers it produces will depend critically on the parameter values as well as on the type of model being constructed.

The older conventional approach to econometric estimation focused extensively on time series data, with the objective of identifying parameters that were stable both across time and across the potential regime changes being considered by the analysts. This methodology, while useful for characterizing patterns in data, has failed systematically as a single unifying methodology for parameterizing macro models and for using such models for policy analysis. Consequently, some policymaking institutions have rejected estimation approaches based on fitting individual equations and placed more emphasis on the process of calibration in the design of macro models.26 Unfortunately, the word “calibration” was initially associated with the early real business cycle (RBC) literature, which largely ignored some basic regularities in the time series data. Inside policymaking institutions, however, the shift to calibration did not mean that conventional estimation exercises were abandoned entirely, but simply that at the end of the day parameters were chosen partly on the basis of econometric results and partly by examining the system properties of the models.

Calibration procedures continued to be developed in both academic and policymaking institutions. First, rather than focusing on a simple set of summary diagnostic statistics on macroeconomic variability, they were gradually extended to replicate impulse response functions from vector autoregressive (VAR) and other models. Second, where calibrated models succeeded in building on past empirical work, either by replicating macroeconomic dynamics consistent with previous models or by disagreeing in ways that exposed the logical failures of earlier models, this success represented an important argument for their use both inside and outside of policymaking institutions. The fact that several important opinion leaders threw their support behind the calibration approach also provided encouragement. However, the disadvantages of calibration methods remain well known. While the methodology supports logical and informal inference about macro dynamics, it does not lend itself easily to formal statistical inference, which has always been an important priority in both academic and policymaking circles.

In this vein, there have been some very useful developments in Bayesian econometrics. Bayesian estimation theory has always offered a number of advantages over conventional estimation approaches, but it was long considered impractical because of enormous computational costs and complexity. However, the recent development of more efficient and user-friendly estimation procedures has made it more and more the technique of choice for applied macro modelers. Still, Bayesian techniques have only been tested on relatively small macro models, and it may take some time to develop a critical mass of highly trained specialists who can use such methods in day-to-day work.

Because sensible priors and models must be developed before undertaking the substantial computational investment required to estimate even one model with Bayesian techniques, the strategy in the short run seems clear. Based on our own, albeit limited, experience with these techniques, we suggest the following six-step strategy for economists who are interested in developing these small models for monetary policy analysis.

  1. Look at existing models and how they have been used to address interesting and relevant policy questions. Does a small model like the one suggested here have anything to offer, or is it necessary to move to a different type of model? How many resources will it take? Would it perhaps be useful to start with a smaller model even though one would hope to extend it relatively quickly to address a larger range of questions? Many policymaking institutions have preferred to follow this approach out of a fear that large-scale model projects may be difficult to implement, use extensive resources, and end up becoming black boxes that only the modelers really understand.

  2. As a practical step, it may be best to look at the properties of existing models and to use this as a guide for calibration and model design. This may be particularly useful in cases where different institutions may have a comparative advantage in certain aspects of the model-building process. For example, central banks with years of experience working under flexible exchange rate regimes will usually have invested significant resources building models and subjecting them to tests. In fact, in some cases they may have highly specialized information to help out with the calibration when it comes to identifying particular shocks or episodes where monetary policy was important in initiating a disturbance to the economy, such as in a disinflation process.

  3. Develop an initial working version of the model. Choose coefficients that seem reasonable based on economic principles, available econometric evidence, and an understanding of the functioning of the economy, and then assess the system properties of the resulting model. An iterative process evolves in which reasonable coefficient values are chosen, the properties of the model are examined, and changes are made to the structure of the model until the model behaves appropriately.

  4. Consider a formal model comparison exercise in which the model’s properties are compared with other models. In some cases this will suggest weaknesses in the model’s structure, while in many other cases there will simply be legitimate differences in views that are embodied in the models. In the latter case, these differences in views can be used as a rough, but perhaps very useful, measure of model uncertainty, and in such cases the model can be generalized to study the policy implications of alternative views. Indeed, a major benefit of using models for policy analysis is that they can provide a more systematic and structured approach for studying the policy implications of uncertainty.

  5. When resources are available, the next stage should be to consider a formal Bayesian estimation strategy. The Bayesian approach essentially formalizes the approach that practitioners have been using, but in a manner that allows for statistical inference because probabilities can be assigned to alternative models. However, even if the ultimate goal is to develop a model that is estimated with formal Bayesian techniques, it may be prudent to follow steps 1-4 as a method for developing experience in designing sensible models and priors.

  6. It is important to understand that model-building projects can contribute to the development of institutional knowledge and that this process of building a model and using a model for forecasting and policy analysis usually evolves over a period of years. Without models that embody the basic economic principles, policy advice runs the risk of being inconsistent across time in ways that sometimes defy logic and reason. In the first instance, the model project should seek to establish the basic guiding principles of the model or family of models to be considered based on the particular issues to be addressed. The model does not have to be perfect to be useful. Again, a very useful strategy is to start simple and then to extend the model over time as suggested by experience.

Two sorts of practical questions immediately emerge. How do we choose reasonable coefficients, and how do we judge the resulting performance of the model? Turning to the last question first, the adequacy of a model for policy analysis will depend on how well it captures key aspects of the monetary policy transmission mechanism. For example, the model should provide reasonable estimates of: how long it takes a shock to the exchange rate to feed into the price level; the size of the “sacrifice ratio,” in other words the amount of output that must be foregone to achieve a given permanent reduction in the rate of inflation; and how the inflation rate responds to the output gap.

A developer or user of the model should have a “good feel” for the performance of the economy in these terms. Indeed, a major advantage of this approach is that it forces attention on a few basic questions and places less emphasis in the short run on complicated technical econometric issues, which in many cases in the past have been more distracting than useful. We should emphasize that we would encourage model builders to eventually move to formal estimation procedures as long as the approach does not become captured by some inappropriate econometric methodology that is incapable of answering the right questions.27

Some of this “feel” may come from an examination of “natural experiments,” in which the analyst effectively identifies a shock based on specialized knowledge of the policy process and can trace out its effects. For example, a look at past disinflation episodes may shed some light on measures of the historical sacrifice ratio. Another approach is to examine the properties of models that have been developed over time in central banks and other policy institutions. In cases where such models are used for day-to-day policy analysis, the results may correspond with the collective judgment of the policymakers and thus may represent a convenient insight into that judgment.28 A comparison with well-established models from similar countries may also be helpful, as can statistical approaches such as structural vector autoregressive (SVAR) models, which can shed light on some of these questions.

Having first focused on the overall performance of the resulting model, we next tackle the question of how to bring as much information as possible to bear on the structure of the model and the choice of coefficients. The accumulated experience with similar models, the theory discussed above, and the variety of more systematic tools available can provide some guidance.

In traditional calibration exercises of models with explicit micro-foundations, estimates of structural parameters such as the intertemporal elasticity of substitution and the degree of habit persistence can be drawn from other studies. It is critical that the model we use be reasonably well grounded in theory, given that we do not apply a data-driven specification process. Moreover, an understanding of the underlying structural determinants of the main parameters may help with parameterization. However, while theory can rationalize, for example, the hump-shaped dynamics that are observed in the economy, at this point it does not serve to tie down precise magnitudes.29

A variety of econometric techniques can be useful in parameterizing the model. Single equation estimates can sometimes shed substantial light. For example, Orphanides (2003) provides reasonably clean estimates of the monetary policy reaction function of the U.S. Federal Reserve, using survey measures and the Fed’s Greenbook forecasts of expected inflation and the output gap to avoid the typical endogeneity and measurement problems associated with estimating forward-looking monetary policy reaction functions.30 Moreover, the application of various system estimation techniques to parameterize DSGE models and assess their performance is an active area of research. Recent developments in the application of Bayesian estimation to DSGE models of the economy represent a particularly promising way to permit the data to speak in a way that is consistent with the practical approach we suggest.31

Bayesian methodology involves posterior estimation and an evaluation of data likelihood given the parameters, the model, and a prior density. The analyst first approaches the data with a set of prior views on the appropriate model and the values of the parameters of the model, then asks whether those views ought to be modified in light of the data at hand—i.e., in what way do the data suggest a modification of those views? The methodology we have outlined can be thought of as the procedure for developing this prior information.

At this point we do not see Bayesian estimation, or any other econometric technique, as an alternative to the parameterization approach described here. An overly econometric approach to model-building does not as yet lead to the simple, identified, useful sort of models we have in mind. Indeed, an emphasis on data fitting can lead, in practice, to models that lack critical features such as forward-looking inflation expectations, on the grounds that such features are much harder to implement empirically. The absence of such features leaves estimated models misspecified and, of course, more vulnerable to the Lucas critique, in that changes in the monetary policy reaction function, for example, lose a channel through which to influence other equations. While the goal is not merely to fit the data, one piece of the parameterization puzzle will be to ask what the data say about the parameters. The analyst incorporates this information into the model and moves on to the next step.

IV. Forecasting and Policy Analysis

A model of the sort we describe can be very helpful in the process of forecasting and making monetary policy, judging from the successful experience of a large number of central banks. The model itself does not make the central forecast. The forecast itself may come from some combination of several sources: forecasting models of various sorts; market expectations; judgments of senior policymakers; and, most importantly for the IMF, interactions with the country authorities themselves.32 The model can serve, however, to frame the discussion about the forecast, risks to the forecast, appropriate responses to a variety of shocks, and dependencies of the forecast and policy recommendations on various sorts of assumptions about the functioning of the economy.

In this section we outline a four-step procedure for creating and using model-based forecasts for monetary policy analysis, given that a model has already been developed:33

  1. The analyst starts with actual data on real GDP, the CPI (headline and perhaps core), the policy interest rate, the exchange rate, and the world interest rate and world CPI.34 This HNTF database (for historical and near-term forecast) may also contain purely judgmental forecasts out several quarters for these same variables. There are two distinct reasons for including pure judgment forecasts in this database.

    • It is usually appropriate to treat judgmental near-term forecasts as actual data, allowing the model forecasts to “kick in” only subsequently. For example, many central banks recognize that the model cannot do remotely as well as experts at forecasting the first one or two quarters. These judgmental forecasts are often based on preliminary related data (e.g., GDP may lag several months, but retail sales, industrial production, and the like come out much more quickly).

    • The database may contain a much longer judgmental forecast, for example several years out, which may be interesting to analyze in light of the model. This could be a forecast provided by the authorities to Fund staff, for example.

  2. The analyst creates estimates (over the period covered by the HNTF database) of the inflation target, potential output, and equilibrium real interest and exchange rates. As described in Appendix IV, these estimates should be based on a variety of sources. Of course, the monetary authorities often announce an inflation target. Estimates for the other series may come from smoothing the original series and/or from judgment-based assessments.

  3. To carry out a forecasting exercise with the model, the analyst must know where the economy is at the start of the forecast period, from the point of view of the model. This includes the values of endogenous variables, which are in the database, and the current values of the residuals. One simple approach to finding these residuals is to create a “residual model” in which the residuals are declared to be endogenous and the normally endogenous variables such as the output gap and inflation are defined as exogenous. The model can then be solved for the values of the residuals that rationalize the paths of the endogenous variables in terms of the model.35

    The analyst now has a set of values for the residuals, past, present, and future, that is consistent with the model and the values of the endogenous variables in the HNTF database. The resulting historical residuals can be examined to see if they make sense, in other words to see if there are systematic deviations from the model that cannot be satisfactorily explained by factors known to be outside the model. For example, a large movement in the exchange rate that results from a fiscal crisis should cause large residuals in the exchange rate equation, because fiscal policy is not explicitly modeled. On the other hand, a consistent under-prediction of inflation would typically suggest that the model still needed work.36

  4. We now turn to generating the forecasts. Three types of forecasts may be interesting:

    • A pure judgment forecast maintains the residuals as they were calculated in step 3. Because of the way these residuals were calculated, the forecasts for the endogenous variables are exactly the values entered into the HNTF database. Where the HNTF database contains a several-year forecast of interest, these residuals are a measure of how much “twisting” is required to rationalize the forecast in terms of the model. The model is a gross simplification of reality, and the existence of residuals should not be a surprise. But sizable, serially correlated errors might suggest that the forecast may be ignoring the tensions inherent in the normal dynamic processes of the economy. Various experiments may be interesting, such as modifying the model itself to see under what assumptions the forecasts make more sense.

    • A pure model-based forecast results from solving the model under the assumption that all future residuals are zero.

    • The forecast can be a hybrid of that implied by the judgmental forecast and the model. The analyst manipulates the future residuals (these judgmental residuals are often called add-factors, temps, or constant-term adjustment factors) or directly sets certain future values of endogenous variables (“tunes”) to create a forecast that combines judgment with the model. For example, the residuals from the end of the HNTF database are a measure of how far the current situation is from the predictions of the model. It may be prudent to allow these residuals to shrink over a few quarters to zero rather than jump to the model forecasts, on the grounds that the model is missing something about the current situation and whatever this is should not be expected to disappear overnight.37 More generally, the analyst may be interested in fixing a path for, say, the policy interest rate for two or three quarters and observing the outcome of the model. Or, the analyst may believe that the model is underestimating growth and adjust the forecasts accordingly.
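To make step 3 concrete, here is a minimal sketch of the residual-extraction idea for a single, closed-economy output-gap equation. The parameter values are hypothetical placeholders, not SMPMOD's calibration; the point is only that, over history, every right-hand-side term (including the one-quarter-ahead lead) is observed, so the residual is just the equation rearranged.

```python
import numpy as np

# Hypothetical parameter values for illustration only (not the paper's calibration).
B_LD, B_LAG, B_RRGAP, B_ZGAP = 0.10, 0.85, 0.10, 0.05

def output_gap_residuals(ygap, rrgap, zgap):
    """Back out the output-gap equation's residuals from observed history.

    Over the historical (HNTF) sample every right-hand-side term is known,
    including the one-quarter-ahead lead, so the residual is the equation
    rearranged: eps_t = ygap_t minus the model-implied part.
    """
    ygap, rrgap, zgap = map(np.asarray, (ygap, rrgap, zgap))
    eps = np.full(len(ygap), np.nan)
    for t in range(1, len(ygap) - 1):  # need one lag and one lead
        eps[t] = (ygap[t]
                  - B_LD * ygap[t + 1]
                  - B_LAG * ygap[t - 1]
                  + B_RRGAP * rrgap[t - 1]
                  - B_ZGAP * zgap[t - 1])
    return eps
```

By construction, feeding these residuals back into the equation reproduces the observed path exactly, which is why the “pure judgment” forecast in step 4 returns the HNTF values unchanged.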

The hybrid forecast is at the heart of the forecasting and policy analysis exercise. First, it is in this context that the central forecast emerges, given that this forecast will rarely be purely model based but will involve substantial judgment about the evolution of the economy. Second, alternative scenarios, policies, and shocks can be examined. These would typically include:

  • Sensitivities to alternative assumptions. For example, the analyst might consider alternative paths for the exchange rate and examine the effects of these alternative assumptions on the forecast, under the view that the link between interest rates and exchange rates is both difficult to predict and not well captured by the model. The analyst might also explore sensitivities to changes in the parameters or structure of the model or assumptions about equilibrium values such as potential output.

  • Implications of various shocks. Where the model explicitly incorporates the shock in question, such as with aggregate demand, prices, or the exchange rate, this is straightforward. Otherwise, substantial judgment is required to decide how, say, a supply shock might manifest itself in terms of the model.

  • Alternative policy responses, including “add-factors” or “tunes” to the interest rate or changes in the monetary policy reaction function.
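Returning to the hybrid forecast in step 4, the device of letting end-of-history residuals shrink gradually toward zero, rather than jump, can be sketched as a geometric decay (the decay rate here is an arbitrary illustrative choice, not anything prescribed by the paper):

```python
def decaying_add_factors(last_residual, horizon, rho=0.5):
    """Add-factor path that shrinks geometrically from the last observed
    residual toward zero over the forecast, rather than dropping to zero
    immediately at the start of the model-based forecast."""
    return [last_residual * rho ** (h + 1) for h in range(horizon)]

# e.g. a residual of 0.8 at the end of the HNTF database fades out as
# 0.4, 0.2, 0.1, ... over the forecast quarters
```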

V. An Example

A. Overview

We now demonstrate the entire process. First, we design, parameterize, and test a model of the Canadian and U.S. economies.38 Second, we carry out a forecasting exercise, including a variety of sensitivity analyses and risk assessments. We base our forecasting exercise on a set of judgmental forecasts for Canadian and U.S. variables that extends through 2009 and use the model to assess and carry out sensitivity analysis with respect to this purely judgmental forecast. We chose the Canadian and U.S. economies in part to emphasize that calibrating and using this sort of model is not a mechanical exercise, but one that requires significant familiarity and experience working on these economies, as is essential to develop and use any model wisely.39

Most of the key equations of the model are presented in Appendix II. The equations follow the same structure as the canonical model presented in Section II, except that they have been modified to reflect two key features of the Canadian economy: its dependence on the U.S. economy and the importance of oil prices. The simple canonical model presented earlier has been extended so that the Canadian output gap also depends on the U.S. output gap. In addition, to control for the effects of changes in the real price of oil on inflation and real GDP in both economies, the model has been augmented in the following way. First, we add the contemporaneous and lagged values of changes in the real price of oil to the inflation equations in both countries.40 Second, to control for effects on aggregate demand and supply, we add a four-quarter moving average of the change in the real price of oil to the equation that determines potential output.

The output gap for Canada becomes:

$$ygap_t = \beta_{ld}\,ygap_{t+1} + \beta_{lag}\,ygap_{t-1} - \beta_{RRgap}\,RRgap_{t-1} + \beta_{zgap}\,zgap_{t-1} + \beta_{USygap}\,ygap^{US}_t + \varepsilon^{ygap}_t \qquad (6)$$

where the parameter $\beta_{USygap}$ measures the importance of demand conditions in the United States for the Canadian economy. The equation for the U.S. output gap has the same structure, except that the real exchange rate gap and the external output gap have been removed.41

The extended inflation equations for headline CPI inflation for both the Canadian and U.S. economies become,

$$\pi_t = \alpha_{\pi ld}\,\pi4_{t+4} + (1-\alpha_{\pi ld})\,\pi4_{t-1} + \alpha_{ygap}\,ygap_{t-1} + \alpha_z\,[z_t - z_{t-1}] + \alpha_0\,\pi_{rpoil,t} + \alpha_1\,\pi_{rpoil,t-1} + \varepsilon^{\pi}_t \qquad (7)$$

where $\pi_{rpoil,t} = 400[\log(poil_t/cpi_t) - \log(poil_{t-1}/cpi_{t-1})]$ and the two additional parameters $(\alpha_0, \alpha_1)$ are based to a large extent on the importance of the value added of oil and gas in the respective CPI baskets of the two countries. Because policymakers in both countries also focus on a core measure of inflation that excludes volatile items such as energy prices, we also include another equation with a very similar structure, except that we eliminate the direct effects of oil prices.42 The equation for core inflation is:

$$\pi_{c,t} = \alpha_{c,\pi ld}\,\pi4_{c,t+4} + (1-\alpha_{c,\pi ld})\,\pi4_{c,t-1} + \alpha_{c,ygap}\,ygap_{t-1} + \alpha_{c,z}\,[z_t - z_{t-1}] + \alpha_{c,3}\,[\pi4_{t-1} - \pi4_{c,t-1}] + \varepsilon^{\pi}_{c,t} \qquad (8)$$

where an additional term $[\pi4_{t-1} - \pi4_{c,t-1}]$ has been added to the simple canonical inflation equation to allow for the possibility of relative price and real wage resistance; more precisely, workers and other price setters may try to keep their prices rising partially in pace with past movements in the headline CPI. Note that if the parameter $\alpha_{c,3}$ is zero, oil price shocks in the model will have no effect on core inflation and may not necessitate an increase in interest rates. However, to the extent that higher oil prices are an important input into the production costs of many consumer goods, or that workers resist the reduction in their real wages in response to an increase in oil prices, there should be a role for oil prices in the model.43

The equation that determines the evolution of potential output in both economies becomes:

$$400\,[y^*_t - y^*_{t-1}] = g^*_t - \upsilon\,\pi4_{rpoil,t} + \varepsilon^{y^*}_t, \qquad (9)$$

where $\pi4_{rpoil,t}$ is the four-quarter moving average of the change in the relative (real) price of oil and the parameter $\upsilon$ determines the effects of the relative price of oil on aggregate real GDP; $g^*_t$ represents the growth of potential output, as specified in Appendix II. The specifications of these equations were chosen to approximately replicate the effects of oil price shocks in deeper structural models of the economy where oil is modeled explicitly as an input into the production process of tradables and nontradables as well as an input into an energy bundle that is combined with distribution services to produce a final energy bundle that enters the CPI.44
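The oil-price transformation defined after equation (7) and the potential-output update implied by equation (9) can be sketched directly in code (the input series and values below are made-up illustrations, not data from the paper):

```python
import math

def pi_rpoil(poil, cpi):
    """Annualized quarterly change in the real price of oil:
    400 * [log(poil_t / cpi_t) - log(poil_{t-1} / cpi_{t-1})]."""
    rp = [math.log(p / c) for p, c in zip(poil, cpi)]
    return [float('nan')] + [400.0 * (rp[t] - rp[t - 1]) for t in range(1, len(rp))]

def potential_output_step(y_star_lag, g_star, pi4_rpoil, upsilon):
    """One step of equation (9), rearranged for the level of potential output:
    y*_t = y*_{t-1} + (g*_t - upsilon * pi4_rpoil_t) / 400."""
    return y_star_lag + (g_star - upsilon * pi4_rpoil) / 400.0
```

For instance, a doubling of the nominal oil price with an unchanged CPI gives a $\pi_{rpoil}$ of $400 \log 2 \approx 277$ (annualized percent), and a positive four-quarter average of such changes shaves $\upsilon \cdot \pi4_{rpoil}/400$ off quarterly potential growth.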

B. Building the Model

Some history and data

The top three panels of Figure 1 report year-on-year output growth, CPI inflation, and short-term interest rates for Canada and, for reference, the United States. The bottom panel of Figure 1 reports the bilateral exchange rate between Canada and the United States. The close connection with U.S. GDP, interest rates, and inflation is evident. The sample covers a period with a flexible exchange rate regime and transitions between high- and low-inflation regimes in both countries, as well as a period for Canada that includes the formal inflation-targeting regime that started in 1992.

Figure 1.

Output Growth, Inflation, Interest Rates, and Exchange Rates

(Canada and the United States)

Citation: IMF Working Papers 2006, 081; 10.5089/9781451863413.001.A001

Figure 2 reports CPI inflation for Canada, but in this case we have also included the numerical values for the inflation target as well as one-percentage-point bands so that outcomes can be compared with the target path. Inflation has been slightly below the inflation target path, but has remained inside the bands about two-thirds of the time.

Figure 2.

Inflation in Canada Compared to the Inflation Target Path Canada Year-on-Year CPI Inflation and Targets

(solid=actual, bold=target, dashed=bands)


Figure 3 plots the trend and detrended measures of output, real interest rates, and the real exchange rate that will be used in the model. The trend measures of output in both countries were constructed using the LRX filter, whose technical details are explained in Appendix III. The LRX filter is simply a more general version of the original Hodrick-Prescott (1980) filter that allows the analyst to impose priors for the trend values in situations where they believe they have more information than a naive univariate filter. In this case, for example, to be consistent with the staff’s view about the value of the output gap at the end of the sample, the trend value of output has been imposed to be 1.7 percent above actual output in 2004Q2 in the United States and 0.4 percent in the case of Canada.45 The historical values of the output gap are based on an HP smoothing parameter of 1600 (or higher). The trend real exchange rate is derived in a similar manner and is imposed to be equal to the actual real exchange rate in 2004Q1. The equilibrium short-term real interest rate is assumed to be constant at 2.5 percent in Canada and 2.25 percent in the United States, implying a steady-state, small-country risk premium of 25 basis points.
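The LRX filter itself is documented in Appendix III; as a rough sketch of the idea, a Hodrick-Prescott-type trend with analyst-imposed point priors can be computed by adding heavily weighted penalty terms at the pinned dates. The weighting scheme below is our own illustrative simplification, not the actual LRX specification:

```python
import numpy as np

def hp_trend_with_priors(y, lam=1600.0, priors=None, prior_weight=1e8):
    """HP-style trend that can be (nearly) pinned to prior values.

    Minimizes the weighted sum of squared deviations from the targets plus
    lam times the squared second differences of the trend; 'priors' is a
    dict {index: level} enforced via a large quadratic penalty.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    D = np.zeros((T - 2, T))                 # second-difference operator
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    W = np.eye(T)
    target = y.copy()
    if priors:
        for t, v in priors.items():
            W[t, t] = prior_weight          # effectively pins trend[t] to v
            target[t] = v
    # First-order conditions: (W + lam * D'D) * trend = W * target
    return np.linalg.solve(W + lam * (D.T @ D), W @ target)
```

For log output scaled in percent, something like `priors={T - 1: y[-1] + 1.7}` would, in the spirit of the text, impose an end-of-sample trend 1.7 percent above actual output.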

Figure 3.

Model Variables for Canada

(Actual, Trend, and Detrended Series)


* An increase denotes a real depreciation

Parameterization

All of the parameters of the models for Canada and the United States are reported in Tables 1 and 2, respectively. They have been calibrated on the basis of the model’s system properties and by comparing the dynamics of the model with other models of the U.S. and Canadian economies.46

Table 1.

Structural Parameters – Canada

Table 2.

Structural Parameters – United States


Output gap equations

The first set of parameters in the tables reports the values for the output gap equations in the two countries. For Canada in Table 1, we employ values of 0.10 and 0.05 for the parameters on the lagged real interest rate and real exchange rate terms (βRRgap and βzgap). This implies that the effect on the output gap of a 100-basis-point increase in the real interest rate is equivalent to that of a 2 percent appreciation in the real exchange rate. These two parameters, combined with a large value on the lagged gap term (βlag = 0.85) and a small value on the lead of the output gap (βld = 0.10), result in hump-shaped dynamics in response to monetary-induced interest rate shocks that do a satisfactory job of replicating what is found in the Bank of Canada’s QPM model of the Canadian economy. To explain the strong correlation between the U.S. and Canadian output gaps, we employ a fairly large parameter on the U.S. output gap in the Canadian output gap equation (βUSygap = 0.25). For the U.S. economy, we simply turn off the open-economy linkages in the model by eliminating the real exchange rate effect. Because we impose the same values on the lead and the lag of the output gap, the presence of the exchange rate channel results in a slightly faster and stronger monetary transmission mechanism in the model for Canada.

Headline CPI inflation equations

We set the weight on the lead terms in the inflation equation (απld) to 0.20 in both economies. This implies weights on the lagged inflation terms (1-απld) of 0.80, and hence significant intrinsic inertia in the inflation process. These parameters, combined with the weight on the output gap, will be the principal determinants of the output costs of disinflation.47 We set the weights on the output gaps in both countries at 0.30, yielding a sacrifice ratio of around 1.3 in both countries.48 This sacrifice ratio is significantly lower than reduced-form econometric estimates obtained over sample periods that included transitions from low to high inflation in the late 1960s and early 1970s, or from high to low inflation in the early 1980s. Using quarterly data from 1964 to 1988, Cozier and Wilkinson estimate the sacrifice ratio to be around 2. A very similar sacrifice ratio is embodied in the Department of Finance’s NAOMI model, and an even larger estimate of 3 is embodied in the Bank of Canada’s QPM model. Our lower value is by design, as these econometric estimates reflect the slow learning associated with moving between high- and low-inflation regimes in the 1980s.49 As will be shown below in a formal analysis of the monetary transmission mechanism of our small monetary policy model (dubbed SMPMOD), this implies that inflation will respond more strongly to monetary-induced interest rate hikes than in the Bank of Canada’s QPM or the Federal Reserve Board’s FRB-US model.

The next parameters in Tables 1 and 2 are the weights on the contemporaneous and lagged changes in the real price of oil. These parameters are determined mainly on the basis of the importance of energy in the headline CPI baskets as well as the importance of oil in the production of final energy components such as gasoline and fuel oil.

For Canada, the weight of the energy component in the CPI that is most directly affected by oil prices is 5.2 percent.50 The share of value added in the production of these final energy products in the CPI that is represented by crude oil costs is estimated to be 30 percent, producing an elasticity of about 1.6 percent.51 We assume that two-thirds of this effect is observed within the first two quarters and that this direct pass-through effect is split evenly across those two quarters. Thus, the two parameters on the oil price terms (α0, α1) in the Canadian headline inflation equation, which determine the contemporaneous and first-lag effects of oil prices, have both been set equal to 0.0053, or 2/3 × 1/2 × 0.016.

For the United States, the estimates of these two parameters are slightly higher (0.006 instead of 0.0053) and are based on analysis of the FRB-US model. There it is estimated that roughly 5.0 percent of the CPI is accounted for by energy prices that are highly sensitive to the price of crude oil. However, because the share of crude oil in value added is significantly higher (mainly because of lower taxes in the United States), the resulting elasticity is slightly higher for the United States (1.8 instead of 1.6). We employ the same assumption that two-thirds of this effect is observed in the first two quarters. On this basis, our oil price pass-through is slightly higher and faster in the short run than in the FRB-US model; it has been adjusted upwards to reflect the views of analysts who follow high-frequency movements in the data.
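The pass-through arithmetic in the last two paragraphs is simple enough to reproduce directly; the function below just restates it. The U.S. crude-oil value-added share is not given in the text, so the implied figure in the comment is our own back-of-the-envelope inference:

```python
def oil_passthrough_coeff(elasticity, front_share=2.0 / 3.0, n_front_quarters=2):
    """Per-quarter oil pass-through coefficient: a fraction 'front_share' of
    the total CPI elasticity arrives in the first 'n_front_quarters',
    split evenly across them."""
    return front_share * elasticity / n_front_quarters

# Canada: energy weight 5.2% x crude value-added share 30% ~= 0.016 elasticity
alpha_canada = oil_passthrough_coeff(0.016)  # = 2/3 * 1/2 * 0.016 ~= 0.0053
# United States: elasticity 0.018 (energy weight ~5.0%, implying a crude share
# of roughly 36% -- our inference, not a figure stated in the text)
alpha_us = oil_passthrough_coeff(0.018)      # = 0.006
```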

The parameter on the change in the real exchange rate in the headline CPI inflation equation for Canada has been imposed to be 0.10. This is a parameter about which there is considerable uncertainty in our base-case calibration. The value implies very weak short-run exchange rate pass-through, given that imported goods represent around 30 percent of the CPI basket. However, it is consistent with empirical evidence for countries such as Canada, where large distribution services are involved in taking these goods to market and where there is significant local-currency pricing, with final goods prices sticky in the retail markets in which they are sold. Despite this small parameter, the aggressiveness of monetary policy in the model can still generate either very large or very small reduced-form exchange rate multipliers, depending on how much monetary policy is assumed to accommodate the initial shock to the exchange rate.

Core CPI inflation equations

The next set of parameters in Tables 1 and 2 reports the parameter values used in the core inflation equations.52 To maintain balanced dynamics with the headline inflation equation, we impose the same parameters on the output gap as well as the leads and lags of inflation. However, we reduce the parameter on the real exchange rate by a factor of 3, in proportion to the importance of imported goods in this bundle of the CPI, which is approximately 10 percent, relative to almost 30 percent for the headline CPI. The last parameter, αc,3, measures the extent of catch-up with past movements in headline CPI inflation. It has been set at 0.25 and provides a small mechanism through which supply shocks such as oil can feed into core inflation and necessitate an increase in interest rates.

Potential output equation

With the equations discussed so far, the only mechanism for higher oil prices to reduce output is through their effects on inflation, which necessitates adjustments in real monetary conditions and a temporary contraction in output to contain inflationary pressures. One way to build in additional effects is to allow potential output to decline in response to oil price hikes. To calibrate these additional effects, we have looked at some simulation results generated by a version of GEM where oil was modeled explicitly as a factor of production. These results suggested a very simple specification in which the growth rate of potential output is a function of a four-quarter moving average of past changes in the real price of oil. The parameter υ governs the magnitude of the effects and was set at 0.016 for the United States and 0.008 for Canada. The smaller effect on Canadian GDP reflects the fact that Canada is a net exporter of oil, which means there are significant offsetting effects arising from an improvement in its terms of trade.

The reaction functions

Because a measure of core inflation was added to the model, we modified the simple reaction function so that interest rates in both countries depend on a weighted sum of headline inflation and core inflation. We assumed the same parameters in both countries. The weight on the lagged interest rate term γRSlag has been set to 0.50. This is significantly below the estimates provided by Orphanides (2003) and reflects an argument that this parameter is probably biased upwards because of misspecification errors.53 The weights on inflation and the output gap (γπ and γygap) have been set at 2.0 and 0.5, respectively. This produces plausible dynamics, but users of SMPMOD should feel free to experiment with these values to study the implications of alternative monetary policy responses on the dynamics of the model.
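The modified rule can be written as a one-line function. The text does not report the relative weights on headline versus core inflation, so the 50/50 blend below is purely a placeholder assumption:

```python
def policy_rate(rs_lag, rr_eq, pi4_headline, pi4_core, pi_target, ygap,
                g_rs_lag=0.50, g_pi=2.0, g_ygap=0.5, core_weight=0.5):
    """Interest rate reaction function with smoothing: a weighted average of
    the lagged policy rate and a Taylor-type prescription that responds to a
    blend of headline and core inflation and to the output gap."""
    pi_blend = core_weight * pi4_core + (1.0 - core_weight) * pi4_headline
    prescription = (rr_eq + pi_blend                   # neutral nominal rate
                    + g_pi * (pi_blend - pi_target)    # inflation gap response
                    + g_ygap * ygap)                   # output gap response
    return g_rs_lag * rs_lag + (1.0 - g_rs_lag) * prescription
```

At steady state (inflation at target, a zero gap, and the lagged rate at neutral), the rule returns the neutral nominal rate rr_eq + pi_target; with rr_eq = 2.5 and a 2 percent target, `policy_rate(4.5, 2.5, 2.0, 2.0, 2.0, 0.0)` returns 4.5.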

Responses of the model to interest rate shocks

To assess the speed and strength of the monetary transmission mechanism in SMPMOD, we compare the responses of output and inflation to monetary-induced interest rate shocks with the QPM model and the FRB-US model.

Figure 5 reports the results for a temporary 100-basis-point hike in the short-term interest rate in Canada. After the 100-basis-point hike in the first quarter, interest rates are governed by the interest rate reaction function.54 The figure reports the shock-minus-baseline results for year-on-year core CPI inflation because this is the measure that is available from the Bank of Canada’s model. As indicated earlier, SMPMOD has a slightly stronger and faster monetary transmission mechanism than models that were calibrated (or estimated) on sample periods that include transitions between high- and low-inflation regimes. As can be seen in Figure 5, the output gap responds somewhat more in SMPMOD, but there is a significant difference in the path for core inflation in the two models. The larger response of inflation in SMPMOD simply reflects a smaller sacrifice ratio, with output gaps having more pronounced effects on the inflation process; as emphasized, this is by design, because we believe the effects in existing models may be too small and too slow.55
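SMPMOD itself is solved under model-consistent expectations, but the qualitative shape of these impulse responses can be reproduced with a heavily simplified, purely backward-looking, closed-economy sketch. All parameters below are illustrative, and the inflation persistence below unity crudely stands in for the forward-looking term; this is not SMPMOD itself:

```python
def interest_rate_shock_irf(T=40, shock=1.0):
    """Responses (deviations from baseline) to a one-quarter, 100-basis-point
    interest rate shock in a toy three-equation model: a persistent output
    gap, a near-accelerationist Phillips curve, and a Taylor-type rule with
    interest rate smoothing."""
    ygap, pi, rs = [0.0] * T, [0.0] * T, [0.0] * T
    for t in range(T):
        y_lag = ygap[t - 1] if t > 0 else 0.0
        pi_lag = pi[t - 1] if t > 0 else 0.0
        rs_lag = rs[t - 1] if t > 0 else 0.0
        rr_lag = rs_lag - pi_lag                    # crude real-rate proxy
        ygap[t] = 0.95 * y_lag - 0.10 * rr_lag      # demand responds to lagged real rate
        pi[t] = 0.80 * pi_lag + 0.30 * y_lag        # inflation responds to lagged gap
        rs[t] = (0.50 * rs_lag                      # interest rate smoothing
                 + 0.50 * (2.0 * pi[t] + 0.5 * ygap[t])
                 + (shock if t == 0 else 0.0))      # 100-bp shock in quarter 0
    return ygap, pi, rs
```

In this sketch the output gap bottoms out only after a few quarters (the hump shape) and inflation follows with a further lag, qualitatively mirroring the shock-minus-control responses in Figure 5.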

Figure 4.

Model Variables for the United States

(Actual, Trend, and Detrended Series)


Figure 5.

Dynamic Responses of Output, Inflation, and Short-Term Interest Rates in Canada 100-basis-point rise in the policy rate for one quarter (shock-minus-control responses for Canada) Comparing SMPMOD and the Bank of Canada Model (QPM)


Figure 6 reports the results for a temporary 100-basis-point hike in the short-term interest rate in the United States. In this case to make the results comparable to some results made available from the FRB-US model, we have assumed that interest rates are higher by 100 basis points for four quarters before returning to the values determined by the interest rate reaction function.56 A very similar picture emerges, with SMPMOD having a slightly stronger and faster monetary transmission mechanism than FRB-US.

Figure 6.

Dynamic Responses of Output, Inflation, and Short-Term Interest Rates in the United States 100-basis-point rise in the policy rate for four quarters

(shock-minus-control responses for the United States)


Of course, this represents a fairly cursory effort to explore the properties of the model. We are in the process of developing estimates of the model based on Bayesian methods. While such methods are becoming easy to apply to countries like the United States and Canada, it remains to be seen how useful they will be for other countries where data samples are short and there is a host of econometric pitfalls related to regime shifts and structural change.

C. Using the Model

In this section we show how a model like SMPMOD can be used for forecasting and policy analysis at the Fund. The first part discusses how baseline forecast scenarios can be created with the model, while the second discusses some experiments that can be conducted once a baseline scenario has been constructed. We relate the process to the existing procedures whereby country teams provide annual and quarterly forecasts for the World Economic Outlook (WEO), producing the so-called WEO baseline forecast.

Replicating the WEO baseline

The current WEO baseline is constructed mainly on the basis of judgment by the desks. Table 3 provides an example of a SMPMOD report that is generated from a solution to the model where the results for the main variables (GDP, inflation, interest rates, the price of oil, and the Canadian dollar) have been tuned so that the model exactly replicates the WEO solution. This is done by computing the residuals of all the behavioral equations so that the equations are consistent with the WEO baseline. The bottom panel of Table 3 reports the values of the historical residuals as well as the implicit judgment that has been added to the model over the forecast horizon to make it consistent with the WEO baseline. This is an example of how the desks could use a model as a consistency check on their own judgment, by examining the future values of the residuals that are consistent with their judgment.

Table 3.

Baseline Forecast with Desk Judgment – WEO Scenario


Interestingly, all of the residuals tend toward zero, suggesting that the desk’s judgment is not inconsistent with the structure of the model, at least in the long run. However, equally interesting, the model is calling for larger hikes in interest rates (negative implicit judgment is being added to the interest rate reaction function). This should not be surprising, as the timing of hikes toward the neutral rate in both Canada and the United States has reflected concerns about the real economy (initially, concerns over deflation, and more recently, high oil prices and concerns over consumer and business sentiment).

One possibility, in the spirit of how models are used in inflation-targeting central banks, is to treat the judgmental scenario as an initial starting point from which further judgment can be added to the residuals. The main purpose of this type of exercise is usually to use the structure of the model to make the baseline more consistent with the model’s underlying assumptions, as well as to make the process of developing the forecast more systematic and transparent. For example, if the leading indicators were coming in weaker than assumed in the existing scenario, the desk might consider an alternative scenario in which negative judgment is added to the output gap, to see the implications for inflation and interest rates.
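Such an alternative scenario can be sketched with a toy three-equation model. All coefficients, the steady-state values, and the shock size below are illustrative assumptions, not SMPMOD estimates; the point is only how negative output-gap judgment propagates through a Phillips curve and a policy reaction function.

```python
# Sketch: negative judgment on the output gap traced through a stylized
# IS curve, Phillips curve, and policy reaction function. All parameter
# values are hypothetical placeholders.

def alternative_scenario(eps_y, b_lag=0.7, b_r=0.2, a_pi=0.6, a_y=0.3,
                         g_pi=0.5, g_y=0.5, pi_star=2.0, rr_star=2.0):
    """Simulate a path for the output gap (ygap), inflation (pi), and the
    policy rate (i), starting from steady state and feeding in a path of
    output-gap judgment eps_y."""
    ygap, pi, i = [0.0], [pi_star], [rr_star + pi_star]
    for e in eps_y:
        y = b_lag * ygap[-1] - b_r * (i[-1] - pi[-1] - rr_star) + e
        p = a_pi * pi[-1] + (1 - a_pi) * pi_star + a_y * y
        r = rr_star + p + g_pi * (p - pi_star) + g_y * y
        ygap.append(y)
        pi.append(p)
        i.append(r)
    return ygap, pi, i

# Negative judgment of -0.5 on the output gap for two quarters, then none:
ygap, pi, i = alternative_scenario([-0.5, -0.5, 0.0, 0.0, 0.0, 0.0])
```

As expected, the weaker output gap pulls inflation below target and the reaction function calls for a lower policy rate relative to the baseline before the economy returns to steady state.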

In central banks that use similar models for building forecast scenarios, the final baseline is usually a combination of model solutions and judgmental input, with the judgment entered by setting values of the residuals that eventually converge to zero over time, so that the weight on the judgmental input is much larger at the beginning of the forecast than over the medium term. This reflects a view that near-term forecasting accuracy can be improved substantially over pure model-based forecasts by looking at a broad array of information that is not included in the model.57 However, there is usually no fixed point where judgmental input stops and the model takes over, as this depends on the types of factors that are affecting either the state of the economy or irregular movements in the data.

Moreover, in organizing the forecast, data analysis plays a much bigger role than the model in backcasting the past quarter and forecasting the next quarter, though even there the model can provide some useful input as a consistency check. The same interplay applies over the first two years of the forecast: while the model usually plays a much bigger role over this period than the analysis of the current data and the initial state of the economy, many of the stories that help explain the current state of the economy (say, those related to weak consumer and business sentiment) have to make a logical transition into the medium term. After the first quarter, this judgment is usually entered through the residuals of the model rather than by tuning the model’s endogenous variables to an exact path based on a judgmental forecast. It is unclear whether this approach to constructing the baseline would work best for the Fund, given significant resource constraints and the other demands on the desks’ time.58
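The idea that judgment carries a large weight early in the forecast and fades over the medium term can be captured by a residual path that decays to zero. The geometric decay rate and the initial add-factor below are illustrative choices, not a fixed rule used by any particular central bank:

```python
# Sketch: entering near-term judgment through a residual (add-factor)
# path that converges to zero, so that judgment dominates at the start
# of the forecast and the model takes over in the medium term. The
# decay rate and initial value are hypothetical.

def judgment_residual_path(initial_addfactor, decay=0.5, horizon=8):
    """Return an add-factor path that starts at the judgmental value and
    decays geometrically toward zero over the forecast horizon."""
    return [initial_addfactor * decay**t for t in range(horizon)]

# Desk judgment of +0.4 on the output-gap equation in the first quarter,
# fading to essentially nothing after two years:
path = judgment_residual_path(0.4)
```

In practice the decay rate itself is a judgment call, and residuals on different equations can be given different persistence depending on the story behind them.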

A simple step forward for achieving more consistency in the baseline scenarios

A simple step forward would be to continue to determine the baseline on the basis of judgmental input and to use models more intensively on the desk for risk assessments to quantify the importance of shocks as deviations from the last WEO baseline.59 As institutional views develop on the effects of shocks, and as the models start to be used more on the desks, following such an approach may result in an improvement in consistency over time even if a model is not used formally to construct the exact numbers in the WEO baseline.

Using the model for risk assessments

Putting the construction of the baseline aside, a major strength of a model is that it can generate risk assessments around the WEO baseline more efficiently and consistently. We illustrate the use of the model for this purpose by incorporating a few shocks to highlight how it might be used in practice.

Implications of a 50 percent increase in oil prices

We consider the implications of a temporary 50 percent increase in oil prices starting in 2005Q1. Rather than discussing the detailed report (Table 4 is attached for interested readers), which is designed for model operators and analysts, we turn to a brief set of summary tables and figures that are automatically updated after the experiments are run. Table 5 reports results for the United States and Table 6 reports results for Canada. The panels on the left side of the tables report results on a quarterly basis, while the right side reports results on an annual basis.
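The shape of such an experiment, and the quarterly-to-annual aggregation used in the summary tables, can be sketched as follows. The persistence parameter is an illustrative assumption (a "temporary" shock here simply means one that decays geometrically), and the numbers are not the model's actual results:

```python
# Sketch: a temporary oil-price shock expressed as percent deviations
# from the WEO baseline, with quarterly results collapsed into the
# calendar-year averages reported in the annual panels. The persistence
# parameter (0.8 per quarter) is a hypothetical choice.

def oil_shock_path(size=50.0, persistence=0.8, quarters=12):
    """Oil price, percent deviation from baseline: a jump of `size`
    percent in 2005Q1 that decays geometrically thereafter."""
    return [size * persistence**t for t in range(quarters)]

def annual_average(quarterly):
    """Collapse a quarterly deviation path into calendar-year averages."""
    return [sum(quarterly[i:i + 4]) / 4 for i in range(0, len(quarterly), 4)]

oil = oil_shock_path()
print([round(x, 1) for x in annual_average(oil)])
# -> [36.9, 15.1, 6.2]   (average deviations for 2005, 2006, 2007)
```

The deviation paths for output, inflation, and interest rates in Tables 5 and 6 are produced the same way: the model is solved with and without the shock, and the tables report the difference between the two solutions.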

Table 4. Temporary 50 Percent Increase in Oil Price, 2005–07

[Table image not reproduced.]
Table 5. United States: Temporary 50 Percent Increase in Oil Price, 2005–07
Summary Table: Deviations from a WEO Baseline
(percent deviation) or [percentage point deviation]

[Table image not reproduced.]