Getting to Know the Global Economy Model and Its Philosophy
Author: Douglas Laxton


Abstract

This paper provides a nontechnical introduction to the IMF's Global Economy Model (GEM). GEM is a modern dynamic stochastic general equilibrium (DSGE) model designed to study a range of issues that cannot be adequately addressed with reduced-form econometric models or with an earlier generation of macromodels whose dynamic equations were not based on strong choice-theoretic foundations. Unlike earlier models, which were viewed as black boxes by many outsiders, GEM's theoretical structure is much better connected with work in the academic community, making it considerably easier for outside researchers to apply and extend it for their own work. To convey the basic philosophy behind GEM, we start with the issue of exchange rate pass-through, showing how adding features to the model improves our understanding of the magnitude of pass-through. We then provide a nontechnical introduction to what needs to be known to develop a steady-state calibration of the model. Finally, we summarize other work on DSGE modeling at the IMF and lay out a few major priorities for the future.

IMF Staff Papers (2008) 55, 213–242; doi:10.1057/imfsp.2008.11

The Global Economy Model (GEM) is a modern dynamic stochastic general equilibrium (DSGE) model that is based on the new open-economy literature pioneered by Obstfeld and Rogoff (1995 and 1996).1 GEM's modular structure has been designed to make it relatively easy for researchers to simplify the model's structure by turning off certain features. Although different versions of GEM have been published in several papers, no single paper has documented the general structure of GEM or provided a detailed motivation for many of its features and options. The main purpose of this volume is to document GEM and to provide a flavor of some recent applications and extensions of the model.

The first two-country version of GEM was developed in 2001 and was later published in the Journal of Monetary Economics. Within a few months there was a small team of economists working on the model, which grew in size over time and later became a network of researchers from both inside and outside the IMF. Today, variants of GEM are used extensively at the central banks of Canada, Italy, and Japan, as well as at the Norges Bank. Building on the success of the GEM project, the IMF Research Department's Modeling Unit has developed other DSGE models to address issues that require more elaborate theoretical structures. This includes the Global Fiscal Model (GFM), which focuses on medium- and long-term fiscal issues, and the Global Integrated Monetary Fiscal Model (GIMF), which has been designed for issues that involve both monetary and fiscal policy.2 These two models as well as GEM are now used extensively within the Fund for supporting our surveillance activities.3 Figure 1 provides a graphical representation of the number of papers that have been generated with GEM, GFM, and GIMF as well as some smaller DSGE models that were developed to look at specific issues that required a more specialized model structure.

Figure 1. Papers Using or Extending the Modeling Unit's DSGE Models

Much of the success of GEM and the other DSGE models has been a result of their strong links to the academic literature. Indeed, before models like GEM, macromodeling at the IMF and in other policymaking institutions was to a large extent disconnected from modeling in the academic community. However, with the emergence of the new open-economy literature and the general interest in developing models with better choice-theoretic foundations, there has been much more effective collaboration between researchers in academia and modelers in policymaking institutions. Indeed, a few other policymaking institutions have already replaced their earlier generation of macroeconometric models with these types of models for production work, and several institutions are currently in the process of doing so.4

Arguably the benefits of this cooperation are just beginning. In fact, as discussed at the end of the paper, recent work in both academia and policymaking institutions on developing a better empirical methodology based on Bayesian theory has already made significant progress in eliminating the enormous gap between econometric theory and applied macromodeling. By supporting the development of tools like the DYNARE project, the IMF and a few other policymaking institutions have made a very useful investment that may make it possible in a matter of years to gradually retire an older generation of models that have been either calibrated or estimated with very unreliable estimation procedures.5

The main purpose of this volume is to document GEM and to provide a few applications and extensions of the model. This first paper in the volume has a few objectives. First, it is meant to provide a summary of GEM to nontechnical readers and to provide an example that shows how DSGE models can provide useful insights that go well beyond reduced-form macroeconometric models. Second, we provide a summary of the toolbox we use at the IMF to build and solve models. Third, we provide some views about the weaknesses of these types of models and speculate how work on the models will likely evolve over time. This first paper is followed by a companion paper by Paolo Pesenti, which documents GEM's equations and theoretical structure. The remaining six papers provide examples of applications and extensions. In addition to the GEM applications and extensions presented in this volume, interested readers can find nontechnical summaries of GEM applications and the IMF's other DSGE models in Bayoumi and others (2004) and Botman and others (2007).

I. GEM and Exchange Rate Pass-through

GEM is a DSGE model that has rigorous behavioral foundations including a wide-ranging assortment of real and nominal adjustment costs that provide plausible short-run and long-run properties. It is these sources of inertia in adjustment within a clear theoretical framework that allow us to explore issues that cannot be adequately addressed with reduced-form empirical models. This section provides an intuitive nontechnical introduction to GEM by looking at the specific issue of exchange rate pass-through. In particular, we show that both short-run and long-run exchange rate pass-through will depend critically on how monetary policy responds to shocks and what type of shock is driving both the exchange rate and prices.

The model comprises firms that produce goods, households that consume and provide labor and capital to firms, and a government that taxes and spends. Consumption and production are characterized by standard constant elasticity of substitution (CES) utility and production functions. Many small firms produce differentiated goods using labor, capital, and intermediate goods such as components or commodities. Goods are differentiated, and as a result firms possess market power and restrict output to create excess profits—this setup allows a consideration of the effects of price markups. Capital and intermediate goods can be produced and traded while the labor force in each country is fixed, with workers making a choice between work and leisure. Workers also have market power and hence restrict their labor effort to raise their real wage.6 The workers own the firms in their country, and hence generate revenues in the form of wages and profits. Workers’ income is subsequently spent on home and foreign goods based on a CES utility function.
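As an illustration of the functional forms involved (the exact specification and notation used in GEM are documented in Pesenti's companion paper), a standard CES aggregator over home and foreign tradable goods takes the form

$$C = \left[a^{\frac{1}{\varepsilon}}\, C_H^{\frac{\varepsilon-1}{\varepsilon}} + (1-a)^{\frac{1}{\varepsilon}}\, C_F^{\frac{\varepsilon-1}{\varepsilon}}\right]^{\frac{\varepsilon}{\varepsilon-1}},$$

where $C_H$ and $C_F$ denote consumption of home-produced and imported goods, $a$ is a home-bias (taste) parameter, and $\varepsilon > 0$ is the elasticity of substitution between the two. Market power arises because a finite elasticity of substitution across individual varieties lets each firm or worker charge a markup over marginal cost.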

Figure 2 illustrates a two-country version of GEM. Production is split into two stages. In the first stage, labor (L), capital (K), and (possibly) land are used to create intermediate goods that can be traded (T), such as oil or components for manufacturing. These intermediate goods are then combined with additional labor and capital at home and abroad to produce final goods. A second feature, which is key for this paper, is the split of final goods into traded and nontraded goods. Another important feature to note is the distribution sector. There is strong evidence from microeconomic studies that the same goods are sold at different prices across countries. One way of incorporating this observation is to include a distribution sector in the model—see Corsetti and Pesenti (2005). All domestic and foreign goods need to go through this sector before they can be bought. As the distribution sector is assumed to consist of nontraded goods, this means that the final prices of all goods include both the cost of producing these goods and domestic distribution costs, so prices of imported tradable goods may not fully reflect changes in the exchange rate (even in the long run).7 Given the preferences of consumers, firms, and governments, the goods are distributed across countries.

Figure 2. GEM Flow Chart

GEM features a wide range of possible sources of inertia in both real and price adjustment. In general, these mechanisms are modeled to reflect quadratic costs of adjustment. In each case, the resulting dynamic equation is fully embedded in optimizing behavior by economic agents. A few parameters allow users to calibrate the relative strength of these costs in damping or delaying adjustment to shocks. For the basic model, these parameters have been chosen using a variety of information, but always with a view to ensuring that the model has reasonable simulation properties.
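For concreteness (with illustrative notation rather than GEM's exact specification), a typical quadratic adjustment cost on import volumes $M_t$ takes the form

$$\Gamma_M = \frac{\phi_M}{2}\left(\frac{M_t}{M_{t-1}} - 1\right)^2,$$

so that rapid changes in import volumes are costly, with the parameter $\phi_M \geq 0$ governing how strongly adjustment is damped or delayed; setting $\phi_M = 0$ turns the mechanism off. Analogous terms apply to the domestic prices of imported goods.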

Key Mechanisms for Understanding Exchange Rate Pass-through

Two mechanisms are highlighted in this paper. The model has costly adjustment for both quantities and domestic prices of imported goods. Thus, when a relative price change occurs—the price of imported goods at the port of entry rises, say—there is gradual adjustment to this both in terms of the domestic retail price of imported goods and in terms of the adjustment of import volumes to the new relative price. These two mechanisms, together with the presence of a distribution sector, provide the hierarchy of sources of inertia in pass-through that we study. In each case, we report the results of the shock with all these mechanisms turned off (the base-case model), and then the results as we add them back in, starting with the distribution sector, then the price adjustment effect, and finally the quantity effect. With this last step, we recover the full-model properties.

Exploring Exchange Rate Issues

We now consider a number of shocks to the model designed to illustrate the features described above and how both the nature of the shocks and these features influence the results and their interpretation.8 As we noted above, both the nature of the shocks and the nature of adjustment dynamics are important in understanding exchange rates and pass-through.

The shocks are wide-ranging. We attempt to address the pass-through question directly in studying the case where the shock is indeed to the exchange rate itself, implemented as a portfolio preference shock where the country risk premium changes. The discussion becomes more complex when the shock comes from another source and exchange rates, import prices, and consumer prices are all responding to a shock arising elsewhere in the system. We begin this part of the discussion by considering a persistent increase in domestic demand in the Home economy and then show how pass-through and the long-run responses of all nominal variables depend on the speed at which monetary policy brings inflation back to target. We then exploit the sophisticated sectoral structure of GEM by considering permanent productivity shocks in the nontraded goods sector and the traded goods sector, considered separately.

Shock to the exchange rate (risk premium)

We begin with the closest we can get to the pure pass-through question by considering the shock to the country risk premium. The shock is temporary but long-lasting, and it produces a long-lived nominal and real depreciation. The top panels of Figure 3 show the nominal exchange rate, the CPI, and trade prices. In the top-left panel, where all the model's adjustment features are turned off, we cannot see the trade prices because they follow the nominal exchange rate precisely. The import component of the CPI rises in line with the import weight in the CPI. In the end, pass-through is one to one, in the sense that the drift in the price level (about 5 percent after 40 quarters) fully reflects the nominal depreciation.

Figure 3. Temporary Increase in the Country Risk Premium (In percent)

The charts on the bottom of Figure 3 show the paths of inflation (year over year) and real and nominal interest rates. In the left panel, nominal rates are increased sharply to resist the inflationary consequences of the shock.

As we add the sources of inertia in import price pass-through, the scenario changes dramatically, especially when we add import price adjustment costs and import volume adjustment costs. The former, especially, delays and damps the response of import prices in the CPI, which reduces inflation and the monetary response. In the top-right panel, we still see one-to-one pass-through in the end, but the level drift is much smaller, about a fifth of the result with all the mechanisms shut off. In the bottom panel, we see that the policy response is quite muted, embodying the implications of the sluggishness in import price adjustment.

The charts in rows 2 and 3 show the real exchange rate and trade volumes, and then GDP, consumption, and investment. Without adjustment costs, the effect of the real depreciation is stronger and faster. When we add these features, the initial trade response is muted. Without adjustment costs, the model shows GDP rising initially in response to the strong exports; this effect disappears when we add the adjustment costs.

This exercise demonstrates that even when the shock comes into domestic inflation through import prices, there are many things that can break the “law of one price” at least at the CPI level. The presence of a distribution sector as well as both nominal and real rigidities all reduce both the short-run and long-run sensitivity of import prices and the CPI to changes in the exchange rate—see Figure 3. And the presence of these rigidities also means that the exchange rate must jump more in the short run to facilitate adjustments in the real economy.

Shock to domestic aggregate demand

Figure 4 reports results for a positive shock to aggregate demand in which there is a long-lasting increase in both consumption and investment in the domestic economy. The real exchange rate appreciates through the first part of the shock to bring about the necessary reduction in exports. In the case where all the adjustment mechanisms are cut off, the resulting nominal appreciation reduces import prices and inflation, and initially monetary policy must ease. However, as we add the model's sources of inertia, the more normal picture emerges: CPI inflation rises in the face of the higher demand and monetary policy raises nominal interest rates from the start. Note that the additional rigidities increase the nominal appreciation initially, and in the end there is more upward price-level drift. Here, the addition of volume adjustment costs plays an important role, blunting and delaying the response of exports, so that the overall increase in GDP relative to control is larger and longer-lasting, despite the extra monetary tightening.

Figure 4. Shock to Domestic Aggregate Demand (In percent)

If we were looking at the pass-through question here, and did not know the shock, we would have some puzzles. The nominal exchange rate appreciates and import prices decline as CPI inflation increases. This is readily understood, given the model and knowledge of the shock, but if one attempted to treat the exchange rate movement as if it were at least part of the shock, it might appear that “the lags were longer” or something similar, and there might be a forecast of more disinflationary effects to come. This would be unfortunate if it interfered with the tightening of policy in response to the domestic shock.

Shock to domestic aggregate demand with different monetary policy responses

Several commentators have argued that declining exchange rate pass-through in reduced-form inflation equations simply reflects improved monetary policy frameworks that have anchored long-term inflation expectations and reduced the amount of persistence in the inflation process—see Taylor (2000). To illustrate this point in GEM, Figure 5 shows what happens when we reduce the short-run interest rate response coefficient on inflation in the monetary reaction function from its base-case value of 0.50 to 0.25. We report just the comparison for the model with all the sources of inertia. The left panels repeat the full-model results reported in Figure 4, where the response coefficient on inflation is large enough to stabilize the increase in the price level at less than 2 percent above baseline. In the short run the nominal exchange rate appreciates by about 1 percent, reflecting the increase in interest rates. However, since the shock disappears over time, the exchange rate must eventually depreciate in line with the long-run increase in the price level. As can be seen in the right panels, reducing the response coefficient on inflation increases the long-run response of both the price level and the depreciation of the nominal exchange rate, showing that the response of monetary policy can be a very important factor in determining the long-run responses of all nominal variables to aggregate demand shocks. Obviously, when demand shocks represent an important source of variation in the data, it will appear that there is very strong pass-through from exchange rates into the price level whenever monetary policy is strongly accommodative.
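GEM's actual policy rule is documented in Pesenti's companion paper; purely as an illustration of where this coefficient sits, a generic inflation-targeting reaction function takes the form

$$i_t = \gamma\, i_{t-1} + (1-\gamma)\left[\,\bar{r} + \pi_t + \omega_\pi\,(\pi_t - \pi^{*})\,\right],$$

where $i_t$ is the policy rate, $\bar{r}$ the equilibrium real interest rate, $\pi^{*}$ the inflation target, and $\omega_\pi$ the response coefficient on inflation; the experiment in Figure 5 corresponds to cutting $\omega_\pi$ from 0.50 to 0.25. The notation here is illustrative and is not GEM's own.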

Figure 5. Shock to Domestic Aggregate Demand with Less Aggressive Monetary Policy (In percent)

Shock to productivity in the nontraded goods sector

Consider next a permanent real shock, an increase in productivity in the Home nontraded goods sector—see Figure 6. The higher productivity raises output in the nontraded goods sector, but there is insufficient domestic consumption demand for this output, so resources must switch to traded goods. To sell the extra tradables output, the real exchange rate must depreciate. This also helps switch consumption demand to domestic goods, because the relative price of imported goods rises.

Figure 6. Shock to Productivity in the Non-Traded Sector (In percent)

On the nominal side, the real depreciation is associated with a nominal depreciation and imported consumption prices rise, as does the CPI, relative to control. Here, however, since relative domestic nontraded goods prices are lower, there is much less general CPI drift. Import prices are up sharply and remain so, but the overall CPI is less affected.

In this permanent real shock, the real relative price change dominates the results in terms-of-trade prices and the nominal exchange rate. In Figure 6, we see that the path for these variables is not much affected by the inertia terms. The inertia terms affect the domestic price results and the overall CPI. Note from the figure that when the full model operates, there is very little monetary response, whereas if the inertia terms are all turned off, the latent inflation pressures trigger a substantial initial tightening.

This is an interesting shock for the issue of pass-through. We see large and permanent increases in the nominal exchange rate and import prices with little effect on CPI inflation. The decline in domestic nontraded goods prices coming from the productivity increase largely offsets the effect of higher import prices in the overall consumption bundle. If one were looking at this as an exchange rate shock and trying to find pass-through effects like those that arise from a true exchange rate (risk-premium) shock, one would be very puzzled.

Shock to productivity in the traded goods sector

Figure 7 reports the results for a permanent increase in productivity in the Home traded goods sector. Here, adding the distribution sector (going from column 1 to column 2) reduces the effect of the shock on GDP and its subcomponents, as nontraded goods represent a larger share of value added in the baseline. Interestingly, when all these mechanisms are turned off the real exchange rate appreciates, but when they are turned on the real exchange rate depreciates over the first 40 quarters of the shock. Although not reported, the very long-run response of the real exchange rate is an appreciation, but the presence of these mechanisms reverses the sign over a 10-year horizon. These simulations require permanent changes in relative prices, which imply a permanent wedge between the exchange rate and final consumption prices. Interestingly, adding the distribution sector tends to mute the response of inflation and interest rates, but adding both nominal and real adjustment costs in trade then requires more adjustment in inflation and interest rates.

Figure 7. Shock to Productivity in the Traded Sector (In percent)

II. Philosophy and Solution Procedures

The development of GEM and the Fund's other DSGE models has benefited enormously from a collection of tools and solution methods that have been developed over the years to support macromodeling in both academia and policymaking institutions. Having access to powerful tools and systematic methods for building, calibrating, and solving the models has made it much easier for new users in our modeling network to make progress quickly. This section starts by laying out the basic philosophy used to build the models and then provides a detailed roadmap for developing a steady-state calibration, computing perfect-foresight solutions of the nonlinear dynamic versions of the models, or taking local approximations around an initial steady-state solution.

Basic Philosophy

The basic philosophy behind GEM and the other DSGE models developed at the IMF is very different from that of the first generation of large-scale econometric models. These earlier models focused on fitting individual equations and then assembling the equations on a computer. The dynamics and forecasts produced by these equations were usually not considered very reliable, so the modelers would go back to their computers and fiddle with the equations and estimation routines until their models produced more plausible-looking results.9 Following the abandonment of these models and their research program in academia, a new research program was developed that focused more attention on dynamic optimization theory and on understanding the importance of real and nominal rigidities for macrodynamics. At the same time, a similar research program was underway inside a few academic and policymaking institutions trying to build a second generation of forward-looking macromodels for forecasting and policy analysis.10 This research program was also based on developing models with forward-looking behavior, but although these models had a coherent theoretical structure, many of their dynamic equations were not derived explicitly from strong choice-theoretic foundations. As the two research programs progressed it became natural to combine the best from both approaches. The end result has been a new generation of models with stronger choice-theoretic foundations and with sufficient nominal and real rigidities to produce plausible macrodynamics.

The development of GEM and the IMF's other DSGE models benefited enormously from work on the earlier models, as well as from the algorithms that were developed for solving them.11 The development of multicountry models involves considerably more complexity than small open-economy models because of a much larger set of variables and parameters. In addition, given that a major priority has been to develop models that produce realistic dynamics that are comprehensible to a fairly wide audience, it has been important to design the models in a way that makes it relatively easy to simplify them. In the design and teaching of GEM this philosophy is usually referred to as the seed and onion philosophy. The term seed refers to the computer programs that generate different versions of GEM, which include the full model depicted in Figure 2 as well as two smaller versions with fewer goods: one that eliminates trade in primary goods, and an even simpler one that further eliminates nontradables from the model. Building models with this type of modular structure has been a very useful approach for making it easy to peel off layers of the onion to understand how extending a model changes its predictions. In addition, because GEM was built to encompass a large collection of modern DSGE models developed in academia and other policymaking institutions, which tend to be much simpler in design, this approach has made it a great teaching device for going step by step from extremely simple models to more complicated versions.

The seed and onion philosophy does not stop at simplifying GEM's goods structure. It is also used extensively to obtain the initial steady-state calibration, which serves as a baseline for nonlinear perfect foresight solutions or for local approximations around the initial steady-state solution. To do this, the program that generates GEM also generates steady-state analogue models for the three versions of the model, which simply involves replacing all the leads and lags in the models with the contemporaneous values of the variables.12 Removing all the leads and lags creates a set of models that can be solved much more easily than the larger dynamic versions, whose solutions are complicated by the combination of dynamics and the additional nonlinearities that arise from adjustment costs.
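As a minimal illustration of this transformation, with purely generic notation rather than an actual GEM equation, a dynamic equation such as

$$y_t = \alpha\, y_{t-1} + \beta\, E_t\, y_{t+1} + \gamma\, x_t$$

has the steady-state analogue obtained by setting $y_{t-1} = y_t = E_t\, y_{t+1} = y$,

$$y = \alpha y + \beta y + \gamma x \quad\Longrightarrow\quad y = \frac{\gamma}{1 - \alpha - \beta}\, x,$$

a static equation that can be solved jointly with the rest of the steady-state system.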

The theoretical structure of GEM is consistent with an underlying steady-state equilibrium in which inflation and all real variables are constant.13 However, because of the presence of significant nonlinearities in the structure of the model, solving for this steady-state equilibrium for new calibrations can be nontrivial. The next part of this section describes a Newton-based algorithm designed to handle the specific features of large models like GEM, which features several nontrivial nonlinearities arising from its emphasis on microfoundations and imperfect competition, as well as from the existence of significant real and nominal rigidities.

Divide and conquer algorithms

The solution methods are referred to as divide-and-conquer (DAC) algorithms because they are based on a very simple idea in mathematics: it can be easier to solve a complex problem by breaking it down into a series of less complicated problems that are much easier to solve. We have found in practice that this simple solution technique works very well for models like GEM, which contain several nontrivial nonlinearities.

Although the DAC solution techniques are reasonably robust, they do require a basic understanding of the structure of the model as well as of Newton's method for solving a nonlinear system of equations. This section provides a simple introduction to what has to be known about the basic algorithms that we use to obtain both steady-state and perfect foresight solutions. To make the section accessible to a fairly wide audience we start with some examples of very small models so that researchers can understand more easily what is required to obtain solutions to GEM.14 We begin with a brief overview of the properties of Newton-based algorithms and then use this discussion to motivate a robust and efficient procedure for computing steady-state solutions of the model.

GEM’s nonstochastic steady state

Nonlinear rational expectations models like the GEM can be written as a set of n equations

$$E_t\left[F\left(y_{t-r}, \ldots, y_t, \ldots, y_{t+s},\; x_{t-r}, \ldots, x_t, \ldots, x_{t+s}\right)\right] = 0, \qquad (1)$$

where y is a vector of n endogenous variables and x is a vector of m exogenous variables. Variables may appear with a maximum of r lags and s leads. A basic requirement of this type of model is that the n equations must determine a unique solution for the current values of all the n endogenous variables, yt, given the values of the exogenous variables and the lag and lead values of the endogenous variables. Because GEM has been developed to be consistent with a steady state where inflation and all real variables are constant, the steady-state conditions can be imposed by simply transforming (1), replacing all the lead and lag variables in the model with their contemporaneous values.15 The resulting system can be expressed as a nonlinear system,

$$F(Y, X) = 0, \qquad (2)$$

where Y is a vector of n endogenous variables and X is a vector of m exogenous variables. Because of the existence of nonlinearities in GEM it is not possible to derive an analytical solution to Equation (2), and it is necessary to employ numerical methods to solve for the vector of n endogenous variables given the vector of m exogenous variables. Although there is no general algorithm that is guaranteed to find numerical solutions to this problem from arbitrary initial guesses for the endogenous variables, the development of algorithms that can invert large sparse matrices has allowed researchers to move beyond elementary first-order iterative methods, such as Gauss-Jacobi or Gauss-Seidel, to more robust and efficient Newton-based algorithms.16

Brief review of Newton-based algorithms

The basic strategy of Newton's method for solving (2) is to replace F with a linear approximation based on some initial guesses for the endogenous variables, solve the resulting linear system to update those guesses, and then construct a new linear approximation of F around the updated values. This iterative process continues until convergence is declared by passing a stopping criterion.

In practice, if the guesses for the endogenous variables are in the neighborhood of the true solution, Newton-based algorithms will find the true solution extremely rapidly. However, if these initial guesses are not in the neighborhood of the true solution, Newton-based algorithms may fail to converge.

The process of obtaining good starting guesses for Newton-based methods will obviously be more demanding in cases where models are being developed from scratch, or where researchers are actively investigating new structures for which they do not have any previous solutions and experience to use as a basis for determining initial guesses. In such circumstances, it is critical for researchers to understand the properties of their solution algorithms in some detail so that they can distinguish between convergence failures that arise from bad starting values, coding errors, or errors in the model’s theoretical structure.17 To understand the basic properties of Newton-based algorithms, it may be useful to start with some simple examples that illuminate some of the properties of these algorithms.

Example of a simple linear system

Equations (3) and (4) provide a simple linear two-equation representation of (2); for further simplicity, we assume that the value of the exogenous variable x1 has been set equal to 2. As can be seen in Figure 8, the solution to this problem is (y1, y2) = (1, 1).

Figure 8. Example of a Simple Linear Model

$$-y_1 + y_2 = 0, \qquad (3)$$
$$y_1 + y_2 - x_1 = 0, \qquad (4)$$

Solution of Equations (3) and (4) with Newton's method

Newton’s methods require four steps.

Step #1: To solve (3) and (4) with Newton's method, it is necessary to construct the Jacobian matrix of partial derivatives with respect to the endogenous variables. In this particular example, F(Y, X) can be written as

$$F(Y, X) = \begin{bmatrix} F_1(y_1, y_2, x_1) \\ F_2(y_1, y_2, x_1) \end{bmatrix} = \begin{bmatrix} -y_1 + y_2 + 0 \\ y_1 + y_2 - 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad (5)$$

where the Jacobian of F(Y, X) = 0 is,

$$\frac{\partial F}{\partial Y} = \begin{bmatrix} \dfrac{\partial F_1}{\partial y_1} & \dfrac{\partial F_1}{\partial y_2} \\[6pt] \dfrac{\partial F_2}{\partial y_1} & \dfrac{\partial F_2}{\partial y_2} \end{bmatrix} = \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}. \qquad (6)$$

Note that in this example the Jacobian does not depend on the values of y1 and y2 because the model is linear.

Step #2: Starting from an initial guess for (y1, y2), the system F(Y, X) = 0 can be evaluated to determine whether all of the equations in the system hold. Define residuals (RES1 and RES2) for Equations (3) and (4) as the difference between the left-hand side (LHS) and right-hand side (RHS) of each equation. In the simple example above, if the initial values for (y1, y2) were (0, 0), RES1 would be equal to 0 and RES2 would be equal to −2. Given the way the model has been written in the form F(Y, X) = 0, these residuals are simply the value of F(Y, X) evaluated at (y1, y2) = (0, 0). In this particular example the first equation passes through the coordinates (0, 0) and has a zero residual. However, the second equation crosses the y2 axis at (0, 2) and is not consistent with the initial set of guesses for y1 and y2. As will be shown below, the magnitude of these residuals and the value of the Jacobian determine how much the values of y1 and y2 change in each iteration.

Step #3: Starting from an initial guess of Y, Y(0), we can then solve (for k = 0)

$$\left[\frac{\partial F}{\partial Y}\right]_{Y^{(k)}} \Delta Y^{(k)} = -F\!\left(Y^{(k)}\right), \qquad (7)$$

to calculate a Newton step,

$$\Delta Y^{(k)} = -\left[\frac{\partial F}{\partial Y}\right]_{Y^{(k)}}^{-1} F\!\left(Y^{(k)}\right), \qquad (8)$$

and then perform a series of Newton iterations $Y^{(k+1)} = Y^{(k)} + \Delta Y^{(k)}$. In the simple example above, if the initial starting values for (y1, y2) are (0, 0), then (7) will be

$$\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} \Delta Y^{(0)} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}, \qquad (9)$$

and solving this for $\Delta Y^{(0)}$ and updating the initial guesses results in

$$Y^{(1)} = \begin{bmatrix} y_1^{(1)} \\ y_2^{(1)} \end{bmatrix} = \begin{bmatrix} y_1^{(0)} + \Delta y_1^{(0)} \\ y_2^{(0)} + \Delta y_2^{(0)} \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. \qquad (10)$$

Thus, starting from (y1, y2) = (0, 0), Newton's method raises both y1 and y2 by one unit and reaches the true solution of (y1, y2) = (1, 1) in a single iteration. Moreover, if we replace the initial starting values $(y_1^{(0)}, y_2^{(0)})$ with any pair of real numbers, Newton's method will still find the true solution in one iteration. Thus, a very important property of Newton's method is that, for a linear model, it finds the true solution in the first iteration from any arbitrary set of starting values for the endogenous variables. This is a very important strength of Newton's method over other methods. For example, using a standard linear approximation of the monetary transmission mechanism that features significant lags between the policy rate and inflation, Armstrong and others (1998) show analytically that while Newton's method is guaranteed to find a solution in one iteration, first-order iterative solution techniques such as Gauss-Seidel may take many iterations to converge, if indeed they converge at all. This lack of robustness is one reason why many model builders have abandoned first-order methods in favor of Newton-based methods.18
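These steps are easy to verify in a few lines of code. The sketch below, written in Python with NumPy purely for illustration (GEM itself is solved in TROLL), implements one Newton step on Equations (3) and (4) and confirms that a single iteration reaches the solution from an arbitrary starting guess:

```python
import numpy as np

def F(y, x1=2.0):
    """Residuals of equations (3) and (4): -y1 + y2 = 0 and y1 + y2 - x1 = 0."""
    y1, y2 = y
    return np.array([-y1 + y2, y1 + y2 - x1])

def jacobian(y):
    """Jacobian with respect to (y1, y2); constant because the model is linear."""
    return np.array([[-1.0, 1.0],
                     [ 1.0, 1.0]])

y = np.array([0.0, 0.0])                     # initial guess (y1, y2) = (0, 0)
y = y + np.linalg.solve(jacobian(y), -F(y))  # Newton step: solve J * dY = -F
print(y)                                     # -> [1. 1.] after a single iteration
```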

Example of a simple nonlinear system

Equations (11)–(13) provide a simple nonlinear three-equation representation of F(Y, X).

$$-y_1 + y_2 = 0, \qquad (11)$$
$$y_1^2 + y_2^2 - x_1 = 0, \qquad (12)$$
$$-\log(y_1) + y_3 = 0. \qquad (13)$$

This example has been chosen because it can be used to illustrate several of the problems that model builders face when debugging models like GEM, which represent a large collection of simultaneous linear and nonlinear equations as well as recursive blocks that may contain both. As can be seen in Figure 9, Equation (11) is a simple linear equation that passes through the coordinates (0, 0) for (y1, y2) and has a constant slope equal to one. Equation (12) is a simple nonlinear equation that defines a circle centered on the coordinates (0, 0) for (y1, y2) with a radius equal to √2, which is approximated by the number 1.41 in Figure 9. As can be seen in Figure 9, the system that includes only Equations (11) and (12) has multiple solutions for (y1, y2), at (1, 1) and (−1, −1). Equation (13) rules out the second solution because y3 is equal to the log of y1, a function whose domain is restricted to strictly positive values of y1. Equation (13) also represents a recursive block of the complete system because the variable y3 depends on y1, but y1 and y2 do not depend on y3.

Figure 9. Example of a Simple Nonlinear Model

As indicated in the previous section, Newton’s approach to solving a nonlinear system of equations, such as (11), (12), and (13), starts by linearizing the system around some initial starting values for (y1, y2, y3), finding a solution of the linearized model, and then using this solution to update the guesses for (y1, y2, y3). This process continues until Newton’s method either converges or fails to converge. In this simple example it is obvious that a negative value for the initial guess of y1 would be a poor choice and may cause a Newton-based algorithm to fail.19

Solution of Equations (11)–(13) with Newton's method

Again Newton’s methods require four steps.

Step #1: To solve the system defined by (11), (12), and (13) with Newton's method, it is necessary to construct the Jacobian matrix of partial derivatives with respect to the endogenous variables. In this particular example, the Jacobian of F(Y, X) = 0 is the following matrix.

$$\frac{\partial F}{\partial Y} = \begin{bmatrix} -1 & 1 & 0 \\ 2y_1 & 2y_2 & 0 \\ -\dfrac{1}{y_1} & 0 & 1 \end{bmatrix}. \qquad (14)$$

Note that in this example the Jacobian depends on the values of y1 and y2 because the model is nonlinear.

Step #2: Starting from an initial guess for (y1, y2, y3), the system F(Y, X) can then be evaluated to determine whether all of the equations in the system are consistent with these starting guesses. Define residuals (RES1, RES2, RES3) for Equations (11)–(13) as the difference between the LHS and RHS of each equation. In the simple example above, if the initial values for (y1, y2, y3) were all set equal to 0.1, the values for (RES1, RES2, RES3) would be (0, −1.9800, 2.4026), and the numerical version of the Jacobian in this initial iteration would be the following:

$$\frac{\partial F}{\partial Y} = \begin{bmatrix} -1 & 1 & 0 \\ 0.2 & 0.2 & 0 \\ -10 & 0 & 1 \end{bmatrix}. \qquad (15)$$

Step #3: Starting from this initial guess of Y, Y(0), we can then solve

$$\left[\frac{\partial F}{\partial Y}\right]_{Y^{(k)}} \Delta Y^{(k)} = -F\!\left(Y^{(k)}\right), \qquad (16)$$

to calculate a Newton step,

$$\Delta Y^{(k)} = -\left[\frac{\partial F}{\partial Y}\right]_{Y^{(k)}}^{-1} F\!\left(Y^{(k)}\right), \qquad (17)$$

and then perform a series of Newton iterations $Y^{(k+1)} = Y^{(k)} + \Delta Y^{(k)}$. In the simple example above, if the initial starting values for (y1, y2, y3) are (0.1, 0.1, 0.1), then (16) becomes

$$\begin{bmatrix} -1 & 1 & 0 \\ 0.2 & 0.2 & 0 \\ -10 & 0 & 1 \end{bmatrix} \Delta Y^{(0)} = \begin{bmatrix} 0 \\ 1.9800 \\ -2.4026 \end{bmatrix}, \qquad (18)$$

and solving this and updating the initial guesses results in

$$Y^{(1)} = \begin{bmatrix} y_1^{(1)} \\ y_2^{(1)} \\ y_3^{(1)} \end{bmatrix} = \begin{bmatrix} y_1^{(0)} + \Delta y_1^{(0)} \\ y_2^{(0)} + \Delta y_2^{(0)} \\ y_3^{(0)} + \Delta y_3^{(0)} \end{bmatrix} = \begin{bmatrix} 5.0500 \\ 5.0500 \\ 47.1971 \end{bmatrix}. \qquad (19)$$

As can be seen from (19), linearizing the model around the initial point (0.1, 0.1, 0.1) for (y1, y2, y3) results in a big Newton step that significantly overshoots the true solution. However, as can be seen in Table 1, which reports the process of convergence after the first iteration, an extremely accurate approximation of the true solution is achieved within six Newton iterations. This simple example illustrates a number of points about the properties of Newton's method.

Table 1. Newton's Method Applied to the Simple Nonlinear Three-Equation Example
  1. Even when the starting guesses are a long way from the true solution, Newton's method can perform very well if it converges. However, this is not a general result and will depend on the types of nonlinearities in the model. If the initial guesses for the endogenous variables are a long way from the true solution there can be convergence problems. This obviously poses a more serious problem for researchers who are attempting to build models from scratch, as they may not have any previous experience picking starting values.

  2. If the starting guesses are in the neighborhood of the true solution, where perturbations are well approximated by a linearized version of the model, Newton's method will find the true solution extremely quickly, in a few iterations. Thus, once a solution has been obtained that can be used as starting values in future computations, it is very difficult to beat Newton-based methods.

  3. The solution from Newton’s method is extremely accurate.

  4. Newton’s method is not foolproof and does require some knowledge of the structure of the model to use it efficiently. In the nonlinear example above, had the researcher used starting guesses that were negative instead of positive, Newton’s method would have attempted to converge to a solution where y1 and y2 was equal to −1. In these cases the convergence process would have failed as the computer would have attempted to take the log of a negative number. Furthermore, because the convergence process depends on the analytical form of the Jacobian, which in turn depends on exactly how each of the equations has been coded, this can affect the convergence process in significant ways.20

Solving the GEM with DAC algorithms

As we have tried to illustrate in the previous section, Newton-based algorithms can provide a very powerful tool for deriving the steady-state solution of the model. This is particularly the case when the model has already been developed, so that previous solutions and experience working with the model can be drawn upon to develop good starting guesses for the endogenous variables. However, when building a model from scratch there are two approaches to solving for the initial steady-state equilibrium. One approach is to code the model and then fiddle with the initial values for the endogenous variables until the model builder finds a solution. This approach can be time-consuming and difficult to replicate. The second approach is to employ a DAC strategy.

A DAC strategy involves breaking problems that are difficult to solve into a series of problems that are easier to solve. The particular DAC strategy employed here exploits the two most basic properties of Newton's method for solving large systems of nonlinear equations. First, as shown in one of the examples above, if the model is linear, Newton's method is guaranteed to find a solution in one iteration. Second, if the model does not contain large nonlinearities, Newton's method will find the solution extremely rapidly. These two properties suggest a very simple DAC strategy: first find the solutions of versions of a model that are easier to solve, and then use these solutions as starting values for solving more complicated versions.

Very simple example of a DAC algorithm

The simplest example of a DAC strategy used to solve for GEM's steady state arises when we already have a solution for a steady state based on some existing calibration of the model, but we would like to obtain another calibration with nontrivial changes in either the structural parameters or the exogenous variables. For example, after a model has been developed and an initial steady-state solution exists, we will have a solution to (20),

$$F\left(Y^{(0)}, X^{(0)}, \theta^{(0)}\right) = 0, \qquad (20)$$

where $Y^{(0)}$ is an initial vector of endogenous variables; $X^{(0)}$ is an initial vector of exogenous variables; and $\theta^{(0)}$ is an initial vector of parameters. In this case, if we want to develop a new calibration of the model based on different assumptions for the parameters and exogenous variables (say, $\theta^{(\tau)}$ and $X^{(\tau)}$), we may want to start with the existing values of $Y^{(0)}$, $\theta^{(0)}$, and $X^{(0)}$ and then gradually eliminate the difference between (20) and what we would like to compute, which is (21).

$$F\left(Y^{(\tau)}, X^{(\tau)}, \theta^{(\tau)}\right) = 0. \qquad (21)$$

Thus, based on the fundamental property of Newton-based algorithms that they will find the true solution extremely quickly when the starting values for Y are in the neighborhood of the true solution $Y^{(\tau)}$, we can define a sequence of DAC steps j such that each step, defined by (22),

$$F\left(Y^{(j)}, X^{(j)}, \theta^{(j)}\right) = 0, \qquad (22)$$

is always in the neighborhood of the last solution $F\left(Y^{(j-1)}, X^{(j-1)}, \theta^{(j-1)}\right) = 0$.

Interestingly, the discussion above did not say anything about the choice of the magnitude of the DAC steps, partly because it does not matter much in practice given the efficiency of available sparse matrix code. Thus far, we have only experimented with two types of simple methods for determining the length of each DAC step, and we leave it to others to experiment with other procedures for determining the optimal step length for the particular problem they are interested in. We would like to emphasize that this simple DAC approach is quite general and has also been used to solve perfect foresight problems on the nonlinear versions of the models using the stacked-time algorithms available in TROLL's simulation toolbox.
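In pseudocode terms the whole strategy fits in a short loop. The Python sketch below is a stylized illustration under our own naming (solve_newton and solve_dac are hypothetical helpers, not part of GEM's TROLL implementation), using equal-sized steps, one of the simple step-length rules mentioned above:

```python
import numpy as np

def solve_newton(F, J, y0, tol=1e-10, max_iter=50):
    """Plain Newton solver: repeat y <- y + solve(J(y), -F(y)) until residuals vanish."""
    y = y0.copy()
    for _ in range(max_iter):
        if np.max(np.abs(F(y))) < tol:
            return y
        y = y + np.linalg.solve(J(y), -F(y))
    raise RuntimeError("Newton failed to converge; try smaller DAC steps")

def solve_dac(F, J, y0, theta0, theta_target, n_steps=10):
    """Move the parameter vector from theta0 to theta_target in small steps,
    re-solving at each step with the previous solution as the starting guess
    (the sequence of problems in equation (22))."""
    y = y0
    for j in range(1, n_steps + 1):
        theta_j = theta0 + (j / n_steps) * (theta_target - theta0)  # equal DAC steps
        y = solve_newton(lambda v: F(v, theta_j), lambda v: J(v, theta_j), y)
    return y
```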

Strategies for getting initial solutions for new or extended models

The discussion above suggests a very simple strategy for obtaining solutions for models where there are no previous solutions, or for models that contain significant nonlinearities. Recall that a basic property of this approach is that once any solution to a model has been found, it is relatively straightforward to find alternative solutions using previous solutions as starting points.

For two-country models like the version of GEM depicted in Figure 2, this suggests a simple, robust, and efficient strategy for obtaining steady-state calibrations. For example, if we assume that the two countries have identical size, tastes, and production capabilities, we know in advance that relative prices such as the real exchange rate will be equal to 1 in the initial steady state. Thus, one strategy for obtaining a solution for a two-country model is to start off with assumptions for parameters and exogenous variables that make it easier to guess values for the endogenous variables, and then use the resulting solutions as guesses for the desired calibration. In addition, models like GEM will generally contain functions with parameter values that simplify those functions in ways that tell model builders more about the solutions. For example, it is well known that CES functions nest Cobb-Douglas functions, and in the latter case the exponents represent share parameters, such as labor's or capital's income share in a production function. Thus, one strategy is to start off with elasticities of substitution that are close to 1 and then, once a solution has been found, take a series of DAC steps to move these parameter values to their desired levels.
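To make the Cobb-Douglas nesting concrete (a standard result, with illustrative notation): a two-input CES production function

$$Y = \left[\alpha^{\frac{1}{\xi}}\, K^{\frac{\xi-1}{\xi}} + (1-\alpha)^{\frac{1}{\xi}}\, L^{\frac{\xi-1}{\xi}}\right]^{\frac{\xi}{\xi-1}}$$

converges to the Cobb-Douglas form $Y = K^{\alpha} L^{1-\alpha}$ as the elasticity of substitution $\xi$ approaches 1, at which point the exponent $\alpha$ is simply capital's income share. Starting a calibration near $\xi = 1$ therefore lets observable factor shares pin down the exponents, after which DAC steps move $\xi$ to its desired value.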

Strategies for getting steady-state calibrations quickly and reliably

In most cases researchers will want to obtain a calibration of the steady state that is consistent with desired values for the model's endogenous variables. For example, the researcher may want to calibrate variables such as trade, consumption, or investment, measured as shares of GDP, to be consistent with particular average values from the national accounts. Obviously, in a general equilibrium model the values of these variables depend on a significant number of parameters related to the underlying demand and supply functions, and it would be very time-consuming to adjust these parameters by hand to obtain results close to the desired calibration. To facilitate efficient and robust calibration of the models, we have developed procedures that temporarily make certain endogenous variables exogenous and then find the values of some truly exogenous variables or parameters that will support an equilibrium where these endogenous variables are tuned to their desired values. For example, if we desire a particular value of the real exchange rate and of the export-to-GDP ratio in a particular country, we will typically back out a taste parameter in the utility function that determines the demand for imports in the two countries, which, given an assumption of balanced trade and the export supply functions, will determine trade flows and the equilibrium real exchange rate. In the programs that have been developed to calibrate GEM this mapping includes a large number of variables, and users are provided with a list of the variables for which they need target values to obtain a calibration relatively quickly. Again, this process of finding the desired calibration is both robust and efficient because of the DAC algorithms.
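As a stylized illustration of this inversion (all names and the mapping here are hypothetical placeholders, not GEM code), suppose we want the taste parameter a that delivers a target steady-state import-to-GDP ratio:

```python
from scipy.optimize import brentq

def steady_state_import_share(a):
    """Placeholder mapping from the taste parameter to the steady-state import share.
    In practice this would run the full DAC/Newton steady-state solve for candidate a."""
    return 0.5 * (1.0 - a)  # illustrative only

target_share = 0.25  # e.g., imports average 25 percent of GDP in the national accounts
a_star = brentq(lambda a: steady_state_import_share(a) - target_share, 0.01, 0.99)
print(a_star)  # the taste parameter supporting the desired calibration (here 0.5)
```

In the actual programs described above, this inversion is handled inside the solver itself, by declaring the targeted endogenous variables temporarily exogenous and the corresponding parameters endogenous, so that many targets can be imposed jointly.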

III. Model Development Priorities

There are two major areas for future development both inside and outside the Fund. The first involves improving the IMF’s DSGE models’ structures to include better macrofinancial linkages and the second involves exploiting Bayesian methods to take the models to the data. This section discusses a bit of the work that is already underway or is in the planning stage.

Significant effort has been underway for some time now, in both academia and policymaking institutions, to create models with better macrofinancial linkages. Having extended the basic analytical structure of modern DSGE models so that we can capture the effects of fiscal policy, the Modeling Unit has started to incorporate different types of financial accelerators into GIMF and into smaller DSGE models that focus on specific issues.21 These features will allow us to better understand the implications of boom and bust cycles caused by the interaction of bank credit and shocks that cause large movements in asset prices. More importantly, unlike reduced-form econometric models, which may be very useful for conjunctural analysis, these extended DSGE models will contain sufficient structure to allow us to study the role of different types of policies in minimizing the economic costs associated with these boom and bust cycles.22

Over the last few years there have been enormous advances in applying Bayesian methods to macromodels. Most of the work thus far has been on either single-country open-economy models or closed-economy models.23 Applying these methods to multicountry versions of the models presents some difficult challenges, given the large number of structural parameters and stochastic processes that need to be estimated. However, the potential payoff is enormous, as Bayesian methods offer many advantages over classical estimation procedures. First, at a general level, they will be very useful in helping to bridge the enormous gap between econometric theory and how model builders parameterize models in practice. Second, by explicitly accounting for and penalizing the use of priors in the model-building process, they allow us to do more meaningful statistical inference. Third, in models where some parameters are weakly identified by the data and classical procedures break down, the use of priors can help prevent the model's parameters from wandering off into implausible regions. This robustness property of Bayesian methods can be very helpful in the model-building process, as it provides solutions that can be analyzed. Fourth, in practice model builders can specify fairly flexible stochastic processes, which are necessary in many cases to generate sensible expectations and impulse response functions for standard shocks. Fifth, the number of stochastic shocks can be larger than the number of observable variables, allowing the model to interpret new data and to generate predictions much more flexibly than classical procedures applied to models where parameters are weakly identified. Sixth, the estimation strategy does not have to involve prefiltering of the data, in which case there will be a better mapping between the model's in-sample fit and its forecasting ability. Seventh, the model is estimated as a system, which in practice can be expected to perform better when there is important simultaneity, which is the basic nature of any DSGE model. Eighth, the presence of unit roots does not create the enormous problems that arise in classical econometric theory. Ninth, economists and policymakers are as interested in the uncertainty surrounding the parameter estimates as they are in the point estimates themselves: forecasters and policymakers are keenly interested in how parameter uncertainty translates into forecast confidence bands and in how tail risks might influence their decisions and the potential costs of making the wrong decisions. These are only some of the many practical benefits of Bayesian estimation procedures, and given recent advances in developing user-friendly routines that bring these procedures to a much wider group of people, it is very likely that more people will abandon their old ways of doing things and adopt them.

REFERENCES

  • Adolfson, M., S. Laséen, and J. Lindé, forthcoming, “Evaluating an Estimated New Keynesian Small Open Economy Model,” Journal of Economic Dynamics and Control (October), pp. 1–32.

  • Armstrong, J., R. Black, D. Laxton, and D. Rose, 1998, “A Robust Method for Simulating Forward-Looking Models,” Journal of Economic Dynamics and Control, Vol. 22, pp. 489–501.

  • Bayoumi, T., D. Laxton, and P. Pesenti, 2004, “Benefits and Spillovers of Greater Competition in Europe: A Macroeconomic Assessment,” ECB Working Paper No. 341 (Frankfurt, European Central Bank).

  • Bayoumi, T., and others, 2004, “GEM: A New International Macroeconomic Model,” IMF Occasional Paper No. 239 (Washington, International Monetary Fund).

  • Black, R., D. Laxton, D. Rose, and R. Tetlow, 1995, “The Bank of Canada’s New Quarterly Projection Model: The Steady-State Model,” Technical Report No. 72 (Ottawa, Bank of Canada), January.

  • Blanchard, O., 1985, “Debt, Deficits, and Finite Horizons,” Journal of Political Economy, Vol. 93, pp. 223–47.

  • Botman, D., D. Muir, D. Laxton, and A. Romanov, 2006, “A New-Open-Economy-Macro Model for Fiscal Policy Evaluation,” IMF Working Paper 06/045 (Washington, International Monetary Fund).

  • Botman, D., P. Karam, D. Laxton, and D. Rose, 2007, “DSGE Modeling at the Fund: Applications and Further Developments,” IMF Working Paper 07/200 (Washington, International Monetary Fund).

  • Boucekkine, R., 1995, “An Alternative Methodology for Solving Nonlinear Forward-Looking Models,” Journal of Economic Dynamics and Control, Vol. 19, No. 4, pp. 711–34.

  • Brayton, F., and P. Tinsley, 1996, “Guide to FRB/US: A Macroeconometric Model for the United States,” FEDS Working Paper 1996-42 (Washington, Federal Reserve Board).

  • Bryant, Ralph C., Peter Hooper, and Catherine L. Mann, 1993, Evaluating Policy Regimes: New Research in Empirical Macroeconomics (Washington, Brookings Institution).

  • Coletti, D., B. Hunt, D. Rose, and R. Tetlow, 1996, “Bank of Canada’s New Quarterly Projection Model. Part 3, The Dynamic Model: QPM,” Technical Report No. 75 (Ottawa, Bank of Canada).

  • Corsetti, G., and P. Pesenti, 2005, “International Dimensions of Optimal Monetary Policy,” Journal of Monetary Economics, Vol. 52, No. 2, pp. 281–305.

  • Edge, R., M. Kiley, and J.P. Laforte, 2006, “A Comparison of Forecast Performance Between Federal Reserve Forecasts, Simple Reduced-Form Models, and a DSGE Model” (unpublished).

  • Erceg, C., L. Guerrieri, and C. Gust, 2005, “SIGMA: A New Open Economy Model for Policy Analysis,” Working Paper (Washington, Federal Reserve Board).

  • Hollinger, P., 1996, “The Stacked-Time Simulator in TROLL: A Robust Algorithm for Solving Forward-Looking Models,” paper presented at the Second International Conference on Computing in Economics and Finance, Geneva, Switzerland, June 26–28 (Needham, Massachusetts, Intex Solutions).

  • Juillard, M., 1996, “DYNARE: A Program for the Resolution and Simulation of Dynamic Models With Forward Variables Through the Use of a Relaxation Algorithm,” CEPREMAP Working Paper No. 9602 (Paris, CEPREMAP).

  • Juillard, M., O. Kamenik, D. Laxton, and M. Kumhof, 2007, “Optimal Price Setting and Inflation Inertia in a Rational Expectations Model,” Journal of Economic Dynamics and Control (October), pp. 1–38.

  • Juillard, M., P. Karam, D. Laxton, and P. Pesenti, 2005, “Welfare-Based Monetary Policy Rules in an Estimated DSGE Model of the US Economy,” Working Paper.

  • Juillard, M., P. Karam, D. Laxton, and P. Pesenti, 2007, “Measures of Potential Output from an Estimated DSGE Model of the United States,” paper presented at a workshop on “Issues in Measuring Potential Output,” Ankara, Turkey, January 16.

  • Juillard, M., D. Laxton, H. Pioro, and P. McAdam, 1998, “An Algorithm Competition: First-Order Techniques Versus Newton-Based Techniques,” Journal of Economic Dynamics and Control, Vol. 22, pp. 1291–318.

  • Kumhof, M., and D. Laxton, 2007, “A Party Without a Hangover? On the Effects of U.S. Fiscal Deficits,” IMF Working Paper 07/202 (Washington, International Monetary Fund).

  • Laffargue, J.P., 1990, “Résolution d’un Modèle Macroéconomique Avec Anticipations Rationnelles,” Annales d’Economie et Statistique, Vol. 17, pp. 97–119.

  • Laxton, D., and P. Pesenti, 2003, “Monetary Rules for Small, Open, Emerging Economies,” Journal of Monetary Economics, Vol. 50, No. 5, pp. 1109–52.

  • Laxton, D., and others, 1998, “MULTIMOD Mark III: The Core Dynamic and Steady-State Models,” IMF Occasional Paper No. 164 (Washington, International Monetary Fund).

  • Masson, P., S. Symansky, and G. Meredith, 1990, “MULTIMOD Mark II: A Revised and Extended Model,” IMF Occasional Paper No. 71 (Washington, International Monetary Fund).

  • Murchison, S., and A. Rennison, 2006, “TOTEM: The Bank of Canada’s New Quarterly Projection Model,” Technical Report No. 97 (Ottawa, Bank of Canada).

  • Obstfeld, M., and K. Rogoff, 1995, “Exchange Rate Dynamics Redux,” Journal of Political Economy, Vol. 103, pp. 624–60.

  • Obstfeld, M., and K. Rogoff, 1996, Foundations of International Macroeconomics (Cambridge, Massachusetts, MIT Press).

  • Sims, C., 2002, “The Role of Models and Probabilities in the Monetary Policy Process,” Brookings Papers on Economic Activity, Vol. 2, pp. 1–40.

  • Smets, F., and R. Wouters, 2004, “Shocks and Frictions in Business Cycles: A Bayesian DSGE Approach,” ECB Working Paper (Frankfurt, European Central Bank and the National Bank of Belgium).

  • Taylor, J., 1993, Macroeconomic Policy in a World Economy: From Econometric Design to Practical Operation (New York, W.W. Norton).

  • Taylor, J., 2000, “Low Inflation, Pass-Through and the Pricing Power of Firms,” European Economic Review, Vol. 44, pp. 1389–408.

*

Douglas Laxton is Assistant to the Director of the IMF Research Department. The author thanks all the people involved in the development of the Global Economy Model (GEM) and the IMF’s other DSGE models. We owe a great debt to Ken Rogoff for inviting Paolo Pesenti to the IMF and focusing our attention on getting the job done quickly. We also thank Raghuram Rajan, Simon Johnson, and others for making further development of these types of models a priority and for supporting outreach efforts in creating a DSGE modeling network. We acknowledge the invaluable help of Laura Leon in preparing the paper and of Peter Hollinger, Michel Juillard, Dirk Muir, and Susanna Mursula in developing procedures used in model simulations. Thanks as well to the Econometric Support Team, who have helped with training and with getting the models used. Finally, we appreciate comments from Robert Flood, Peter Hollinger, and Turgut Kisinbay on an earlier draft.

1

See Laxton and Pesenti (2003) for the first version of the GEM and Botman and others (2007) for a summary of applications and extensions of the model.

2

See Botman and others (2006) for a description of GFM and Kumhof and Laxton (2007) for a description of GIMF. Both these models are based on the finite-planning horizon paradigm, which can give rise to strong non-Ricardian behavior—see Blanchard (1985).

3

See Botman and others (2007) for a summary of applications and extensions based on GEM, GFM, and GIMF.

4

For some recent examples of modern DSGE models in central banks, see Erceg, Guerrieri, and Gust (2005); Murchison and Rennison (2006); and Adolfson and others (forthcoming).

5

See Sims (2002) for a critique of the methods that were used to parameterize the earlier generation of macromodels. DYNARE is a user-friendly front end for MATLAB written by Michel Juillard and his colleagues at CEPREMAP. It includes a state-of-the-art collection of tools designed for estimation and obtaining either perfect foresight solutions on nonlinear models or local approximations around a steady-state solution.

6

Bayoumi, Laxton, and Pesenti (2004) show that higher levels of competition in both the labor market and the goods market will permanently raise living standards and make monetary policy easier to implement by increasing the sensitivity of inflation to market conditions.

7

The largest version of GEM, and the one used in the simulations below, also distinguishes between imports of investment goods and consumption goods. Differences in import intensities can have important implications for how the economy responds to shocks that change the real exchange rate. For countries that import a substantial amount of capital, an appreciation of the real exchange rate can raise living standards on a sustainable basis by reducing the cost of capital.

8

The model used is our standard two-country training version of GEM, where the Home economy has been calibrated to be a small open economy. The code for the model and experiments is available in TROLL and can be obtained from the author’s website at www.douglaslaxton.org. People interested in accessing the code can request a trial version of TROLL from INTEX Solutions.

9

For a more detailed explanation of why single-equation fitting was abandoned in favor of calibration methods, see Coletti and others (1996).

11

For a discussion of the stacked-time algorithms developed at CEPREMAP, the Bank of Canada, and INTEX Solutions, see Laffargue (1990); Boucekkine (1995); Juillard (1996); Armstrong and others (1998); and Juillard and others (1998). For a discussion of the algorithms available in TROLL, see Hollinger (1996).

12

This approach was followed earlier in the development of the Bank of Canada’s Quarterly Projection Model and the IMF’s Mark III version of MULTIMOD. See Black and others (1995) and Laxton and others (1998).

13

The model is consistent with a balanced growth path, but all real variables have been normalized in a way that removes growth.

14

Despite the extreme simplicity of these examples they can be used to explain a large number of the solution problems encountered in solving large models like the GEM. Some short training programs have been written in TROLL so that users can gain some experience solving smaller models before moving on to much larger problems.

15

In the base-case variant of GEM the monetary policy regime is assumed to be inflation targeting. It is well known that in an inflation-targeting regime the price level will have a unit root and will be subject to random drift. To remove the unit root from the model, GEM has been transformed by expressing all nominal variables as ratios to the CPI. However, after a dynamic solution of the model has been obtained, it is possible to construct the price level by using the model’s measure of inflation to cumulate the CPI-based price level from an initial condition drawn from history. Likewise, once the CPI-based price level has been created, all other nominal prices can be constructed using the model’s measures of relative prices. The monetary policy reaction function in GEM is sufficiently general that it allows for targeting either the exchange rate or the price level.
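
A minimal sketch of this two-step reconstruction, using hypothetical inflation and relative-price paths in place of actual model output:

```python
# Step 1: cumulate a CPI level from model inflation, starting from a
# historical initial condition. Step 2: rebuild a nominal price from a
# model relative price. All numbers here are made up for illustration.
import numpy as np

pi = np.array([0.005, 0.006, 0.004, 0.005])            # quarterly log inflation
rel_import_price = np.array([1.02, 1.01, 1.00, 0.99])  # P_import / CPI from the model

cpi0 = 100.0                                # initial condition drawn from history
cpi = cpi0 * np.exp(np.cumsum(pi))          # step 1: CPI-based price level
import_price = rel_import_price * cpi       # step 2: a nominal price rebuilt

print(np.round(cpi, 2), np.round(import_price, 2))
```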

16

See Armstrong and others (1998) and Juillard and others (1998) for a comparison of elementary first-order iterative methods and Newton-based algorithms.

17

Obviously, this debugging process will be extremely difficult in cases where researchers do not have access to robust solution methods because it will be much more difficult to distinguish between different types of errors when several errors are present at the same time—see Armstrong and others (1998) for a discussion of the difficulties associated with using first-order methods to solve nonlinear macromodels with significant lags in the monetary transmission mechanism.

18

As can be seen in the example above, implementing a Newton-based method requires software that can create and evaluate a Jacobian and then invert this matrix. For small equation systems the matrix-inversion step is trivial. For large models, however, it can become extremely inefficient unless it exploits the sparse structure of the Jacobian. Our experience thus far suggests that the standard sparse-matrix code available in MATLAB or TROLL handles the nonstochastic steady state of GEM without difficulty, and we do not anticipate problems for versions of the model that include many country blocks.
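
The computational point can be seen in a small sketch (a toy tridiagonal system of 1,000 equations, not GEM itself): each Newton step solves J dx = -F(x) with a sparse factorization instead of forming and inverting a dense Jacobian.

```python
# Newton's method on a nonlinear tridiagonal system, exploiting sparsity.
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import spsolve

n = 1000

def F(x):
    # System: x_i^3 - x_{i-1} - x_{i+1} - 1 = 0 (boundary neighbors = 0).
    r = x**3 - 1.0
    r[1:] -= x[:-1]
    r[:-1] -= x[1:]
    return r

def J(x):
    # Sparse tridiagonal Jacobian: 3*x_i^2 on the diagonal, -1 off-diagonal.
    return csc_matrix(diags([-np.ones(n - 1), 3.0 * x**2, -np.ones(n - 1)],
                            offsets=[-1, 0, 1]))

x = np.ones(n)
for it in range(50):
    f = F(x)
    if np.max(np.abs(f)) < 1e-10:
        break
    x += spsolve(J(x), -f)      # sparse solve; never forms a dense inverse
print(f"converged in {it} iterations, max residual {np.max(np.abs(F(x))):.2e}")
```

Because the Jacobian of a stacked-time system is banded, the sparse solve costs roughly O(n) here, versus O(n^3) for a dense inversion.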

19

Several Newton-based algorithms developed over the years attempt to resolve convergence failures automatically, without assistance from the user, and are now available in both TROLL and MATLAB. However, researchers interested in extending the GEM in nontrivial ways need some understanding of what these algorithms are doing. To facilitate this learning process we have developed some very simple TROLL programs with small examples that allow researchers to experiment with alternative starting values and solution techniques.
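
As one illustration of what such algorithms do internally, the following sketch (a hypothetical one-equation problem, not the TROLL or MATLAB implementations) damps the Newton step with a backtracking line search whenever the full step fails to reduce the residual:

```python
# Damped Newton with backtracking: arctan(x) = 0 makes full Newton steps
# diverge from starting values beyond roughly |x| > 1.39, but halving the
# step until the residual shrinks restores convergence.
import numpy as np

def f(x):
    return np.arctan(x)

def fprime(x):
    return 1.0 / (1.0 + x**2)

def damped_newton(x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x
        step = -f(x) / fprime(x)
        lam = 1.0
        # Backtrack: halve the step until the residual actually decreases.
        while abs(f(x + lam * step)) >= abs(f(x)):
            lam *= 0.5
        x += lam * step
    return x

print(damped_newton(10.0))   # converges to 0; undamped Newton would diverge
```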

20

One of the example programs in TROLL is a three-equation model consisting of Equations (11)–(13). To show how the coding of the equations matters, we provide a number of exercises, including replacing the equation $y_1^2 + y_2^2 - x_1 = 0$ with $y_1 = \sqrt{x_1 - y_2^2}$, to demonstrate that strategies for choosing starting values depend on how the equations are coded. Interestingly, aside from kinks caused by the zero-interest-rate floor, most of the nonlinearities encountered in DSGE models are well captured by training on these extremely simple examples.
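
The following sketch reproduces the point outside TROLL (the exogenous values and starting guesses are hypothetical): Newton’s method on the implicit coding is defined for any real starting value but can converge to either root, whereas the square-root coding fails outright the moment an iterate makes its argument negative.

```python
# Two codings of the same relationship: f(y1) = y1^2 + y2^2 - x1 = 0
# versus y1 = sqrt(x1 - y2^2). Admissible starting values differ.
import math

x1, y2 = 2.0, 1.0               # hypothetical exogenous values; roots are y1 = +/-1

def newton_implicit(y1, steps=20):
    # Newton on the implicit form: defined for every real starting value.
    for _ in range(steps):
        y1 -= (y1**2 + y2**2 - x1) / (2.0 * y1)
    return y1

print(newton_implicit(5.0))     # converges to 1.0 from a distant start
print(newton_implicit(-5.0))    # converges to the other root, -1.0

try:
    math.sqrt(x1 - 1.5**2)      # recoded form evaluated at a bad trial y2 = 1.5
except ValueError as e:
    print("square-root coding fails:", e)
```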

21

A similar project is underway at the Bank of Canada and the European Commission to incorporate stronger macrofinancial linkages into GEM and QUEST, and at some point we will conduct a formal model-comparison exercise that includes all three models.

22

We also have a project underway to build a small-scale, reduced-form, multicountry model that can be used for forecasting and risk assessments.

23

See Smets and Wouters (2004); Juillard and others (2005); Edge, Kiley, and Laforte (2006); Juillard and others (2007, 2008); and Adolfson and others (forthcoming).
