What (Really) Accounts for the Fall in Hours After a Technology Shock?

Contributor Notes

Author’s E-Mail Address: nrebei@imf.org


Abstract

The paper asks how state of the art DSGE models that account for the conditional response of hours following a positive neutral technology shock compare in a marginal likelihood race. To that end we construct and estimate several competing small-scale DSGE models that extend the standard real business cycle model. In particular, we identify from the literature six different hypotheses that generate the empirically observed decline in worked hours after a positive technology shock. These models alternatively exhibit (i) sticky prices; (ii) firm entry and exit with time to build; (iii) habit in consumption and costly adjustment of investment; (iv) persistence in the permanent technology shocks; (v) labor market friction with procyclical hiring costs; and (vi) Leontief production function with labor-saving technology shocks. In terms of model posterior probabilities, impulse responses, and autocorrelations, the model favored is the one that exhibits habit formation in consumption and investment adjustment costs. A robustness test shows that the sticky price model becomes as competitive as the habit formation and costly adjustment of investment model when sticky wages are included.

I. Introduction

As initially proposed by Galí (1999), the empirical evidence suggests that a positive technology shock leads to a decline in labor inputs. In addition, Francis and Ramey (2005), Liu and Phaneuf (2007), Whelan (2009), and Wang and Wen (2011) find that this result is robust to different specifications of the structural vector autoregression (SVAR) model and the measure of productivity used.1 On the other hand, the standard real business cycle model fails to account for this empirical regularity. This paper uses a full information method and quarterly U.S. data to estimate small-scale structural general equilibrium models exhibiting alternative theoretical specifications that could cause hours to fall following a technology shock. Our main focus is to identify, using formal tests, the theoretical assumption that is best supported by the data. A survey of the recent literature reveals six competing hypotheses that generate a decline in hours after a positive neutral technology shock: (i) sticky prices (à la Galí, 1999); (ii) firm entry and exit with time to build (à la Wang and Wen, 2011); (iii) habit in consumption and costly adjustment of investment (à la Francis and Ramey, 2005); (iv) persistence in the permanent technology shocks (à la Lindé, 2009); (v) labor market friction with procyclical hiring costs (à la Mandelman and Zanetti, 2010); and (vi) a Leontief production function with labor-saving technology shocks (à la Francis and Ramey, 2005). To our knowledge, this is the first paper that uses a full information approach to identify the assumptions most likely responsible for the decline in worked hours following a positive technology shock.

Our main result is that, in terms of model posterior probabilities, impulse responses, and autocorrelations, introducing habit formation in consumption and investment adjustment costs in a small-scale dynamic stochastic general equilibrium (DSGE) model significantly improves the model fit; at the same time, the model accurately accounts for the negative short-term correlation between output and labor conditional on a technology shock. Large values of the posterior odds ratio provide unambiguous evidence in favor of the model with habit in consumption and costly adjustment of investment as specified by Francis and Ramey (2005). The sticky price hypothesis comes second, followed by the introduction of labor frictions. The versions of the model embedding persistent technology shocks and the Leontief production function are ranked fourth and fifth, respectively. Finally, the model encompassing entry-exit firms with time to build is not supported by the data, even when compared with the plain vanilla RBC structure. This leads us to conclude that the observed decline in hours following a positive technology shock is most likely generated by the combination of habit in consumption and costly adjustment of investment, which markedly dominates the alternative assumptions. This result is robust to extending the different models with additional features such as nominal wage rigidity; however, it then becomes harder to statistically discriminate between the sticky price and habit formation models.

The rest of the paper is organized as follows. Section II revisits the stylized facts related to the response of endogenous variables to technology shocks, then describes the benchmark model. Section III describes the different versions of the model, with particular attention paid to a limited information identification scheme. Section IV sets out the Bayesian maximum likelihood estimation and examines the ability of the models to capture the main characteristics of the actual data. Section V checks the robustness of our results with respect to the initial reference model. Section VI concludes.

II. Stylized facts and the RBC model

A. Stylized facts

For the sake of identifying technology shocks we adopt the commonly used long-run identification applied by Galí (1999), Christiano, Eichenbaum, and Vigfusson (2004), and various others. Following Blanchard and Quah (1989), the additional restrictions needed to identify the structural shocks come from the long-run influence of the model's shocks on the model's variables. To do so, let us consider an estimated reduced-form VAR of order p,

$$X_t = D_1 X_{t-1} + D_2 X_{t-2} + \dots + D_p X_{t-p} + u_t,$$

where $X_t = (X_{1,t}, \dots, X_{n,t})'$ is an n × 1 vector of n endogenous variables at date t, $D_1, \dots, D_p$ are n × n coefficient matrices, and $u_t$ is the n × 1 reduced-form error vector with Var(u) = Ω. Provided the process is invertible, it can be transformed into the moving-average representation

$$X_t = \sum_{k=0}^{\infty} \Gamma_k A\, \varepsilon_{t-k},$$

where A is of dimension n × n and satisfies $\varepsilon_t = A^{-1}u_t$.

The long-run (infinite) horizon impulse response can be expressed as follows:

$$C = \lim_{i \to \infty} \Gamma_i A.$$

Setting n(n − 1)/2 elements (i, j) of this matrix C equal to zero implies that the respective shock j has no influence on the level of variable i in the long run.
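As an illustration, this identification step can be sketched numerically: given reduced-form estimates of the coefficient matrices and Ω, the impact matrix A is chosen so that the long-run response matrix C is lower triangular. The coefficient and covariance values below are made up for the sketch, not estimates from the paper.

```python
import numpy as np

# Minimal sketch of long-run (Blanchard-Quah) identification for a VAR(p).
# All numbers are illustrative, not the paper's estimates.
n = 2
D = [np.array([[0.5, 0.1],
               [0.2, 0.3]])]          # reduced-form coefficient matrices D_1..D_p
Omega = np.array([[1.0, 0.3],
                  [0.3, 0.5]])        # Var(u), reduced-form error covariance

# Long-run multiplier of reduced-form errors: (I - D_1 - ... - D_p)^{-1}
B1 = np.linalg.inv(np.eye(n) - sum(D))

# Choose A so that C = B1 @ A is lower triangular: shock 2 then has no
# long-run effect on variable 1.  Take the Cholesky factor of the
# long-run covariance B1 @ Omega @ B1' and back out A.
C = np.linalg.cholesky(B1 @ Omega @ B1.T)   # lower-triangular long-run IRF
A = np.linalg.solve(B1, C)                  # impact matrix, u_t = A eps_t

assert np.allclose(A @ A.T, Omega)          # A reproduces Var(u)
print(np.round(B1 @ A, 3))                  # upper-right entry is ~0
```

The same recipe extends to the four-variable system used below, with the zero restriction placed on the long-run effect of non-technology shocks on productivity.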

We use an extended version of the SVAR estimated by Galí (1999), including growth rates of productivity and labor in addition to growth rates of investment and real wages. Namely, we use seasonally adjusted quarterly U.S. series for the period from 1960Q1 to 2008Q4, as provided by the Haver database. Real per capita investment corresponds to the sum of real capital formation and consumption of durable goods divided by the labor force. Real per capita output is computed as the sum of investment and consumption of services and non-durable goods divided by the labor force. Hours are total hours per quarter in the non-farm business sector divided by the labor force. Finally, real wages correspond to the average hourly wage divided by the deflator of the gross domestic product.

The impulse-response functions are reported in Figure 1. After a technology shock, output does not respond significantly on impact but increases permanently afterwards. Employment declines sharply on impact, but then rises in the longer run. These dynamic patterns are consistent with the findings of Galí (1999). In addition, the bottom panel of Figure 1 shows that investment increases on impact and then continues to rise toward its long-run steady state. Real wages increase immediately and display a moderate hump shape, with the maximum response occurring after two quarters.

Figure 1. SVAR IRFs following a technology shock

Citation: IMF Working Papers 2012, 211; 10.5089/9781475505610.001.A001

B. The benchmark RBC model

The economy consists of a representative household with an infinite planning horizon and a representative final good firm. The household offers labor, rents capital to the firms, and decides how much to invest in the capital stock given certain investment adjustment costs.2 The final good, which serves consumption and investment purposes, is produced by perfectly competitive firms using labor and capital as inputs.

1. Representative household’s and firm’s problems

At each period t, the representative household sells labor services, measured in hours worked, Lt, and rents the capital stock inherited from the previous period, Kt, to the perfectly competitive firms that produce final goods. Labor services are sold at the nominal wage rate, Wt, and Rt is the nominal rental rate of capital. As the owner of those firms, the household is also entitled to nominal dividend payments, Dt, if any. Labor, capital, and dividend income is then used to consume and to invest in physical capital. Formally, the representative household's optimization problem is:

$$\max_{\{C_t, L_t, K_{t+1}, I_t\}} E_0 \sum_{t=0}^{\infty} \beta^t \xi^b_t \left[\log(C_t) - \xi^l_t \frac{L_t^{1+\mu}}{1+\mu}\right],$$

subject to:

$$P_t C_t + P_t I_t \le W_t L_t + R_t K_t + D_t, \tag{1}$$
$$K_{t+1} = (1-\delta)K_t + \left[A_{I,t} - \Gamma\!\left(\frac{I_t}{I_{t-1}}\right)\right] I_t, \tag{2}$$

where Ct is consumption, It is real investment, Pt is the final good price index, $A_{I,t}$ is an investment-specific productivity shock, and Γ(⋅) is a cost incurred when investment changes over time. We restrict the investment adjustment cost function, Γ, to satisfy the following properties: Γ(1) = 0, Γ′(1) = 0, and χ = Γ″(1) > 0.

Parameters β ∈(0,1), μ > 0, δ ∈ (0, 1), and χ > 0 are the subjective discount factor, the inverse of the Frisch intertemporal elasticity of substitution in labor supply, the depreciation rate of capital, and the investment adjustment cost parameter, respectively.

The first-order conditions associated with the optimal choices of Ct, Lt, Kt+1, and It are respectively given by:

$$\lambda_t = \frac{\xi^b_t}{C_t}, \tag{3}$$
$$\lambda_t w_t = \xi^l_t L_t^{\mu}, \tag{4}$$
$$q_t\lambda_t = \beta E_t \lambda_{t+1}\left[r_{t+1} + (1-\delta) q_{t+1}\right], \tag{5}$$
$$\lambda_t = q_t \lambda_t\left[A_{I,t} - \Gamma\!\left(\frac{I_t}{I_{t-1}}\right) - \frac{I_t}{I_{t-1}}\Gamma'\!\left(\frac{I_t}{I_{t-1}}\right)\right] + \beta E_t q_{t+1}\lambda_{t+1}\left(\frac{I_{t+1}}{I_t}\right)^2 \Gamma'\!\left(\frac{I_{t+1}}{I_t}\right), \tag{6}$$

where λt is the nonnegative Lagrange multiplier associated with the budget constraint, rt = Rt/Pt, wt = Wt/Pt, and qt is Tobin's q.

Final good producers are perfectly competitive firms. The representative firm combines Kt units of capital, Lt units of labor, and aggregate technology, At, to produce Yt units of final goods according to a standard Cobb-Douglas production function

$$Y_t = K_t^{\alpha}(A_t L_t)^{1-\alpha}. \tag{7}$$

To be consistent with the SVAR model, we assume the technology to have a permanent impact on real variables. Namely, the technology follows the stochastic process

$$\log(A_t) = \log(A_{t-1}) + \varepsilon_{A,t}, \tag{8}$$

where εA,t is a normally distributed serially uncorrelated shock with zero mean and finite standard deviation σA.

The first-order conditions of the firm’s problem with respect to Kt and Lt are given by:

$$r_t = \alpha \frac{Y_t}{K_t}, \tag{9}$$
$$w_t = (1-\alpha)\frac{Y_t}{L_t}. \tag{10}$$

Finally, the resource constraint of the model is

$$Y_t = C_t + I_t\left[1 + \Gamma\!\left(\frac{I_t}{I_{t-1}}\right)\right]. \tag{11}$$

Since the technology shock, At, is permanent, we need to scale a selection of variables by it in order to solve for a non-stochastic steady state and compute the decision rules. Let $\tilde{Y}_t = Y_t/A_t$, $\tilde{C}_t = C_t/A_t$, $\tilde{I}_t = I_t/A_t$, $\tilde{K}_{t+1} = K_{t+1}/A_t$, $\tilde{w}_t = w_t/A_t$, and $\tilde{\lambda}_t = \lambda_t A_t$. Then, the model is solved by log-linearizing the resulting first-order conditions and market clearing conditions around the non-stochastic steady state.

2. Impulse-response functions

We calibrate the structural parameters of the model to values similar to those found in the literature. The baseline model is calibrated at a quarterly frequency. The subjective discount factor, β, is set to 0.985, which implies that the annual real interest rate is equal to 6 percent in the deterministic steady state. Capital share in production, α, has a standard value of 0.36, while the depreciation rate, δ, is chosen to be 0.025.

In order to give the model a better chance to match the observed impulse-response functions, we allow a set of key parameters, Θ, to be estimated using the minimum-distance (M-D) method. The advantage of this technique is that it focuses on only a subset of the structural shocks, namely the technology shock.

We define Ψ as the vector of targeted conditional moments. $\hat{\Theta}$ is the value that minimizes

$$\left[g(\Theta;\bar{\Psi})\right]' \bar{V}^{-1}\left[g(\Theta;\bar{\Psi})\right],$$

where g is a function that measures the distance between the empirical and theoretical impulse-response functions given a particular choice of Θ, and $\bar{V}$ is a diagonal matrix containing the variances of the conditional moments. This weighting scheme puts the most weight on the responses that are estimated with the most precision. The minimization is achieved by setting to zero the derivative of the latter expression with respect to Θ. Then, $\hat{\Theta}$ is the solution of the following system of non-linear equations:

$$\left\{\frac{\partial g(\Theta;\bar{\Psi})}{\partial \Theta}\bigg|_{\Theta=\hat{\Theta}}\right\}' \bar{V}^{-1}\left[g(\hat{\Theta};\bar{\Psi})\right] = 0.$$

In the following, the estimation is based on the information vector, Ψ, consisting of the first 10 quarters of the impulse-response functions of labor productivity, labor, investment, and real wages to a technology shock. Given the calibration above for some deep parameters, we estimate the three remaining parameters that are mostly responsible for the dynamic impulse-response functions following a one-standard-deviation technology shock: the inverse of the Frisch elasticity, μ, the investment adjustment cost parameter, χ, and the standard deviation of the technology shock, σA.
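The minimum-distance step can be sketched as a direct numerical minimization of the quadratic form above. The model-implied IRF below is a toy stand-in (in the paper it comes from the solved DSGE model), and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of the minimum-distance step: pick Theta to minimize
# g(Theta)' V^{-1} g(Theta), the weighted gap between model and SVAR IRFs.
horizons = 10
psi_bar = 0.5 * 0.8 ** np.arange(horizons)          # "empirical" IRF (illustrative)
V = np.diag(np.full(horizons, 0.01))                # variances of conditional moments

def irf_model(theta):
    level, decay = theta
    return level * decay ** np.arange(horizons)     # toy model-implied IRF

def objective(theta):
    g = irf_model(theta) - psi_bar                  # distance g(Theta; Psi_bar)
    return g @ np.linalg.solve(V, g)                # quadratic form with V^{-1}

res = minimize(objective, x0=np.array([0.1, 0.5]), method="Nelder-Mead")
print(res.x)   # recovers roughly (0.5, 0.8)
```

In practice `irf_model` would solve and log-linearize the model at each candidate Θ, and the diagonal of V would hold the SVAR response variances, so that precisely estimated responses receive more weight.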

The closest impulse-response functions to the empirical ones are obtained for values of the inverse of the Frisch elasticity, the investment adjustment cost parameter, and the standard deviation of technology shocks of 0.0758 (2.4177), 0.9766 (3.6546), and 0.0072 (0.0028), respectively.3 The corresponding impulse-response functions are reported in Figure 2. As expected, the standard model fails to deliver the contemporaneous negative response of total hours worked following a productivity shock. Note that the estimated labor supply elasticity is low; assuming a higher elasticity, the model deviates even further from the SVAR responses, particularly for total hours. As is standard in the literature, an increase in technology raises production and labor demand, yielding a surge in wages. If labor supply is flexible, we ought to see a significant increase in hours following a positive technology shock. Consequently, it is impossible to reconcile the RBC model with the data solely through a calibration exercise.

Figure 2. Impulse-response functions: SVAR versus the standard RBC model

Dashed line: SVAR. Solid line: Model.

III. Alternative models

As suggested by the literature, we identify six extensions of the standard real business cycle model capable of generating a negative response of hours following a positive technology shock. In particular, we consider the simplest versions of the sticky price model, the entry-exit with sunk cost model, the habit-formation model, the persistent permanent technology shocks model, the labor friction model, and the Leontief production model.

A. The sticky price (SP) model

Galí (1999) interpreted the negative response of hours following a positive technology shock as evidence in favor of sticky-price models. The intuition is straightforward: if nominal rigidities prevent prices from falling as much as they would under flexible prices, aggregate demand remains stable or increases only modestly, and firms can satisfy it with a smaller volume of inputs, which have become more productive. This result is sensitive to the monetary policy response to technology shocks. In particular, if the central bank fully accommodates technology shocks by sufficiently lowering interest rates, the fall in hours could be mitigated.

Formally, the final output, Yt, is defined as a composite of the differentiated finished goods, Yt(j), j ∈ (0, 1) denoting a type of finished good,

$$Y_t = \left(\int_0^1 Y_t(j)^{\frac{\sigma-1}{\sigma}}\,dj\right)^{\frac{\sigma}{\sigma-1}},$$

where σ is the elasticity of substitution between differentiated finished goods.

Given prices Pt and Pt(j), the finished-good-producing firm j maximizes its profits by choosing the production of finished goods, Yt(j). It solves the following problem

$$\max_{Y_t(j)}\; P_t\left(\int_0^1 Y_t(j)^{\frac{\sigma-1}{\sigma}}\,dj\right)^{\frac{\sigma}{\sigma-1}} - \int_0^1 P_t(j)Y_t(j)\,dj.$$

Profit maximization leads to the following first-order condition for the demand of finished good j

$$Y_t(j) = \left(\frac{P_t(j)}{P_t}\right)^{-\sigma} Y_t, \tag{12}$$

where the price index of finished goods is

$$P_t = \left(\int_0^1 P_t(j)^{1-\sigma}\,dj\right)^{\frac{1}{1-\sigma}}.$$

Producing a finished good j requires the use of labor, Lt (j), and capital, Kt(j). Firms utilize a constant returns-to-scale technology

$$Y_t(j) = A_t K_t(j)^{\alpha} L_t(j)^{1-\alpha}. \tag{13}$$

Firms are price-takers in the markets for inputs and monopolistic competitors in the markets for products. At each processing stage, nominal prices are chosen optimally in a randomly staggered fashion. At the beginning of each period, a fraction (1 – θ) of final-stage producers can change their prices.

The first-order conditions of this maximization problem for the finished-good-producing firm j are

$$w_t = (1-\alpha)(1-\phi)\zeta_t(j)\frac{Y_t(j)}{L_t(j)}, \tag{14}$$
$$r_t = \alpha(1-\phi)\zeta_t(j)\frac{Y_t(j)}{K_t(j)}, \tag{15}$$

where ζt(j) is firm j’s real marginal cost.

The first-order condition with respect to Pt(j) is

$$\tilde{P}_t(j) = \frac{\sigma}{\sigma-1}\,\frac{E_t \sum_{q=0}^{\infty}(\beta\theta)^q \frac{\lambda_{t+q}}{\lambda_t}\,\zeta_{t+q}(j)\,Y_{t+q}(j)}{E_t \sum_{q=0}^{\infty}(\beta\theta)^q \frac{\lambda_{t+q}}{\lambda_t}\,Y_{t+q}(j)\frac{1}{P_{t+q}}}. \tag{16}$$

At the symmetric equilibrium, the aggregate price of finished goods is

$$P_t = \left[\theta P_{t-1}^{1-\sigma} + (1-\theta)\tilde{P}_t^{1-\sigma}\right]^{\frac{1}{1-\sigma}}, \tag{17}$$

where P˜t is the optimal or average price of finished-good-producing firms allowed to change their prices at time t.

Finally, we assume that monetary policy follows a standard Taylor rule of the form:

$$R_t = \left(\frac{P_t}{P_{t-1}}\right)^{\rho_\pi} Y_t^{\rho_y}. \tag{18}$$

The M-D procedure yields a Calvo parameter for price-setting by intermediate goods firms equal to 0.8350 (0.1576). This means that prices last on average five to six quarters, in line with what other studies have found in stylized models with sticky prices.4 In addition, the closest theoretical impulse-response functions to the empirical ones are obtained for point estimates of the inverse of the Frisch elasticity, the investment adjustment cost, and the technology shock standard deviation equal to 1.5241 (2.0450), 0.9080 (2.0769), and 0.0081 (0.0018), respectively. Figure 3 shows that the model does a good job of matching the empirical impulse responses. In particular, the immediate response of labor is negative and remains below the zero line for about four quarters. Although the negative first-period response is generated solely by price inertia, its persistence is attributable to the slow adjustment of investment. Also worth noting is the non-accommodative behavior of the monetary authority following positive technology shocks: because interest rates do not fall in response to a positive output gap, employment is allowed to decline.
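The duration statement follows directly from the Calvo probability: with prices re-optimized with probability 1 − θ each quarter, the expected price duration is the mean of a geometric distribution, 1/(1 − θ). A one-line check with the estimated θ:

```python
# With Calvo probability theta of keeping the price each quarter, the
# expected price duration is sum_{k>=1} k (1-theta) theta^(k-1) = 1/(1-theta).
theta = 0.8350          # the paper's M-D point estimate
duration = 1.0 / (1.0 - theta)
print(round(duration, 2))   # about 6.06 quarters
```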

Figure 3. Impulse-response functions: SVAR versus the SP model

Dashed line: SVAR. Solid line: Model.

B. The entry-exit (EE) model

As suggested by Wang and Wen (2011), the entry-exit specification combined with time-to-build generates the observed drop in hours in the first period. More explicitly, time-to-build constrains firms to produce only one period after they enter the economy. This is crucial in terms of generating the desired response of labor following a positive technology shock. Namely, the short-term rigidity of aggregate output implies that production is predetermined and, to keep producing the same quantity, firms have to decrease demand for hours as a reaction to the shock.

The economy produces only one type of final good, yt. There are many identical final good producers in any period t, each producing only a fixed quantity of the final good; without loss of generality, this quantity is normalized to one. There is a fixed cost, Φ ∈ (0,1), to enter the final good sector. Entry and exit under perfect competition determine the total mass (number) of final good producers, Ωt, in each period. The intermediate input for producing yt is xt. Producing one unit of the final good requires a units of xt, where a is a constant that we normalize to one. Hence the production function is simply yt = xt. Let Pt and Ptx be the prices of the final product and the input, respectively. A final good producer's profit maximization problem is:

$$\max_{x_t}\; x_t - p^x_t x_t,$$

where $p^x_t = P^x_t/P_t$.

This yields the demand for input:

$$x_t = \begin{cases} 1 & \text{if } 1 \ge p^x_t; \\ 0 & \text{if } 1 < p^x_t. \end{cases}$$

Real profit in each period for each producer is given by:

$$D_t = \begin{cases} 1 - p^x_t & \text{if } 1 \ge p^x_t; \\ 0 & \text{if } 1 < p^x_t. \end{cases}$$

In each period the aggregate supply of output, Yt, is determined by the number of firms and is equal to $\int_0^{\Omega_t} y_t\,di = \Omega_t y_t$, and the aggregate demand for input is $\int_0^{\Omega_t} x_t\,di = \Omega_t x_t$.

In each period, there are potentially infinite entrants, which make the final good industry perfectly competitive. The one-time fixed entry cost, Φ, is paid in terms of the final good. After entry, each firm faces a stochastic probability of exit, ϑt ∈ (0, 1). We assume that firms must wait one period after entry before being able to start producing output due to time-to-build. The value of a firm in period t is then determined by:

$$V_t = \beta E_t \frac{\lambda_{t+1}}{\lambda_t} D_{t+1} + E_t \sum_{j=1}^{\infty} \beta^{j+1}\left[\prod_{i=1}^{j}(1-\vartheta_{t+i})\right]\frac{\lambda_{t+j+1}}{\lambda_t}\, D_{t+j+1}. \tag{19}$$

We can also write this equation recursively as

$$V_t = \beta E_t \frac{\lambda_{t+1}}{\lambda_t}\left[D_{t+1} + (1-\vartheta_{t+1})V_{t+1}\right]. \tag{20}$$

Free entry then implies Vt = Φ. The evolution of the number of final good producers is

$$\Omega_{t+1} = (1-\vartheta_t)\Omega_t + s_t, \tag{21}$$

where st is the number of new entrants in period t.

Profit maximization implies the following first order conditions

$$\frac{w_t}{p^x_t} = (1-\alpha)\frac{Y_t}{L_t}, \tag{22}$$

and

$$\frac{r_t}{p^x_t} = \alpha\frac{Y_t}{K_t}. \tag{23}$$

As in Wang and Wen (2011), to ensure stationarity of ϑt, we assume the probability of exit depends on the innovation of technology shocks, log(ϑt) = η log(εA,t).

As Figure 4 shows, although the model predicts the right sign of the labor and investment responses on impact, it cannot replicate the magnitude of most variables' responses in the short run. The M-D estimation procedure delivers the following values of the key parameters: μ = 0.3122 (0.5483), χ = 0.2191 (0.8649), η = –72.2791 (104.0056), and σA = 0.0054 (0.0020). By construction the model implies labor overshooting (by the opposite amount of the shock), since output is fixed during the period in which the shock occurs. Wang and Wen (2011) calibrate the elasticity of the business failure rate with respect to technology innovations, η, to a value of -6, far from the one obtained by the M-D procedure. It is worth noting that a highly negative η allows the model to generate a positive reaction of real wages following a neutral technology shock. The reason is as follows: the less likely failure is, the closer the relative price of inputs is to one, and the smaller are input providers' profits, Dt; hence, real wages can increase as labor productivity becomes high. Nevertheless, the model fails to account for the non-persistent response of real wages, which tends to overreact in the medium and long terms.

Figure 4. Impulse-response functions: SVAR versus the EE model

Dashed line: SVAR. Solid line: Model.

C. The habit in consumption (HC) model

As pointed out by Francis and Ramey (2005), habit formation in consumption, combined with costly adjustment of investment, is able to generate an initial decline in labor following a positive technology shock. The intuition is straightforward: households would like to smooth out a productivity shock by increasing investment, but since new investment is very costly to undertake, they instead choose to consume more leisure. In addition, the scale of the initial response of hours obviously depends on the Frisch labor supply elasticity.

More explicitly, the functional form of period utility is given by:

$$\log(C_t - \psi C_{t-1}) - \xi^l_t \frac{L_t^{1+\mu}}{1+\mu},$$

where the parameter ψ measures the degree of habit formation for a typical household. Therefore, the only equation of the baseline model that changes is the first-order condition with respect to consumption, which becomes

$$\lambda_t = \frac{1}{C_t - \psi C_{t-1}} - E_t\left(\frac{\beta\psi}{C_{t+1} - \psi C_t}\right). \tag{24}$$
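To see the mechanics of eq. (24), it helps to evaluate it at a constant consumption path, where it collapses to λ = (1 − βψ)/[(1 − ψ)C]. The short computation below uses β from the paper's calibration and the habit estimate reported later; C = 1 is an illustrative normalization.

```python
# Steady-state marginal utility implied by eq. (24): with C_t = C constant,
# lambda = 1/((1-psi)*C) - beta*psi/((1-psi)*C) = (1 - beta*psi)/((1-psi)*C).
# beta is the paper's calibration, psi its habit estimate; C = 1 is assumed.
beta, psi, C = 0.985, 0.9052, 1.0
lam = (1.0 - beta * psi) / ((1.0 - psi) * C)
print(round(lam, 3))   # 1.143, above the no-habit value 1/C = 1
```

With strong habit, current consumption is valued relative to the stock of past consumption, which raises marginal utility at a given level of C and makes households more willing to take the shock as leisure rather than output.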

Figure 5 shows that the model's fit of the impulse-response functions is quite good. In particular, the model responses of productivity, hours, and investment to a neutral technology shock lie very close to their counterparts from the estimated SVAR. The model has some difficulty in generating the mildly positive and persistent reaction of real wages observed in the data. Note that the results are obtained with the following point estimates of the essential parameters: μ = 0.1500 (5.9969), χ = 0.5207 (3.0233), ψ = 0.9052 (0.2195), and σA = 0.0062 (0.0024). Hence, an important degree of inertia in consumption is needed to yield the desired decline in hours. The magnitude of habit formation is consistent with the values of 0.80 (0.19) and 0.90 (1.83) reported by Fuhrer (2000); 0.63 (0.14) reported by Christiano, Eichenbaum, and Evans (2005); and 0.73 reported by Boldrin, Christiano, and Fisher (2001). Also, the capital adjustment cost parameter is in the same range as in Christiano, Eichenbaum, and Evans (2005), where it lies between the extreme values 0.91 and 3.24 depending on the version of the model.

Figure 5. Impulse-response functions: SVAR versus the HC model

Dashed line: SVAR. Solid line: Model.

D. The persistent technology shock (PT) model

Lindé (2009) shows that an alternative version of the RBC model can still account for the drop in hours following a positive permanent technology shock. Assuming permanent shocks is by itself insufficient; most importantly, persistence must be introduced into the residual of the technology process. The latter is crucial for generating the right short-run sign of the hours response. The underlying mechanism is that once the shock is persistent enough, households prefer to initially switch their resources toward consumption and leisure, owing to the higher productivity of labor in the medium term. At the same time, investment spending decreases in the first periods.

The structure of the model is very similar to the benchmark case except for the specification of the exogenous process of technology. Specifically, the technology shock follows the exogenous process

$$\log(\varepsilon_{A,t}) = \rho_{\varepsilon_A}\log(\varepsilon_{A,t-1}) + e_{A,t}, \tag{25}$$

where $\rho_{\varepsilon_A}$ is strictly bounded between –1 and 1 and the innovation $e_{A,t}$ is a normally distributed, serially uncorrelated shock with zero mean and finite standard deviation, $\sigma_{\varepsilon_A}$.
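To see why this persistence matters, note that combining eq. (25) with the permanent technology assumption makes the residual of technology growth an AR(1), so a one-time innovation keeps raising the level of technology for several quarters. A small sketch, using the paper's point estimate of the persistence parameter and an illustrative horizon:

```python
import numpy as np

# After a unit innovation e_{A,0} = 1, the growth residual decays at rate rho
# while the level of log technology keeps accumulating toward 1/(1-rho).
rho = 0.6436                              # the paper's point estimate
horizon = 12                              # illustrative
growth = rho ** np.arange(horizon)        # eps_{A,t} after a unit innovation
log_A = np.cumsum(growth)                 # cumulative (permanent) level effect
print(round(log_A[-1], 3))                # about 2.79; approaches 1/(1-rho) ≈ 2.81
```

It is this anticipated further rise in productivity that tilts households toward consumption and leisure in the first periods.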

In this model, the persistence of temporary technology shocks combined with the permanent technology assumption explains the decline of labor in the short term. The best fit of the impulse-response functions, shown in Figure 6, is obtained with point estimates of μ, χ, ρεA, and σεA equal to 0.2029 (1.0945), 0.0377 (0.3037), 0.6436 (0.2612), and 0.0039 (0.0037), respectively. This fit demonstrates that labor supply still needs to be very elastic in order to obtain a sufficient increase in leisure in the short run; otherwise, higher persistence in the technology shock is needed to help match the empirical responses.

Figure 6. Impulse-response functions: SVAR versus the PT model

Dashed line: SVAR. Solid line: Model.

E. The labor friction (LF) model

Mandelman and Zanetti (2010) show that the RBC model augmented with labor frictions, as initially suggested by Blanchard and Galí (2010), can overturn the positive reaction of employment to a technology shock. Introducing search and matching frictions by itself is not sufficient to generate the desired sign of the labor impulse response. In particular, the authors extend the functional form of the hiring cost to allow it to adjust to technology, with the idea that, in principle, hiring costs may be either pro- or counter-cyclical. The intuition behind the result can then be summarized as follows. In a labor market characterized by costly matching between workers and firms, the marginal disutility of supplying an additional unit of labor deviates in equilibrium from the marginal product of labor by the hiring cost the firm incurs when recruiting an extra worker. Furthermore, assuming that hiring costs co-move positively with productivity, a technology shock increases the marginal product of labor, as in the standard RBC model, but it also increases the cost of recruiting an extra worker. Hence, for a sufficiently positive elasticity of hiring costs with respect to technology, the hiring-cost effect can dominate, leading to a reduction in the marginal rate of transformation. Thus, employment can react negatively to a positive technology shock.

In this version of the model, the variable defining labor, Lt, corresponds to the fraction of household members who are employed, which is given by the sum of the number of workers who survive an exogenous separation and the number of new hires during the same period, Ht. Therefore, one can define the fraction of total number of employees during a period t by

$$L_t = (1-\delta_L)L_{t-1} + H_t, \tag{26}$$

where δL ∈ (0, 1) is an exogenous job destruction rate. Note that at the beginning of period t there is a pool of jobless individuals who are available for hire, and whose size, Ut, is defined as follows

$$U_t = 1 - (1-\delta_L)L_{t-1}. \tag{27}$$

Hiring labor is costly. Hiring costs are given by GtHt, where Gt represents the cost per hire, which is independent of Ht and taken as given by the representative firm. While Gt is taken as given by the firm, it is an increasing function of labor market tightness, defined as $x_t \equiv H_t/U_t$. Formally, we assume

$$G_t = A_t^{\gamma} B x_t^{k}, \tag{28}$$

where γ ∈ ℝ corresponds to the elasticity of hiring cost with respect to technology, k is the elasticity of hiring cost with respect to labor market tightness, and B is a positive constant scale parameter. The parameter k is assumed to be positive, meaning that the cost of hiring is an increasing function of the ratio of vacancies to unemployment.
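A single pass through eqs. (26)-(28) shows how the block fits together. In the sketch below, γ is the paper's point estimate reported later in this section, while the separation rate, employment level, and the scale and tightness elasticities are assumed values for illustration only.

```python
# Illustrative pass through eqs. (26)-(28): hires, the hiring pool,
# market tightness, and the per-hire cost.  Except for gamma (the paper's
# estimate), all parameter values here are assumptions for the sketch.
delta_L, B, k, gamma = 0.10, 0.1, 0.5, 11.7148
L_prev, A = 0.94, 1.0

L = 0.94                                  # suppose employment stays constant
H = L - (1 - delta_L) * L_prev            # hires needed, eq. (26)
U = 1 - (1 - delta_L) * L_prev            # pool of job seekers, eq. (27)
x = H / U                                 # labor market tightness x_t = H_t/U_t
G = A ** gamma * B * x ** k               # per-hire cost, eq. (28)
print(round(H, 3), round(U, 3), round(x, 3))   # 0.094 0.154 0.61
```

Because γ is large and positive, a rise in A raises G sharply (A**gamma), which is the channel through which the technology shock discourages hiring.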

The first order condition with respect to the choice of labor demand by firms becomes

$$w_t = (1-\alpha)\frac{Y_t}{L_t} - A_t^{\gamma}(1+k)Bx_t^{k} - \beta E_t\frac{\lambda_{t+1}}{\lambda_t}A_{t+1}^{\gamma}(1-\delta_L)Bx_{t+1}^{k}\left[\frac{k}{x_{t+1}} - (1-k)\right]. \tag{29}$$

Figure 7 confirms that the model does a very good job of matching the responses of labor productivity, hours, investment, and real wages: all responses lie very close to their counterparts in the data and within the confidence regions at most horizons. Of particular interest is the fact that hours decline persistently in the model. In addition, the model is able to generate a fairly reasonable response of real wages without the need for additional frictions such as nominal wage inertia. Fitting the impulse-response functions generated by the SVAR yields the following point estimates for the key deep parameters: μ = 0.2418 (1.0465), χ = 0.2174 (0.9477), γ = 11.7148 (7.1913), and σA = 0.0076 (0.0018). One should mention that Mandelman and Zanetti estimate a smaller co-movement between hiring costs and technology, of around 4, which is much lower than the point estimate delivered by the M-D procedure, although they still obtain a negative response of hours on impact of the same magnitude. The rationale is that, because our model includes capital, as opposed to Mandelman and Zanetti's model, capital increases following a positive neutral technology shock, which boosts labor productivity; a higher co-movement between hiring costs and technology is therefore necessary to still produce a decline in labor on impact.

Figure 7.

Impulse-response functions: SVAR versus the LF model

Citation: IMF Working Papers 2012, 211; 10.5089/9781475505610.001.A001

Dashed line: SVAR. Solid line: Model.

F. The Leontief production (LP) model

Francis and Ramey (2005) show that total hours may decrease following a labor productivity shock owing to a strong complementarity between capital and labor. Intuitively, once technology shocks are identified as labor-saving shocks and capital and labor are complements, a positive technology shock should be accompanied by greater demand for capital. Since the capital stock is predetermined, a decline in hours becomes necessary in the short term; this decline is then reabsorbed in subsequent periods.

The benchmark RBC model is kept exactly the same except that we use a general CES production function in which the productivity of labor is shocked. The functional form of the production function is as follows

$$Y_t = \left[\alpha K_t^{\varphi} + (1-\alpha)(A_t L_t)^{\varphi}\right]^{\frac{1}{\varphi}}, \qquad (30)$$

where φ ∈ (–∞, 1) determines the degree of substitutability of the inputs, with an elasticity of substitution equal to 1/(1 – φ). We introduce this general form to be able to derive special cases from it. In particular, when φ → 1 we obtain the perfect-substitutes production function; when φ → –∞ we obtain the Leontief production function, reflecting perfect complementarity between inputs; and when φ → 0 we recover the Cobb-Douglas production function.
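These limiting cases can be checked numerically. The sketch below verifies that the CES form approaches the Cobb-Douglas value as φ → 0 and the Leontief (minimum) value as φ → –∞; the input quantities and α are arbitrary illustrative values.

```python
# Illustrative check that the CES function in equation (30) nests the special
# cases discussed in the text: phi -> 0 gives Cobb-Douglas, phi -> -infinity
# gives Leontief. Input values and alpha are arbitrary.

def ces_output(K, AN, alpha=0.35, phi=-0.5):
    """CES aggregate Y = [alpha*K^phi + (1-alpha)*(A*L)^phi]^(1/phi)."""
    return (alpha * K**phi + (1 - alpha) * AN**phi) ** (1.0 / phi)

def cobb_douglas(K, AN, alpha=0.35):
    return K**alpha * AN**(1 - alpha)

K, AN = 2.0, 1.5
# Near phi = 0 the CES value approaches the Cobb-Douglas value ...
assert abs(ces_output(K, AN, phi=1e-6) - cobb_douglas(K, AN)) < 1e-4
# ... and for very negative phi it approaches min(K, A*L), the Leontief case.
assert abs(ces_output(K, AN, phi=-200.0) - min(K, AN)) < 1e-2
```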

The first order conditions relative to the optimal choice of capital and labor become

$$r_t = \alpha\left(\frac{Y_t}{K_t}\right)^{1-\varphi}, \qquad (31)$$
$$w_t = (1-\alpha)A_t^{\varphi}\left(\frac{Y_t}{L_t}\right)^{1-\varphi}. \qquad (32)$$

The firm's first-order conditions imply the following expression for relative factor payments,

$$\frac{F_K K_t}{F_L L_t} = \frac{r_t K_t}{w_t L_t} = \frac{\alpha}{1-\alpha}\left(\frac{K_t}{A_t L_t}\right)^{\varphi},$$

and, for the response of relative factor productivity to technology,

$$\frac{\partial}{\partial A_t}\left(\frac{F_K}{F_L}\right) = -\frac{\varphi}{A_t}\,\frac{F_K}{F_L}.$$

Notice that, following the increase in technology, the Leontief extreme case (φ tends to –∞) entails an exacerbated rise in capital productivity relative to labor productivity, pushing the firm to increase capital demand and to tighten labor demand.

During the M-D estimation procedure we include φ in the set of parameters to be estimated. This gives an additional degree of freedom to the model to fit the empirical impulse-response functions. Note, however, that φ per se will never converge to zero, the value corresponding to the benchmark Cobb-Douglas specification of the production function. In fact, the estimated value for φ is –29.1797 (7.1798), which indicates significant complementarity between inputs. The other estimated key parameters are: μ = 1.0628 (1.3621), χ = 20.6932 (14.4692), and σA = 0.0058 (0.0008). According to these point estimates, Figure 8 shows that the model performs well in matching the observed labor productivity and investment responses. On the other hand, although the signs of the responses are accurately generated, hours worked tend to exhibit a high degree of persistence, as opposed to the observed response.

Figure 8.

Impulse-response functions: SVAR versus the LP model


Dashed line: SVAR. Solid line: Model.

IV. Full information estimation and model comparison

The models are estimated using Bayesian techniques that update prior distributions for the deep parameters, which are defined according to a reasonable calibration. The estimation is done using recursive simulation methods, more specifically the Metropolis-Hastings algorithm, which has been applied to estimate similar dynamic stochastic general-equilibrium models in the literature, such as Schorfheide (2000) and Smets and Wouters (2003). Let YT be a set of observable data while θ denotes the set of parameters to be estimated. Once the model is log-linearized and solved, its state-space representation can be derived and the likelihood function, L(θ | YT), can be evaluated using the Kalman filter. The Bayesian approach places a prior distribution p(θ) on parameters and updates the prior through the likelihood function.

Bayes’ Theorem provides the posterior distribution of θ:

$$p(\theta \mid Y^T) = \frac{L(\theta \mid Y^T)\,p(\theta)}{\int L(\theta \mid Y^T)\,p(\theta)\,d\theta}.$$

Markov Chain Monte Carlo methods are used to generate the draws from the posterior distribution. Based on the posterior draws, we can make inference on the parameters. The marginal data density, which assesses the overall fit of the model, is given by:5

$$p(Y^T) = \int L(\theta \mid Y^T)\,p(\theta)\,d\theta.$$
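For intuition, the Metropolis-Hastings step described above can be sketched in a few lines. The target below is a toy standard-normal log density standing in for log L(θ | Y^T) + log p(θ); nothing here reproduces the paper's actual DSGE posterior or the Kalman-filter likelihood evaluation.

```python
import math
import random

# Stylized random-walk Metropolis-Hastings sampler for p(theta | Y^T)
# proportional to L(theta | Y^T) * p(theta). The log posterior below is a
# toy standard-normal density, used only to illustrate the mechanics.

def log_posterior(theta):
    return -0.5 * theta**2  # stand-in for log-likelihood + log-prior

def metropolis_hastings(n_draws=20000, step=1.0, seed=0):
    rng = random.Random(seed)
    theta, lp = 0.0, log_posterior(0.0)
    draws = []
    for _ in range(n_draws):
        proposal = theta + rng.gauss(0.0, step)    # symmetric random-walk proposal
        lp_prop = log_posterior(proposal)
        if math.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            theta, lp = proposal, lp_prop
        draws.append(theta)                        # rejected steps repeat the draw
    return draws

draws = metropolis_hastings()
mean = sum(draws) / len(draws)  # should be near the true posterior mean of zero
```

In practice the chain's early draws are discarded as burn-in and convergence is monitored before posterior moments are reported.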

To identify the shock processes during the estimation, the number of observed series used can be at most equal to the number of shocks. The four shocks in the different versions of the model are: a preference shock, a labor supply shock, a neutral technology shock, and an investment-specific technology shock. In addition, measurement errors are added to each of the observable variables.

A. Priors and data

We consider the same variables used in the SVAR estimation (i.e., growth rates of the real per capita output, the real per capita investment, per capita hours worked, and real wages) as observed variables. We use seasonally adjusted quarterly series for the U.S. data for the period extending from 1960Q1 until 2008Q4.

A first attempt to estimate the model showed that the estimation procedure was unable to provide plausible estimates for some structural parameters. As in other similar studies, we calibrated these parameters in order to match important stylized facts in the data. The capital depreciation parameter δ is set at 0.025 to match an average annual rate of capital destruction of 10 percent. The weight of leisure in utility, ξl, is adjusted in each iteration so that the fraction of hours worked in the deterministic steady state is equal to 0.25. The remaining parameters are estimated. We use the Beta distribution for parameters that take sensible values between zero and one, the gamma distribution for coefficients restricted to be positive, and the inverse-gamma distribution for the shock standard deviations. The prior means of the discount factor and the inverse of the Frisch elasticity are set equal to 0.985 and 0.75, respectively. The latter reconciles the real business cycle literature, which often models a relatively high Frisch elasticity (e.g., King, Plosser, and Rebelo, 1988; Prescott, 1986), with some micro-data based studies that argue for small values (e.g., Chetty, 2009; Pistaferri, 2003). As commonly assumed in the literature, the share of capital in the production function has a prior mean of 0.35 with a standard deviation of 0.10.

We assume all the autoregressive parameters are Beta distributed, with mean 0.5 and standard deviation 0.15. Finally, we assume the standard deviations of all shocks have an inverse-gamma distribution with a mean of 0.01 and a standard deviation equal to 4.
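For concreteness, translating a prior mean and standard deviation into Beta shape parameters is a standard moment-matching exercise. The sketch below applies it to the mean-0.5, standard-deviation-0.15 prior assumed for the autoregressive parameters; the formulas follow directly from the Beta distribution's first two moments.

```python
# Moment-matching a Beta(a, b) prior to a given mean and standard deviation,
# e.g. the mean-0.5, std-0.15 prior on the autoregressive parameters.

def beta_params(mean, std):
    """Return Beta shape parameters (a, b) with the given mean and std.

    Uses E[x] = a/(a+b) and Var[x] = mean*(1-mean)/(a+b+1).
    """
    nu = mean * (1 - mean) / std**2 - 1  # nu = a + b
    return mean * nu, (1 - mean) * nu

a, b = beta_params(0.5, 0.15)

# Round-trip check: the implied moments recover the targets.
m = a / (a + b)
v = a * b / ((a + b) ** 2 * (a + b + 1))
```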

Table 1 also reports the priors of the model-specific parameters. The median of the price stickiness parameter in the SP model is set so that the average length between price adjustments is two quarters, consistent with the findings in the microeconometric studies (e.g., Bils and Klenow, 2004), but the standard error allows for variation between less than one quarter and more than a year. The prior on the mean of the coefficient in the monetary policy reaction function is standard: a relatively high long-term coefficient on inflation of 1.5 helps to guarantee a unique solution path when solving the model.6

Table 1.

Prior distributions of parameters

article image

Considering the model with entry and exit with time to build (the EE model), we center the priors at the values calibrated in Wang and Wen (2011). In particular, they set the fixed cost of entry, Φ, to 0.1 (which implies that the steady-state share of entry costs in GDP is 0.1 × θ). In addition, the authors argue that, based on U.S. data, a 1 percent increase in aggregate technology reduces the business failure rate by about 6 percent in the long run; hence, we set the prior mean of η to –6. This negative elasticity implies that a positive neutral technology shock reduces the probability of exit owing to improved production efficiency. Finally, we adopt an average business failure rate, ϑ, of 0.1, as suggested by Ghironi and Melitz (2005) and Wang and Wen (2011).

The only additional parameter in the HC model turns out to be the degree of habit formation. The average prior of the parameter ψ is assumed to be equal to 0.6, which is consistent with the estimations reported by Christiano, Eichenbaum, and Evans (2005), among many others.

Similarly to Lindé (2009), we choose a prior mean of 0.50 for the autocorrelation of the temporary innovation in technology as specified in the PT model, while the prior distribution of the technology shock's standard deviation is kept similar to that of the other shocks in the model.

In the LF model we simply adopt the same priors considered in Mandelman and Zanetti (2010). In particular, we assume a prior for the parameter k loosely centered around 1, which also corresponds to the calibrated value in Blanchard and Galí (2010). When setting the prior mean of the elasticity of hiring costs with respect to technology shocks, γ, we assume a positive value, as reported in the estimation by Mandelman and Zanetti. Hence, we center the prior distribution of γ around 4, but we allow for a wide range of possible values with a standard deviation equal to 2.

Finally, the degree of substitutability of the inputs in the CES production function of the LP model, φ, is assumed to take a value that converges to –∞, which corresponds to the Leontief specification. Therefore, we end up estimating the same set of parameters as in the baseline scenario.

B. Estimation results and model comparison

We summarize the posterior distribution of the parameters in Table 2, where we report the median of each parameter as well as its 5th and 95th percentile values. Regarding the common parameters, several remarks are worth noting. Depending on the model of interest, the parameter governing investment adjustment costs, χ, has a posterior median ranging from 0.0765 to 4.9865, in the (EE) and (LP) models, respectively. Although the estimate of this parameter appears to be affected by the choice of the model, it is still consistent with most of the DSGE literature, which finds values of this parameter ranging anywhere from slightly above 0 (e.g., Sims, 2011) to close to 10 (e.g., Fernández-Villaverde, 2010).7 The parameter μ tends to converge to median values below the prior mean, implying a relatively high Frisch elasticity, although one that still lies within the range of micro- and macro-econometric findings. As calibrated in many RBC models in the literature, there is strong persistence in the stationary components of neutral technology for all the models except the (PT) model, where technology shocks are assumed to follow an AR(1) process. The estimated persistence of the investment-specific shock is relatively weak except in the model exhibiting trending technology.

Table 2.

Parameter Estimation Results

article image

The posterior median of the Calvo parameter for price adjustment, θ, is 0.8948 (a six-quarter pricing cycle, on average), which is in line with the findings of Smets and Wouters (2003) and Christiano, Eichenbaum, and Evans (2005). The parameters of the Taylor rule are of standard levels as extensively reported in the literature, the only significant difference here being that the Fed is less responsive to output gaps. Further, parameters specific to the EE model point to higher entry costs, and the estimation procedure seems to lack information to capture the steady-state level of the probability of failure; the data set is clearly not sufficient to deliver different prior and posterior distributions. Looking at the HC model, one can notice that the data push the parameter capturing the degree of habit formation away from the prior, toward a posterior median of 0.8348. Assuming persistence in the permanent technology shock (the PT model), the estimate of the median autocorrelation of the transitory component is 0.6515, which lies within the results reported by Lindé (2009) and Sims (2011). On the other hand, the elasticities characterizing the propagation mechanism in the labor friction model (the LF model) differ from the estimates found in Mandelman and Zanetti (2010), although the same estimation methodology is adopted. Namely, the data push the parameter k (γ) toward lower (higher) levels than the prior value, which may be linked to the introduction of capital into the model.

From the results in Table 2, we see that the model embedding habit formation and investment adjustment costs is preferable under the assumed priors. The Bayes factor is largely in favor of this model regardless of the alternative specification. The marginal data density of the SP model, ranked second, is 10.66 log-points smaller, which translates into a posterior odds ratio largely in favor of the HC model. Then comes the LF model as the third-best specification, followed by the PT model and then by the LP model. The fit of the EE model, however, is much worse than that of the other models, as reflected by their posterior probabilities. Indeed, extending the standard RBC model with an entry-exit framework deteriorates its forecasting performance, as reflected in a significantly lower marginal posterior likelihood.
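To make the reported gap concrete: under equal prior model probabilities, the posterior odds ratio equals the Bayes factor, i.e., the exponential of the difference in log marginal data densities. A 10.66 log-point gap therefore implies odds of tens of thousands to one.

```python
import math

# Under equal prior model probabilities, posterior odds = Bayes factor
# = exp(difference in log marginal data densities). The 10.66 figure is
# the HC-versus-SP gap reported in the text.

log_ml_gap = 10.66                 # log p(Y | HC) - log p(Y | SP)
bayes_factor = math.exp(log_ml_gap)
# bayes_factor is roughly 4.3e4, i.e., overwhelming evidence for the HC model
```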

This provides strong empirical evidence supporting the importance of habit formation and investment adjustment costs in comparison with the alternative assumptions.

C. Impulse-response functions

For each extension of the RBC model, and based on the corresponding estimated parameters, Figure 9 reports the impulse-response functions (median, 5th, and 95th percentiles) following a 1 percent unexpected positive technology shock. As expected, the response of labor is negative on impact and gradually switches sign, in accordance with the estimated degree of persistence in each alternative model. Note, however, that with persistent technology shocks the model delivers a short-term positive reaction of labor. This is consistent with the fact that introducing persistence by itself is not sufficient to generate the expected initial negative reaction of labor to a positive technology shock. For reasons already explained, combining an I(1) process for technology with shock persistence helps to obtain the conditional correlation between productivity and labor. Recall that the estimated persistence of technology shocks, ρεA, is equal to 0.6436 in the limited information estimation, which is similar to the full information estimate (0.6515). On the other hand, the investment adjustment cost is relatively high following the Bayesian estimation, which produces a lower adjustment of investment in the short run. This prevents households from sufficiently increasing their consumption and leisure on impact, and hours worked decline marginally.

Figure 9.

IRFs of the Alternative Estimated Models


Solid line: IRFs. Shaded: 90 per cent confidence interval.

The same is observed for the hours impulse-response function in the case of the PT model, although to a lesser extent. In particular, hours mildly decline on impact, followed by a switch in sign. The rationale for this result is that the estimated monetary reaction function exhibits a significantly positive output-gap coefficient. The monetary authority's reaction to a positive technology shock therefore leads to higher interest rates, which dampen the impulse-response functions.

Finally, the other models generate the desired reaction of hours worked in addition to a reasonable persistence in the fluctuations of the endogenous variables, at least visually.

D. Autocorrelation functions

Figure 10 shows the autocorrelations of the four observable variables. All the alternative models perform successfully in generating the positive autocorrelations of output observed in the data, except the benchmark and (LF) models. As suggested by Cogley and Nason (1995), the standard model framework lacks endogenous propagation mechanisms and misses most variables' short-run dynamics. Surprisingly, adding labor frictions fails to generate additional persistence. If we look closely at the LF model autocorrelations, we find that output, labor, investment, and real wages display negative autocorrelations, as opposed to the observed ones. All the other models perform successfully, at least in generating the significantly positive intertemporal autocorrelation of real investment observed in the data, suggesting that assuming adjustment costs helps to capture this dimension. However, as Figure 10 confirms, the main shortcoming of the different models, including the preferred ones, is the difficulty of replicating the autocorrelations of real wages observed in the data. This is not a very surprising result because no labor market frictions are assumed in most of the models under investigation.

Figure 10.

Autocorrelations of the Alternative Models


Solid line: observed autocorrelations. Dashed line: model autocorrelations. Shaded: 90 per cent confidence interval.

More formally, we adopt measures of the deviations of the alternative models, Mi, vis-à-vis the Bayesian SVAR, M0, as suggested by Chang, Gomes, and Schorfheide (2002). In particular, we use the structural Bayes estimates for a set of moments to calculate the expected loss for each version of the model. Let ρ denote the population characteristics conditional on the SVAR. For each structural model Mi, we examine the expected loss associated with the deviation of the model's predictions, ρ̂i, from the posterior distribution of ρ. The posterior expected loss for the autocorrelations corresponds to

$$R(\hat{\rho}_i \mid Y, M_0) = \int L(\rho, \hat{\rho}_i)\,p(\rho \mid Y, M_0)\,d\rho,$$

where L(ρ, ρ̂i) is the loss function. The authors propose two alternative loss functions. The first is a quadratic loss function, Lq. The second, Lχ2, penalizes predictions that lie far in the tails of the posterior distribution.8 Table 3 shows the results and confirms those reported in Figure 10. The posterior expected Lq loss for the output autocorrelations ranges from 0.0041 (in the HC model) to 0.0304 (in the PT model), consistent with the overall ranking based on the marginal likelihood values for each version of the model. In addition to the quadratic losses, we report the expected Lχ2 losses. A value close to one indicates that the model prediction lies far in the tails of the posterior density obtained from the SVAR. Again, combining habit in consumption with investment adjustment costs helps to dramatically reduce this statistic, from 0.9975 in the benchmark model to only 0.5200. Note also that, based on the joint Lq statistic, the HC model does quite well in matching the dynamics of labor and investment, although it is sometimes slightly outperformed by other models. Nevertheless, the relative ranking of the models is not affected.
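The quadratic expected loss can be sketched as a Monte Carlo average over SVAR posterior draws. The draws below are synthetic placeholders generated for illustration; nothing here uses the paper's actual SVAR output.

```python
import random

# Sketch of the quadratic posterior expected loss R(rho_hat | Y, M0): the mean
# squared deviation of a model's predicted moment rho_hat from draws of rho
# under the SVAR posterior. The draws below are synthetic placeholders.

def expected_quadratic_loss(rho_hat, svar_draws):
    return sum((rho - rho_hat) ** 2 for rho in svar_draws) / len(svar_draws)

rng = random.Random(1)
# Toy posterior for a single autocorrelation, centered at 0.40:
svar_draws = [rng.gauss(0.40, 0.07) for _ in range(5000)]

# A prediction near the posterior mean incurs a smaller expected loss than a
# distant one, which is what the Lq rankings in Table 3 capture:
assert expected_quadratic_loss(0.40, svar_draws) < expected_quadratic_loss(0.10, svar_draws)
```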

Table 3.

Autocorrelation statistics

article image
The posterior expected losses are calculated based on the distribution of the SVAR first four autocorrelation of the four observable variables. In particular, the autocorrelations of output growth rates for the lags 1 to 4 lie in the 95–percent intervals [0.2769,0.5424], [0.1123,0.3952], [–0.0039,0.2600], and [–0.0444,0.1978], respectively. Those of labor growth rates are in [0.2463,0.4996], [–0.0193,0.2718], [–0.0681,0.1995], and [–0.0688,0.1480]. Those of investment growth rates are in [0.2319,0.5120], [0.1446,0.4271], [–0.0023,0.2583], and [–0.0307,0.1971]. Finally, those of the real wages growth are in [0.3495,0.6388], [0.2078,0.5293], [0.0859,0.3998], and [0.0316,0.3070].

The right-most column of Figure 10 reports the autocorrelations of real wage inflation predicted by the different models. One can conclude that most models, including those that dominate in terms of fit to the data, fail to generate the significantly positive autocorrelation of real wage inflation. The only two exceptions are the (PT) and (LP) models.9 This is consistent with the results of the limited information estimation, which suggest that most models fail to capture the dynamics of real wage inflation following a technology shock. In this sense, Ambler, Guay, and Phaneuf (2012) argue that a model with staggered wage setting along with costly labor adjustment is able to explain the observed patterns of real wage inflation autocorrelations. As a robustness test of the relative performance of the alternative models, the following section introduces wage stickiness into each model considered in the paper.

V. Robustness

We now discuss the sensitivity of our results to changes in the assumptions underlying the baseline models. More specifically, we extend each model by allowing for nominal wage stickiness arising from Calvo-type staggered contracts. In this setting, each household is the monopolistic supplier of a differentiated type of labor input, and equilibrium effort intensity varies across households. This is expected to improve the dynamics of wage inflation in the different specifications of the competing models. We then re-estimate the set of structural parameters of each model, augmented by the parameter θw, which captures the probability of keeping wages unchanged at the beginning of each quarter.

Table 4 reveals that the two models with habit formation in consumption and with sticky prices clearly outperform their counterparts, although it is now hard to unambiguously discriminate between these two versions of the model. In particular, the posterior odds ratio comparing the SP model with the HC model is equal to 3.29.10 The results also suggest that incorporating staggered wage contracts à la Calvo offers significant improvements in fit to U.S. data for the SP and EE models. The differences in the log marginal likelihood are less important when the HC and PT models are considered. However, for the alternative models (the LF and LP models), wage stickiness tends to imply a deterioration in replicating the data characteristics.

Table 4.

Estimation results with sticky wages

article image
We use the same prior distribution for the degree of wage stickiness as the one for the degree of price rigidity. Namely, we consider a Beta distribution with mean 0.50 and standard deviation 0.15.

Looking at the moments generated by the SP and HC models, as reported in Figure 11, helps in understanding the result of the odds ratio test. In particular, the SP model's marginal likelihood drastically improves following the introduction of sticky wages because real wages become more persistent, as observed in the data. The same happens in the HC model, but at the cost of deteriorating the autocorrelations of output and investment (see the dotted lines with diamonds in the lower panel of Figure 11). Hence, the full information estimation procedure pushes the parameter capturing the degree of wage rigidity toward zero under the HC model.11

Figure 11.

Autocorrelations: SP versus HC model


Solid line: SVAR. Dashed line with circles: Estimated parameters. Dotted line with diamonds: θw = 0.50.

VI. Conclusion

In recent years, many macroeconomists have been drawn to the fact that the short-term correlation between output and labor is negative as the economy responds to exogenous variations in technology. This result turns out to be consistent with a class of models embodying alternative hypotheses, all of which seem reasonable a priori. In that regard, a survey of the literature reveals six successful classes of DSGE models, which encompass sticky prices; firm entry and exit with time to build; the combination of habit in consumption and investment adjustment costs; permanent technology shocks; labor market frictions with hiring costs; and the Leontief production function with labor-saving technology shocks.

In order to discriminate between these competing models, we assess each model's fit relative to that of its rivals. The model favored in the space of competing models is the one that exhibits habit formation in consumption and investment adjustment costs. This model markedly succeeds in capturing the important dynamics in the data while correctly predicting the impulse-response functions of the endogenous variables. Furthermore, when sticky wages are added to the models' specification, the main results remain broadly unchanged; however, it becomes impossible to unambiguously discriminate between the sticky price and the habit formation models.

References

  • Altig, D., L. J. Christiano, M. Eichenbaum, and J. Lindé, 2011, “Firm-specific capital, nominal rigidities and the business cycle,” Review of Economic Dynamics, Vol. 14, pp. 225–247.

  • Ambler, S., A. Guay, and L. Phaneuf, 2012, “Endogenous business cycle propagation and the persistence problem: The role of labor-market frictions,” Journal of Economic Dynamics and Control, Vol. 36, pp. 47–62.

  • Bils, M., and P. J. Klenow, 2004, “Some Evidence on the Importance of Sticky Prices,” Journal of Political Economy, Vol. 112, pp. 947–985.

  • Blanchard, O. J., and J. Galí, 2010, “Labor Markets and Monetary Policy: A New Keynesian Model with Unemployment,” American Economic Journal: Macroeconomics, Vol. 2, pp. 1–30.

  • Blanchard, O. J., and D. Quah, 1989, “The dynamic effects of aggregate demand and supply disturbances,” American Economic Review, Vol. 79, pp. 655–673.

  • Boldrin, M., L. J. Christiano, and J. Fisher, 2001, “Habit Persistence, Asset Returns and the Business Cycle,” American Economic Review, Vol. 91, pp. 149–166.

  • Carlstrom, C. T., and T. S. Fuerst, 1997, “Agency costs, net worth, and business fluctuations: A computable general equilibrium analysis,” American Economic Review, Vol. 87, pp. 893–910.

  • Chang, Y., J. F. Gomes, and F. Schorfheide, 2002, “Learning-By-Doing as a Propagation Mechanism,” American Economic Review, Vol. 92, pp. 1498–1520.

  • Chetty, R., 2009, “Bounds on Elasticities with Optimization Frictions: A Synthesis of Micro and Macro Evidence on Labor Supply,” NBER Working Paper No. 15616.

  • Christiano, L. J., M. Eichenbaum, and C. L. Evans, 2005, “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy,” Journal of Political Economy, Vol. 113, pp. 1–45.

  • Christiano, L. J., M. Eichenbaum, and R. Vigfusson, 2004, “What happens after a technology shock?” Mimeo, Northwestern University.

  • Cogley, T., and J. M. Nason, 1995, “Output dynamics in real-business-cycle models,” American Economic Review, Vol. 85, pp. 492–511.

  • Fernald, J., 2007, “Trend Breaks, Long-Run Restrictions, and Contractionary Technology Improvements,” Journal of Monetary Economics, Vol. 54, pp. 2467–2485.

  • Fernández-Villaverde, J., 2010, “The econometrics of DSGE models,” SERIEs: Journal of the Spanish Economic Association, Vol. 1, pp. 3–49.

  • Fisher, J. D. M., 2006, “The dynamic effects of neutral and investment-specific technology shocks,” Journal of Political Economy, Vol. 114, pp. 413–451.

  • Francis, N., and V. A. Ramey, 2005, “Is the Technology-driven Real Business Cycle Hypothesis Dead? Shocks and Aggregate Fluctuations Revisited,” Journal of Monetary Economics, Vol. 52, pp. 1379–1399.

  • Fuhrer, J., 2000, “Habit Formation in Consumption and its Implications for Monetary-Policy Models,” American Economic Review, Vol. 90, pp. 367–390.

  • Galí, J., 1999, “Technology, employment, and the business cycle: Do technology shocks explain aggregate fluctuations?” American Economic Review, Vol. 89, pp. 249–271.

  • Geweke, J. F., 1999, “Using Simulation Methods for Bayesian Econometric Models: Inference, Development and Communication,” Econometric Reviews, Vol. 18, pp. 1–126.

  • Ghironi, F., and M. Melitz, 2005, “International Trade and Macroeconomic Dynamics with Heterogeneous Firms,” Quarterly Journal of Economics, Vol. 120, pp. 865–915.

  • Greenwood, J., Z. Hercowitz, and P. Krusell, 2000, “The role of investment-specific technological change in the business cycle,” European Economic Review, Vol. 44, pp. 91–115.

  • Jeffreys, H., 1961, Theory of Probability (Oxford: Oxford University Press).

  • King, R. G., C. I. Plosser, and S. T. Rebelo, 1988, “Production, growth and business cycles: I. The basic neoclassical model,” Journal of Monetary Economics, Vol. 21, pp. 195–232.

  • Lindé, J., 2009, “The effects of permanent technology shocks on hours: Can the RBC-model fit the VAR evidence?” Journal of Economic Dynamics and Control, Vol. 33, pp. 597–613.

  • Liu, Z., and L. Phaneuf, 2007, “Technology Shocks and Labor Market Dynamics: Some Evidence and Theory,” Journal of Monetary Economics, Vol. 54, pp. 2534–2553.

  • Mandelman, F. S., and F. Zanetti, 2010, “Technology shocks, employment and labour market frictions,” Bank of England Working Paper No. 390.

  • Pistaferri, L., 2003, “Anticipated and Unanticipated Wage Changes, Wage Risk, and Intertemporal Labor Supply,” Journal of Labor Economics, Vol. 21, pp. 729–754.

  • Prescott, E. C., 1986, “Theory Ahead of Business Cycle Measurement,” Carnegie-Rochester Conference Series on Public Policy, Vol. 25, pp. 11–44.

  • Schorfheide, F., 2000, “Loss Function-Based Evaluation of DSGE Models,” Journal of Applied Econometrics, Vol. 15, pp. 645–670.

  • Sims, E. R., 2011, “Permanent and Transitory Technology Shocks and the Behavior of Hours: A Challenge for DSGE Models,” University of Notre Dame, manuscript.

  • Smets, F., and R. Wouters, 2003, “An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area,” Journal of the European Economic Association, Vol. 1, pp. 1123–1175.

  • Wang, P., and Y. Wen, 2011, “Understanding the Effects of Technology Shocks,” Review of Economic Dynamics, Vol. 14, pp. 705–724.

  • Whelan, K. T., 2009, “Technology Shocks and Hours Worked: Checking for Robust Conclusions,” Journal of Macroeconomics, Vol. 31, pp. 231–239.

The author wishes to thank Hafedh Bouakez, Roberto Fattal, Gaston Gelos, Dalia Hakura, Mohamad Elhage, Mico Loretan, Prakash Loungani, and Louis Phaneuf for helpful comments.

1

Whether hours rise or fall following a positive technology shock depends on whether hours enter the SVAR in log-levels or log-differences. However, Fernald (2007) shows that if one allows for plausible trend breaks in labor productivity, then hours worked fall on impact of a positive technology shock, regardless of whether hours are measured in differences or in levels.

2

The rationale for assuming costly investment adjustment in the model is twofold. First, it generates plausible investment dynamics, as suggested by Carlstrom and Fuerst (1997). Second, it justifies the introduction of an additional shock to investment technology, which appears to be important for business cycles, as suggested by Greenwood, Hercowitz, and Krusell (2000) and Fisher (2006).

3

Numbers in parentheses correspond to standard deviations.

4

Although we do not estimate the parameter capturing the degree of inflation stabilization in the Taylor rule, ρπ, the results are robust to values ranging from 1 to 2 (the point estimates are reported for ρπ = 1.5 and ρy = 0).

5

The marginal data densities are approximated using the harmonic mean estimator proposed by Geweke (1999).
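The logic of the harmonic mean estimator can be illustrated with a minimal sketch. This is not the paper's estimation code; it is a toy implementation, where in place of Geweke's truncated multivariate normal weighting density we use a simple one-dimensional normal so that the true normalizing constant is known in closed form.

```python
import math
import random

def log_marginal_harmonic_mean(draws, log_kernel, log_weight):
    """Modified harmonic mean estimator (Geweke, 1999):
    p(y) ~= [ (1/N) * sum_i f(theta_i) / kernel(theta_i) ]^{-1},
    where f is a weighting density (here a plain normal pdf; Geweke
    uses a truncated multivariate normal) and kernel is the
    unnormalized posterior. Returns an estimate of log p(y)."""
    terms = [log_weight(th) - log_kernel(th) for th in draws]
    # log-sum-exp for numerical stability
    m = max(terms)
    log_mean = m + math.log(sum(math.exp(t - m) for t in terms) / len(terms))
    return -log_mean

# Toy check: kernel exp(-x^2/2) has normalizing constant sqrt(2*pi),
# so the estimator should recover log sqrt(2*pi) ~= 0.9189.
random.seed(0)
draws = [random.gauss(0.0, 1.0) for _ in range(5000)]
log_kernel = lambda x: -0.5 * x * x
log_weight = lambda x: -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)  # N(0,1) pdf
est = log_marginal_harmonic_mean(draws, log_kernel, log_weight)
```

With this particular weighting density the ratio f/kernel is constant, so the estimate is exact; in realistic DSGE applications the weighting density only approximates the posterior and the estimator carries simulation noise.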

6

In addition, the elasticity of substitution between intermediate goods, σ, is set to 8, implying a markup of 14 percent in the deterministic steady state.
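The implied markup follows from the standard CES pricing formula, gross markup μ = σ/(σ − 1); a quick arithmetic check:

```python
# Net steady-state markup (percent) implied by the elasticity of
# substitution sigma under CES monopolistic competition: mu = sigma/(sigma-1)
sigma = 8.0
net_markup_pct = (sigma / (sigma - 1.0) - 1.0) * 100.0
print(round(net_markup_pct))  # prints 14
```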

7

This result could also be related to the set of structural shocks operating in the model. Altig and others (2011) show that in the absence of monetary shocks, the parameter governing investment adjustment costs tends to be estimated relatively low. This pattern seems to emerge in the present estimation exercise as well.

8

Chang, Gomes, and Schorfheide (2002) provide a detailed procedure for calculating the two loss functions Lq and Lχ2.

9

The model with labor friction does well in matching the first-order autocorrelation of real wage growth; however, it fails to generate the significantly positive autocorrelations at higher orders.

10

Assuming πi,T is the marginal data likelihood of model i ∈ {SP, HC}, Jeffreys (1961) suggests assessing the odds ratio using the following rule of thumb: if 1 < πSP,T/πHC,T < 3, there is only weak evidence for SP; if 3 < πSP,T/πHC,T < 12, there is weak to moderate evidence for SP; if 12 < πSP,T/πHC,T < 148, there is moderate to strong evidence for SP; finally, if πSP,T/πHC,T > 148, there is decisive evidence for SP.
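The rule of thumb above amounts to bucketing the posterior odds ratio into evidence categories; a minimal sketch (the category labels and boundary handling are illustrative, not from the paper):

```python
def jeffreys_evidence(odds):
    """Map the posterior odds ratio pi_SP/pi_HC to the evidence
    categories of Jeffreys' (1961) rule of thumb, using the
    thresholds stated in the footnote (1, 3, 12, 148)."""
    if odds <= 1.0:
        return "no evidence for SP"
    if odds < 3.0:
        return "weak evidence for SP"
    if odds < 12.0:
        return "weak to moderate evidence for SP"
    if odds < 148.0:
        return "moderate to strong evidence for SP"
    return "decisive evidence for SP"

print(jeffreys_evidence(2.0))    # prints "weak evidence for SP"
print(jeffreys_evidence(200.0))  # prints "decisive evidence for SP"
```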

11

The estimate of the posterior average of the degree of wage rigidity, θw, is equal to 0.31, implying that wages are adjusted, on average, every one to two quarters.

What (Really) Accounts for the Fall in Hours After a Technology Shock?
Author: Mr. Nooman Rebei