“In any case, that is what economists do. We are storytellers, operating much of the time in worlds of make believe.”
Robert E. Lucas, Jr.1
A Performance of the null model and WEO forecasts over time
B The LASSO estimator
Formally, we solve

\[
\hat{\beta}^{\text{lasso}} = \underset{\beta}{\arg\min} \; \sum_{i=1}^{N} \Big( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \Big)^2 \quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \le c,
\]

where c is a positive constant parameter.20 If c is large, the constraint is slack and the LASSO solution coincides with the ordinary least squares estimates; as c shrinks, more and more coefficients are set exactly to zero, so the LASSO performs variable selection as well as shrinkage. Equivalently, the problem can be written in Lagrangian form, minimizing the sum of squared residuals plus a penalty \(\lambda \sum_{j=1}^{p} |\beta_j|\) on the absolute size of the coefficients.
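To make the mechanics concrete, the snippet below fits the penalized (Lagrangian) form of the LASSO on synthetic data and shows how the number of non-zero coefficients falls as the penalty tightens. It is a minimal sketch using scikit-learn; the data, dimensions, and penalty values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the penalized form of the LASSO (illustrative data,
# not the paper's WEO dataset).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))           # 200 observations, 50 candidate predictors
beta_true = np.zeros(50)
beta_true[:5] = [3.0, -2.0, 1.5, 1.0, -0.5]  # only 5 predictors actually matter
y = X @ beta_true + rng.standard_normal(200)

# A larger alpha (the Lagrange multiplier lambda) corresponds to a tighter
# constraint c: more coefficients are shrunk exactly to zero.
for alpha in [0.01, 0.1, 1.0]:
    model = Lasso(alpha=alpha).fit(X, y)
    n_nonzero = np.sum(model.coef_ != 0)
    print(f"lambda={alpha:5.2f}: {n_nonzero} non-zero coefficients")
```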
I choose the parameter c (or, equivalently, the Lagrange multiplier λ) to obtain the best cross-validated fit. Figure 13 plots the cross-validated fit for different values of the Lagrange multiplier λ; the numbers above the chart indicate the number of variables with non-zero coefficients. For low values of λ, any increase in λ yields a marginal improvement in MSE, because the model is overfitting the data. For high values of λ, the model is underfitting, and tightening the constraint further worsens the out-of-sample fit. Figure 13 is thus another illustration of the bias-variance trade-off. The left vertical line marks the parameter with the lowest average cross-validated MSE. In practice, however, it is common to impose a more conservative constraint and choose the λ at which the average cross-validated MSE equals the minimum average MSE plus one standard deviation (the second vertical line). Given that the WEO forecast has an MSE of 330, the LASSO is substantially superior in performance.
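As a complement to Figure 13, the sketch below reproduces the selection procedure just described: compute the average cross-validated MSE along the λ path, locate the minimum, and then back off to the more conservative one-standard-deviation choice. It is a sketch under stated assumptions, using scikit-learn's LassoCV on the same synthetic data as above rather than the paper's actual forecast data.

```python
# Sketch of lambda selection: minimum cross-validated MSE, then the more
# conservative "one standard error" choice (illustrative data only).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:5] = [3.0, -2.0, 1.5, 1.0, -0.5]
y = X @ beta_true + rng.standard_normal(200)

cv = LassoCV(cv=10).fit(X, y)
mean_mse = cv.mse_path_.mean(axis=1)  # average CV MSE for each lambda
se_mse = cv.mse_path_.std(axis=1) / np.sqrt(cv.mse_path_.shape[1])

i_min = mean_mse.argmin()             # lambda with the lowest average MSE
# Conservative rule: the largest lambda whose average MSE is within one
# standard error of the minimum. With automatic grids, alphas_ is sorted
# in decreasing order, so the first qualifying index is the largest lambda.
threshold = mean_mse[i_min] + se_mse[i_min]
i_1se = np.where(mean_mse <= threshold)[0][0]

print(f"lambda_min = {cv.alphas_[i_min]:.4f}, lambda_1se = {cv.alphas_[i_1se]:.4f}")
```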