Mr. Paul A Austin, Mr. Marco Marini, Alberto Sanchez, Chima Simpson-Bell, and James Tebrake
As the pandemic heightened policymakers’ demand for more frequent and timely indicators of economic activity, traditional data collection and compilation methods for producing official indicators fell short, triggering stronger interest in real-time data that can provide early signals of turning points in economic activity. In this paper, we examine how data extracted from the Google Places API and Google Trends can be used to develop high-frequency indicators aligned with the statistical concepts, classifications, and definitions used in producing official measures. The approach is illustrated with indicators derived from Google data that predicted well the GDP trajectories of selected countries during the early stage of COVID-19. To this end, we developed a methodological toolkit for national compilers interested in using Google data to enhance the timeliness and frequency of economic indicators.
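The paper does not reproduce its code, but the core step of combining several Google Trends search-interest series into a single high-frequency activity indicator can be sketched as below. All category names and data are illustrative, not from the paper; a minimal approach is to standardize each series and average the z-scores.

```python
import numpy as np
import pandas as pd

def composite_indicator(series: pd.DataFrame) -> pd.Series:
    """Standardize each search-interest series (z-score) and
    average across series into one composite index."""
    z = (series - series.mean()) / series.std(ddof=0)
    return z.mean(axis=1)

# Illustrative data: weekly search interest for three hypothetical categories.
rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-05", periods=12, freq="W")
data = pd.DataFrame({
    "restaurants": rng.uniform(20, 80, 12),
    "hotels": rng.uniform(10, 60, 12),
    "transit": rng.uniform(30, 90, 12),
}, index=idx)

indicator = composite_indicator(data)
```

In practice, compilers would map search categories to the activities covered by official classifications and benchmark the composite against published GDP or turnover statistics before using it as an early signal.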
It has been two years since trade tensions erupted, capturing the attention not only of policymakers but also of the research community. Research has quickly zoomed in on understanding trade war rhetoric, tariff implementation, and economic impacts. The first article in the December 2019 issue sheds light on the consequences of the recent trade barriers.
Mr. Serkan Arslanalp, Mr. Marco Marini, and Ms. Patrizia Tumbarello
Vessel traffic data based on the Automatic Identification System (AIS) are a big data source for nowcasting trade activity in real time. Using Malta as a benchmark, we develop indicators of trade and maritime activity based on AIS-derived port calls. We test the quality of these indicators by comparing them with official trade and maritime statistics. We show that, once the challenges associated with port call data are addressed through appropriate filtering techniques, these emerging “big data” on vessel traffic could allow statistical agencies to complement existing data sources on trade and introduce new, more timely (real-time) statistics, offering an innovative way to measure trade activity. That, in turn, could facilitate faster detection of turning points in economic activity. The approach could be extended to create a real-time worldwide indicator of global trade activity.
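The filtering step the abstract alludes to can be illustrated with a toy port-call detector: a port call is counted when a vessel's AIS pings show a contiguous low-speed episode inside the port area lasting beyond a minimum duration. The thresholds, column names, and sample data below are illustrative assumptions, not the paper's specification.

```python
import pandas as pd

def count_port_calls(pings: pd.DataFrame, min_hours: float = 5.0,
                     max_speed: float = 0.5) -> int:
    """Count distinct port calls: contiguous runs of low-speed pings
    inside the port polygon lasting at least `min_hours`."""
    pings = pings.sort_values("timestamp")
    stopped = pings["in_port"] & (pings["speed_knots"] <= max_speed)
    # Label contiguous runs of identical stopped/moving status.
    run_id = (stopped != stopped.shift()).cumsum()
    calls = 0
    for _, run in pings[stopped].groupby(run_id[stopped]):
        duration = run["timestamp"].iloc[-1] - run["timestamp"].iloc[0]
        if duration >= pd.Timedelta(hours=min_hours):
            calls += 1
    return calls

# One vessel: a six-hour stop (a genuine call) and a brief half-hour stop.
pings = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2020-03-01 00:00", "2020-03-01 03:00", "2020-03-01 06:00",
        "2020-03-01 08:00",
        "2020-03-01 09:00", "2020-03-01 09:30",
    ]),
    "speed_knots": [0.1, 0.0, 0.2, 12.0, 0.1, 0.3],
    "in_port": [True, True, True, False, True, True],
})
n_calls = count_port_calls(pings)
```

The duration threshold screens out brief anchorage or maneuvering episodes, one of the noise sources that filtering must handle before port calls can be aggregated into weekly trade indicators.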
Sandile Hlatshwayo, Anne Oeking, Mr. Manuk Ghazanchyan, David Corvino, Ananya Shukla, and Mr. Lamin Y Leigh
Corruption is macro-relevant for many countries, but is often hidden, making measurement of it—and its effects—inherently difficult. Existing indicators suffer from several weaknesses, including a lack of time variation due to the sticky nature of perception-based measures, reliance on a limited pool of experts, and an inability to distinguish between corruption and institutional capacity gaps. This paper attempts to address these limitations by leveraging news media coverage of corruption. We contribute to the literature by constructing the first big data, cross-country news flow indices of corruption (NIC) and anti-corruption (anti-NIC) by running country-specific search algorithms over more than 665 million international news articles. These indices correlate well with existing measures of corruption but offer additional richness in their time-series variation. Drawing on theory from the corporate finance and behavioral economics literature, we also test to what extent news about corruption and anti-corruption efforts affects economic agents’ assessments of corruption and, in turn, economic outcomes. We find that NIC shocks appear to negatively impact both financial (e.g., stock market returns and yield spreads) and real variables (e.g., growth), albeit with some country heterogeneity. On average, NIC shocks lower real per capita GDP growth by 3 percentage points over a two-year period, illustrating persistence in the effect of such shocks. Conversely, there is suggestive evidence that anti-NIC efforts appear to have a sustained positive macro impact only when paired with meaningful institutional strengthening, proxied by capacity development efforts.
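The paper's country-specific search algorithms over 665 million articles are not published; a toy version of a news-flow index conveys the idea. Here the index is the share of a country's articles in a period that mention a corruption term; the term list, sample data, and normalization by article counts are illustrative assumptions.

```python
from collections import Counter

# Illustrative term list only; the paper's search algorithms are far richer.
CORRUPTION_TERMS = {"bribery", "embezzlement", "graft", "kickback"}

def nic_index(articles):
    """articles: iterable of (country, period, text) tuples.
    Returns {(country, period): share of articles mentioning a corruption term}."""
    hits, totals = Counter(), Counter()
    for country, period, text in articles:
        key = (country, period)
        totals[key] += 1
        if set(text.lower().split()) & CORRUPTION_TERMS:
            hits[key] += 1
    return {key: hits[key] / totals[key] for key in totals}

sample = [
    ("X", "2020-01", "minister accused of bribery"),
    ("X", "2020-01", "economy grows strongly"),
    ("Y", "2020-01", "parliament passes budget"),
]
index = nic_index(sample)
```

Normalizing by each country-period's article count keeps the index comparable across countries with very different media coverage, which is one way such a measure can gain the time-series variation that perception-based indicators lack.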
IMF Research Perspectives (formerly published as the IMF Research Bulletin) is a new, redesigned online newsletter covering updates on IMF research. In the inaugural issue of the newsletter, Hites Ahir interviews Valerie Cerra about the economic environment 10 years after the global financial crisis. Research Summaries cover the rise of populism; economic reform; labor and technology; big data; and the relationship between happiness and productivity. Sweta C. Saxena was the guest editor for this inaugural issue.
International Monetary Fund. Strategy, Policy, & Review Department
The first data and statistics strategy for the Fund comes at a critical time. A fast-changing data landscape, new data needs for evolving surveillance priorities, and persistent data weaknesses across the membership pose challenges and opportunities for the Fund and its members. The challenges emerging from the digital revolution include an unprecedented amount of new data and measurement questions on growth, productivity, inflation, and welfare. Newly available granular and high-frequency (big) data offer the potential for more timely detection of vulnerabilities. In the wake of the crisis, Fund surveillance requires greater cross-country data comparability; staff and authorities face the complexity of integrating new data sources and closing data gaps, while working to address the weaknesses noted by the 2016 IEO Report (Behind the Scenes with Data at the IMF).
The overarching strategy is to move toward an ecosystem of data and statistics that enables the Fund and its members to better meet the evolving data needs in a digital world. It integrates Fund-wide work streams on data provision to the Fund for surveillance purposes, international statistical standards, capacity development, and data management under a common institutional objective. It seeks seamless access and sharing of data within the Fund, enabling cloud-based data dissemination to support data provision by member countries (e.g., the “global data commons”), closing data gaps with new sources including big data, and improving assessments of data adequacy for surveillance to help better prioritize capacity development. The Fund will also work with policymakers to understand the implications of the digital economy and digital data for macroeconomic statistics, including new measures of welfare beyond GDP.
This paper presents a comparative analysis of the macroeconomic adjustment in Chile, Colombia, and Peru to commodity terms-of-trade shocks. The study proceeds in two steps: (i) an analysis of the impulse responses of key macroeconomic variables to terms-of-trade shocks and (ii) an event study of the adjustment to the recent decline in commodity prices. The experiences of these countries highlight the importance of flexible exchange rates in helping with the adjustment to lower commodity prices, and of staying vigilant in addressing the pass-through of depreciation to inflation by tightening monetary policy. On the fiscal front, the evidence shows that greater fiscal space, as in Chile and Peru, gives more room for accommodating terms-of-trade shocks.
Cornelia Hammer, Ms. Diane C Kostroch, and Mr. Gabriel Quiros-Romero
Big data are part of a paradigm shift that is significantly transforming statistical agencies, processes, and data analysis. While administrative and satellite data are already well established, the statistical community is now experimenting with structured and unstructured human-sourced, process-mediated, and machine-generated big data. The proposed SDN sets out a typology of big data for statistics and highlights that opportunities to exploit big data for official statistics will vary across countries and statistical domains. To illustrate this variation, examples from a diverse set of countries are presented. To provide a balanced assessment of big data, the proposed SDN also discusses the key challenges that come with proprietary data from the private sector with regard to accessibility, representativeness, and sustainability. It concludes by discussing the implications for the statistical community going forward.