What do we know and what do we still need to know? The IMF’s seminar on Statistical Capacity Building Indicators asked the managers of data-producing agencies from some twenty developing countries to review progress to date and help shape the course of the exercise. Opening the two-day exchange, Carol Carson, head of the IMF’s Statistics Department, stressed “the time is ripe to look seriously at the question of statistical capacity, statistical capacity building, and statistical capacity building indicators.”
Recent global summits have pointed to the crucial role that effective institutions can play in the development process, and converging trends have increasingly emphasized the need for good statistics. Carson cited, among other trends, the new “evidence-based” approach to poverty reduction and the greater transparency required by an international financial structure that places a premium on timely and accurate information sharing. The case is now being made for greater resources for improved statistics and revitalized attention to statistical capacity. But these developments have also prompted a realization that more needs to be known about what statistical capacity is, how needs can be determined, and how progress can be measured.
Why statistical indicators matter
In both international and national statistical communities, the issue of indicators has become more topical. Why? First, there are ever more pressing calls for accountability for technical assistance. Donors want measurable results, and national authorities want to know whether the results warrant the use of their own resources. And everyone wants to know what lessons have been learned.
Second, with globalization, statistics have assumed an increasingly international dimension, and national statistics are taking on the features of an international public good. Third, national authorities will soon reach a critical point at which they must begin reducing their reliance on external aid and increasingly sustain their statistical capacity with domestic resources. It is imperative that countries prepare for this transition.
In response to a growing interest in and need for a “culture” in which policymaking and monitoring rely on hard data, a global consortium of policymakers, statisticians, and other users of statistical data formed Paris21 (PARtnership in statistics for development in the 21st century) in November 1999. Under the aegis of Paris21, the Task Team on Statistical Capacity Building Indicators was established in May 2001. Chaired by the IMF and with representatives from the World Bank, the UN Statistical Division, the UN Economic Commission for Latin America and the Caribbean, the UN Economic Commission for Europe, and Afristat (an organization in Africa involved in the statistical development of 17 member states), this team is the first systematic attempt to develop indicators of statistical capacity building that can be applied internationally.
Developing these indicators is neither a quick nor a simple task. A long and cumbersome statistical process necessarily precedes data dissemination. The phenomena that need to be measured must be demarcated; the target population has to be identified; samples and questionnaires need to be designed; and data must be collected, evaluated, and edited. Adding to the complexity are the large number of agencies involved and the widely varying scope and quality of their data products.
Given this complexity, the team devised a strategy that featured both systematic and consultative approaches. The first step entailed adopting a frame of reference that could capture the full statistical system in all of its complexity. The IMF’s six-part Data Quality Assessment Framework was selected. It provides for a set of institutional prerequisites and five essential dimensions: integrity, methodological soundness, accuracy and reliability, serviceability, and accessibility.
The second step was to describe the full gamut of statistical operations according to this structure, and the third to derive indicators from this frame of reference. The fourth step, undertaken concurrently with the third, entails extensive consultation with both donors and country participants on the scope and nature of these indicators. The IMF-sponsored seminar is part of this consultative process.
At the end of this process, there will be a set of indicators—typically one or two pages long—that provides a snapshot of a country’s statistical capacity. It is anticipated that the indicators will include two broad types:
Quantitative indicators will provide for an overall description of selected aspects of the statistical system, including the numbers of staff, surveys, and publications in broad areas such as economic data, population, education, poverty, and health.
Qualitative indicators will focus on a few representative data series in each broad area—such as GDP for the economic area—measuring, for instance, the extent to which international methodological guidelines are followed in producing these data.
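To make the two indicator types concrete, the sketch below shows one way such a one-page snapshot might be represented: quantitative counts alongside qualitative ratings organized by the Data Quality Assessment Framework's institutional prerequisites and five dimensions. The record layout, rating scale, and all figures are illustrative assumptions, not the task team's actual indicator set.

```python
# Hypothetical structure for a country's statistical capacity snapshot.
# The dimension names come from the IMF's Data Quality Assessment Framework,
# as described in the article; everything else is an illustrative assumption.

DQAF_DIMENSIONS = [
    "prerequisites",              # institutional prerequisites
    "integrity",
    "methodological soundness",
    "accuracy and reliability",
    "serviceability",
    "accessibility",
]

def capacity_snapshot(quantitative, qualitative):
    """Combine quantitative counts with qualitative ratings by dimension.

    quantitative: overall counts, e.g. {"staff": 120, "surveys": 8}.
    qualitative: dimension -> rating on an assumed 0-3 scale
    (3 = international guidelines fully observed); missing
    dimensions are recorded as None (not yet assessed).
    """
    unknown = set(qualitative) - set(DQAF_DIMENSIONS)
    if unknown:
        raise ValueError(f"unrecognized dimensions: {sorted(unknown)}")
    return {
        "quantitative": dict(quantitative),
        "qualitative": {d: qualitative.get(d) for d in DQAF_DIMENSIONS},
    }

# Illustrative use with made-up figures:
snapshot = capacity_snapshot(
    {"staff": 120, "surveys": 8, "publications": 15},
    {"integrity": 3, "serviceability": 2, "accessibility": 2},
)
```

The point of the structure is only that the two indicator types live side by side in one compact record, so that a reader can scan both the scale of the system and how closely its products follow international guidelines.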
What did seminar participants have to say about the framework and the preliminary work on indicators? According to the country participants, the seminar proved particularly useful in familiarizing them with the framework from which the indicators are derived. Equally important, there were significant areas of agreement, and the seminar provided country participants with an opportunity to give direction on how the indicators should be presented and pose issues that the task team will have to consider further.
Participants endorsed the systematic structure within which indicators would be presented and stressed the need to limit the number of indicators and ensure that they are simple to interpret and apply. Participants also agreed that although these indicators are being developed for managers of data-producing agencies to help them delineate their needs, this should not preclude their use by third parties, if and when required.
Many of the issues still to be clarified centered on how to develop benchmarks for the indicators. For instance, the indicator for the accuracy and reliability of data could be that adequate source data are exploited. For this indicator’s benchmark, it will also be important to define degrees of adequacy of source data. This may mean determining whether, at one extreme, the source data are well managed (that is, for instance, that survey coverage frames are kept up to date) or poorly managed (for example, the frame is out of date or totally nonexistent). For timeliness and periodicity, a participant suggested that the benchmark rating could be based on the IMF’s General Data Dissemination System, which “is very useful for measuring statistical capacity building.”
Once the findings of the seminar are reviewed, an amended version of the indicators will be devised and tested over the next month or so in a small number of countries and amended further as experience warrants. The immediate goal will be to submit a first draft of the indicators to the Paris21 Steering Committee in mid-June and to have a final version available for its October annual meeting.
Countries would then, over the short term, have a bird’s-eye view of their statistical capacity. Over the longer term, it is hoped that the development of effective statistical systems whose products are relevant to national needs will spur the country authorities to increasingly provide the resources needed to sustain these systems.