Toward a Framework for Assessing Data Quality
Author: Carol S. Carson

Contributor Notes

Author’s E-Mail Address: CCarson@imf.org


Abstract

This paper describes work in progress on data quality, an important element of greater transparency in economic policy and financial stability. Data quality is being dealt with systematically by the IMF through the development of data quality assessment frameworks complementing the IMF’s Special Data Dissemination Standard (SDDS) and General Data Dissemination System (GDDS). The aim is to improve the quality of data provided by countries to the IMF; and to assess evenhandedly the quality of countries’ data in Reports on the Observance of Standards and Codes. The frameworks bring together best practices including those of the United Nations Fundamental Principles of Official Statistics.

I. Introduction

Work toward a framework for assessing the quality of data has been underway in the IMF’s Statistics Department for some time, but the project has been pursued with special intensity over the last year. The work responds to a number of needs, in particular, to complement the quality dimension of the IMF’s Special Data Dissemination Standard (SDDS) and General Data Dissemination System (GDDS), to focus more closely on the quality of the data provided by countries to the IMF that underpin the institution’s surveillance of their economic policies, and to assess evenhandedly the quality of the information provided as background for the IMF’s Reports on the Observance of Standards and Codes.

The purpose of this paper is to describe the IMF’s work in progress on data quality and to stimulate further discussion of the draft quality assessment framework that has been produced. The Statistics Department has sought feedback at all stages of the development of the framework. Thus, the generic framework and the specific frameworks for individual data categories are the product of a process that has been iterative and consultative but is far from finished.

The paper is organized in three sections following this Introduction. Section II discusses the stimuli that prompted the work on data quality and explains the two-pronged approach that was taken to the work. Section III describes the data quality framework that emerged from this approach. Building on the growing literature on data quality, the Statistics Department’s practical experience, and feedback from several rounds of consultations, the section presents a generic framework for assessing data quality that synthesizes elements covering quality of the institution—the superstructure for producing and disseminating statistics—and quality of the individual statistical product. This section also discusses the work in progress on the dataset-specific frameworks, provides some approaches to “lite” assessment tools and summary results, and gives some examples of practical applications of the framework. The final section, Section IV, discusses the work ahead to refine the framework and engage others in the work. Supporting material, provided in the annexes, includes the generic framework, a specific framework for the balance of payments, an explanation of the correspondence between the quality approach embodied in the SDDS and that of the data quality assessment framework, and some examples of summaries of assessments.

II. Background to the Framework2

A. The Stimuli

Statistics have been recognized as playing a key role in the work of the IMF from the organization’s beginning. The provision of data to the Fund by member countries is rooted in its Articles of Agreement, in which, under the heading of General Obligations of Members, the basic principles are set forth. Discussion by the Executive Board of the IMF in 1946 led to systematic collection of data and their monthly dissemination through International Financial Statistics (IFS). From this base, the IMF’s statistical activity has developed over the years in response to the needs of the IMF and its members. Within that general context, there are three main stimuli for the Statistics Department’s recent work on data quality.

The first stimulus centers around the SDDS and the GDDS, established in 1996 and 1997, respectively, to provide guidance to countries on the provision of data to the public. The SDDS identifies best practices in the dissemination of economic and financial data in four areas—the so-called four dimensions: data (coverage, periodicity, and timeliness); public access to the data; integrity of the data; and—last but not least—data quality. Two points about the treatment of data quality in the SDDS may be noted:

  • The first three dimensions deal with several desirable characteristics of data—for example, timeliness and integrity. The quality dimension, then, implicitly refers to other desirable characteristics, such as accuracy, adherence to international statistical guidelines, and consistency.

  • The quality dimension calls for the provision of information that would facilitate data users’ assessment of these characteristics according to their own needs through the use of monitorable proxies for quality. Specifically, the quality dimension calls for dissemination of, first, methodological statements (covering the analytical framework, concepts and definitions, accounting conventions, nature of basic data, and compilation practices) and, second, information that permits cross checks for reasonableness.

The GDDS focuses explicitly, given the wider range of countries for which it is intended, on encouraging countries to improve data quality and helping them evaluate needs for data improvement. It is built around the same four dimensions as the SDDS, but with a difference. The data and quality dimensions are organized around statistical products, and the access and integrity dimensions are organized around the agencies preparing the statistical products. The GDDS thus focuses on improving data on two fronts: directly, through the data product, and indirectly, through strengthening the producing agencies.

After the launch of the SDDS and GDDS, questions about data quality took on an even higher profile, especially in the setting of increased access to data on the Internet that is, indeed, partly attributable to the SDDS. One question was: What assistance can be provided to data users, including those in financial markets, to help them evaluate the quality of the data available to them? More broadly: Is there a way to focus more attention on data quality issues, especially in light of the perceived interests beyond national boundaries? How can national statistical authorities be assisted in assessing the quality of their data, and what incentives can be provided to encourage cost-effective improvements? Several of these points and variants of them were raised at the United Nations Statistical Commission in 1999 and discussed further in 2000. In effect, these points were a challenge to supplement the SDDS and the GDDS to make the link with data quality more active.

The second stimulus had its origin in the Mexican financial crisis of 1994-95. Not only did this crisis focus attention on the need for countries to disseminate data to the public (and lead to the SDDS and GDDS), but it also highlighted the need for countries to provide data to the IMF to support it in meeting its responsibilities for surveillance of members’ economic policies. In a series of discussions beginning in 1995, the IMF’s Executive Board noted that it was imperative for the IMF, as well as for member countries, to improve the quality of data.3 A summary of the Executive Board’s most recent discussion of data provided by IMF members, including encouragement of the staff’s work on a framework for the assessment of data quality, is available on the IMF’s Website.4

More recently, the need for work on data quality has been given further impetus by a number of high-profile cases of misreporting of economic data by countries to the IMF in the context of IMF loan programs. A framework within which to assess data quality was seen as an important, and heretofore missing, tool that might be used to strengthen the data that underpin decisions to disburse IMF loans.

The third stimulus traces to the more recent financial crises in Asia, Russia, and elsewhere. In the wake of these crises, there has been widespread agreement that the adoption of internationally accepted standards, or codes of good practice, can make an important contribution to the efficiency of markets and a strengthening of the international financial system. The IMF is responding to the request by the international community that it prepare, as part of its mandate to conduct surveillance of its member countries’ economic policies, a report “that summarizes the degree to which an economy meets internationally recognized disclosure standards.”5

For data dissemination, the SDDS and the GDDS were identified as the relevant standards for these experimental assessments—Reports on the Observance of Standards and Codes, or ROSCs. Each report comprises two elements: a description of country practices, primarily in the core areas that have a direct impact on the IMF’s work, and an independent commentary by IMF staff on the extent to which these practices are consistent with the standard being assessed. Data dissemination has been included in reports for over a dozen countries thus far.6 The earlier reports focused on the disclosure elements of the international standards—that is, the requirements to make information available to the public. The later reports also consider the quality of the information disclosed, reflecting the experience that the reports that only dealt with the disclosure aspects of the standards were not fully satisfactory. Specifically, it was noted that the reports would be more useful if they dealt with, inter alia, the quality of the information provided.

B. A Two-Pronged Approach

All three stimuli pointed to the need for more work on data quality. Moreover, all three stimuli pointed to the usefulness of undertaking the work in the widest possible consultation with others.7 A two-pronged approach was undertaken, leading to an Internet site and a framework within which to assess data quality.

To start, attention would need to be given to the definition of data quality. It has been pointed out that, years ago, quality in statistics might have been synonymous with accuracy, but today a consensus is emerging that quality is a much wider, multidimensional concept.8 However, no internationally agreed definition of data quality exists.9 To further a common understanding of data quality, the IMF undertook to host a Data Quality Reference Site on the Internet.10

Further, one clear, practical need was for more structure and a common language for assessing data quality. Such an assessment tool could serve to complement the SDDS and GDDS, to guide IMF staff in assessing whether national data are adequate for surveillance and in designing technical assistance, and to guide IMF staff (and others) in assessing and reporting on the observance of standards and codes.

Given these three interrelated purposes, it seemed that an assessment tool to provide more structure and a common language would need to have the following characteristics:

  • Comprehensive in coverage of the dimensions of quality and of elements (indicators) that might represent quality,

  • Balanced between the rigor desired by an expert and the bird’s-eye view desired by a general data user,

  • Structured but flexible enough to be applicable across a broad range of stages of statistical development,

  • Structured but flexible enough to be applicable (at least) to the major macroeconomic datasets,

  • Transparent in its results, and

  • Arrived at by drawing on best practices of national statisticians.

III. The Emerging Framework

A. The Generic Quality Framework

Taking off from these main characteristics, the data quality assessment framework that is emerging reflects the growing literature on the subject, the Statistics Department’s practical experience in dealing with the statistical systems of both developed and developing countries, and the feedback from several rounds of consultations with national compilers of statistics, international organizations, and others, as well as some experimental field-testing by IMF staff.

The framework that is emerging comprises a generic assessment framework and specific assessment frameworks for the main aggregates used for macroeconomic analysis. The generic framework, which brings together the internationally accepted core principles, standards, and practices for official statistics, serves as the umbrella under which the dataset-specific quality assessment frameworks are developed. It is shown in Annex I and reflects feedback received as of end-October 2000.

The framework follows a cascading structure that flows from five main dimensions that have been identified as critical constituents of data quality. For each of these interrelated, and somewhat overlapping, dimensions, the framework identifies pointers, or observable features, that can be used in assessing quality. These pointers to quality are broken down into elements (major identifiers of the quality dimension) and, further, into more detailed and concrete indicators. Below the indicator level, especially in the dimensions dealing with methodological soundness and with accuracy and reliability, the specific frameworks tailor these pointers to the individual datasets.

The five dimensions of quality are as follows:

  • Integrity. This dimension is intended to capture the notion that statistical systems should be based on firm adherence to the principle of objectivity in the collection, compilation, and dissemination of statistics. The dimension encompasses the institutional foundations that are in place to ensure professionalism in statistical policies and practices, transparency, and ethical standards.

  • Methodological soundness. This dimension of quality covers the idea that the methodological basis for the production of statistics should be sound and that this can be attained by following international standards, guidelines, and agreed practices. In application, this dimension will necessarily be dataset-specific, reflecting differing methodologies for different datasets (for example, the 1993 SNA for national accounts and the fifth edition of the Fund’s Balance of Payments Manual for balance of payments).

  • Accuracy and reliability. For most users, accuracy and reliability are among the most sought-after attributes of data. We are all concerned that the data we use sufficiently portray reality at all stages of dissemination—from “flash” to “final” estimates. Thus, this dimension relates to the notion that source data and compilation techniques must be sound if data are to meet users’ needs.

  • Serviceability. Another area of concern for users is whether the data that are produced and disseminated are actually useful. This dimension of quality relates to the need to ensure that data are produced and disseminated in a timely fashion, with an appropriate periodicity, provide relevant information on the subject field, are consistent internally and with other related datasets, and follow a predictable revisions policy.

  • Accessibility. Users want understandable, clearly presented data and need to know how data are put together as well as be able to count on prompt and knowledgeable support from data producers for their questions. Thus, this quality dimension relates to the need to ensure that clear data and metadata are easily available, and that assistance to users of data is adequate.

The framework recognizes that the quality of an individual dataset is intrinsically bound together with that of the institution producing it. In other words, quality encompasses the quality of the institution or system behind the production of the data as well as the quality of the individual data product. In this sense, it is rooted both in the overarching, systemic approach seen in the United Nations Fundamental Principles of Official Statistics and in the more traditional quality-of-the-product approach. The cross-cutting relationship between the quality dimensions in the quality framework and the combined quality-of-the-institution and quality-of-the-product approach can be seen in Box 1, below.

Taking off from this approach, the framework also includes a few elements and indicators that, although not constituting a quality dimension in themselves, have an overarching role as prerequisites, or institutional preconditions, for quality. They appear as a zero category in the first row of the data quality assessment framework in Annex I. These pointers to quality cover issues such as whether a supportive legal and administrative framework is in place, whether resources are commensurate with the needs of statistical programs, and whether quality is recognized as a cornerstone of statistical work by producers of official statistics.
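To make the cascading structure concrete, the following is a minimal sketch, in Python, of how the framework's dimensions, elements, and indicators might be represented as nested records, with the prerequisites appearing as a zero category alongside the five dimensions. The two indicator texts and their codes (0.3.1 and 3.1.1) are quoted from examples given later in this paper; the element labels, the class names, and the representation itself are illustrative assumptions, not part of the framework.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Indicator:
    code: str   # three-digit identifier, e.g., "3.1.1"
    text: str

@dataclass
class Element:
    code: str   # e.g., "3.1"
    name: str
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class Dimension:
    code: str   # "0" for the prerequisites category, "1" through "5" otherwise
    name: str
    elements: List[Element] = field(default_factory=list)

# The five dimensions plus the zero category of prerequisites. Only two
# branches are filled in, using indicator texts quoted in this paper;
# the element labels ("Quality awareness", "Source data") are assumed.
dqaf = [
    Dimension("0", "Prerequisites of quality", [
        Element("0.3", "Quality awareness", [
            Indicator("0.3.1", "Processes are in place to focus on quality, to "
                               "monitor the quality of production and dissemination, "
                               "and to deal with tradeoffs within quality."),
        ]),
    ]),
    Dimension("1", "Integrity"),
    Dimension("2", "Methodological soundness"),
    Dimension("3", "Accuracy and reliability", [
        Element("3.1", "Source data", [
            Indicator("3.1.1", "Source data are collected from comprehensive data "
                               "collection programs that take into account "
                               "country-specific conditions."),
        ]),
    ]),
    Dimension("4", "Serviceability"),
    Dimension("5", "Accessibility"),
]
```

A dataset-specific framework would extend this structure below the indicator level with detail tailored to the individual dataset.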

Against this background, the framework attempts to meet the substantive characteristics laid out in Section II.B above:

  • Comprehensive. The framework encompasses quality-of-the-institution and quality-of-the-product approaches, as discussed above. The framework’s comprehensiveness helps ensure that all relevant elements are assessed. A less comprehensive approach—for instance, one heavily weighted toward quality of the product—would not bring to the surface problems of inter-agency cooperation that are often found in less advanced statistical systems. However, in countries with highly advanced statistical systems, the institutional dimensions of quality may be taken for granted to a large extent, with the focus falling almost entirely on issues related to the quality of the product. For these reasons, the framework is not hierarchical, nor are specific weights assigned to the dimensions or the several elements/indicators, in recognition that different country situations will call for different tradeoffs.

  • Balance between rigor and a bird’s-eye view. The framework is purposefully flexible as a structure for conducting an assessment and presenting the results. Depending on the level of interest and expertise, the framework can be applied in several ways. Some specific examples are provided in Section C, below.

  • Applicable across a range of country situations. As noted above, the comprehensiveness of the framework promotes its applicability to various stages of statistical development. In addition, the framework encourages use of a common language and taxonomy across countries and thus enhances the comparability of assessments.

  • Applicable across a range of datasets. The framework’s cascading approach combines a common structure with dataset-specific detail.

  • Transparent results. The framework provides a systematic and reproducible approach in that the same dimensions, elements, and indicators can be applied across a wide range of situations. The elements and indicators are designed to maximize the use of objective information.

As mentioned at the beginning of this paper, an important stimulus for the work on data quality was the challenge to complement the SDDS and the GDDS by making their link with data quality more active. As can be seen from the “crosswalk” from the SDDS to the data quality assessment framework, which is presented in Annex II, the quality framework encompasses all of the quality indicators embedded in the SDDS and adds to them. In this sense, the data quality framework can be seen as an evolution of the approach to data quality developed for the SDDS in 1996, such that monitorable proxies for quality (the SDDS) have been complemented by observable features of quality (the data quality assessment framework).

B. The Dataset-Specific Frameworks

As the generic framework began to take shape, the Statistics Department also undertook work on several dataset-specific frameworks. The framework for the national accounts was the first of these to reach a stage suitable for discussion outside the IMF.11 This framework was discussed in June 2000 at a workshop in which representatives of national statistical offices and the organizations in the Inter-Secretariat Working Group on National Accounts participated.

Over the summer of 2000, other specific frameworks were developed for the balance of payments, the analytical accounts of the central bank, the producer price index, and government finance statistics. These specific frameworks have also been subjected to an intensive consultative process, with the objective of having a round of comments on all five frameworks by the end of 2000. For example, the framework for the analytical accounts of the central bank was commented on by representatives of the Working Group on Money and Banking Statistics and members of the Statistics Committee of the European Central Bank in September and October 2000, respectively. Extensive comments on the balance of payments framework were provided by the members of the IMF Balance of Payments Statistics Committee, and the framework was discussed during a full-day session of the annual meeting of the Committee in late October 2000. The draft balance of payments framework, as revised after the meeting of the Committee, appears in Annex III (to be provided).

In addition, Statistics Department staff have begun to use the specific frameworks on an experimental basis in field work, particularly for diagnostic missions to countries with which the Department is less familiar, to assist countries in preparing GDDS metadata, and to prepare the quality assessment summaries of ROSCs. The Statistics Department has also sought informal feedback from other IMF staff who are involved in day-to-day operational work with member countries.

The comments that have been received, on both the generic and the dataset-specific frameworks, have been encouraging. In general, those commenting saw the development of the frameworks as a welcome initiative that filled an important gap in the work on data quality. Most commentators saw the frameworks as a careful, thoughtful approach to the issue of assessing data quality that provided the basis for a coherent and practical way forward in a field that is conceptually and practically complex. They welcomed the frameworks’ close mapping to existing statistical standards and manuals, and encouraged the Statistics Department to expand the range of datasets covered. Commentators, including those whose organizations provide technical assistance in statistics, encouraged further field tests to gain practical experience.

Commentators had a number of other suggestions, which can be summarized as follows.

  • Clarify how the framework would be used—in what circumstances could the framework be used, who could do the assessment, who was the intended audience, and would publication of the results be expected? Some commentators wondered whether the frameworks would be manageable for small countries. Resource costs of completing the assessments should be taken into account and weighed against potential benefits.

  • Consider a diagnostic tool to indicate whether an assessment using the full framework is needed. Show how the careful, systematic full framework can yield summaries at a level of interest to nonstatisticians.

  • Clarify that the ordering of the quality dimensions and the pointers within them does not presuppose a prioritization of their importance.

  • Ensure that the assessment frameworks give room for flexibility to take into account individual countries’ circumstances. A prescriptive, one-size-fits-all approach was discouraged.

C. Moving Forward: Responding to and Seeking Further Comment

These comments are being taken into account in preparing the revised versions of both the generic and the specific frameworks and in guiding future work. To move the discussion forward, this section takes up two interrelated comments: one about summaries of assessments and “lite” versions of the framework, and one about possible applications of the frameworks.

“Lite” versions of the framework and summaries

The dataset-specific frameworks are seen, as noted above, as a careful, thoughtful approach to assessing data quality and as providing a coherent and practical way forward in a complex undertaking. However, it is recognized that, in their full detail, they are, variously, daunting, resource intensive, and a tool designed by statisticians mainly for statisticians.

While the usefulness of the full framework is recognized, in view of the time and expertise that completing an assessment would require, questions were raised about whether a “lite” version might be identified within the full framework. There seem to be several possible variants of such a tool.

(a) Adjunct to GDDS metadata: GDDS metadata present information on integrity and access organized by institution, and on data and quality (in the sense of the quality dimensions of methodological soundness and of accuracy and reliability) organized by data product. With respect to the last, the data quality assessment framework (DQAF) goes beyond the GDDS’s call for dissemination of relevant information to provide a structure for the assessment of methodological soundness and accuracy/reliability. Accordingly, it is here that the effort to use the data quality assessment framework might be viewed as having the highest value added. Thus, one “lite” variant, to be used in conjunction with GDDS metadata, could be to implement the full cascading structure for the dimensions of methodological soundness and of accuracy and reliability.

(b) Nonstatistician’s diagnostic preview: An interested user of statistics might be expected to have access to data products (bulletins, yearbooks, etc.), at least some documentation, and basic information about the agency or unit that produces the data. One could imagine that such a person might wish to undertake a diagnostic preview assessment to determine whether a more detailed assessment was needed to explore the quality of the data for his/her particular use. A reduced set of three-digit indicators that such a person might be able to use is shown in Table 1.

Table 1. A Nonstatistician’s Diagnostic Preview of the Generic Data Quality Assessment Framework (Draft as of end-October 2000)


In addition to being amenable to assessment on the basis of the reasonably accessible kind of information just mentioned, the indicators in Table 1 were selected from among those in the generic framework for their ability to serve as proxies for other indicators. For example, Table 1 lists “Source data are collected from comprehensive data collection programs that take into account country-specific conditions” (3.1.1). If a country has been in a position to put in place a comprehensive data collection program, it might be expected that the source data reasonably approximate the definitions, scope, etc., called for and are timely. Thus, by assessing one indicator, one can predict something about two others (3.1.2 and 3.1.3). Similarly, an indicator within the serviceability dimension—“Statistics are released on a pre-announced schedule” (5.1.3)—is listed because it serves as a bellwether for other indicators related to transparency, including those in the integrity dimension. Also, the existence of two processes—one to focus on quality, monitor the quality of production and dissemination, and deal with tradeoffs within quality (0.3.1), and one to monitor the relevance and practical utility of statistics (4.1.1)—is listed as key to the statistical agency’s or unit’s own attention to quality.

(c) Statistician’s diagnostic preview: A statistician might have more information than assumed in “lite” variant (b). To take advantage of this information, one or two more detailed and concrete pointers might be specified for the elements identified in variant (b) within the dimensions of methodological soundness and of accuracy and reliability. For example, within methodological soundness, the application of the residency criterion, a feature of the element dealing with classification/sectorization systems (2.3.1), could be identified as key for several datasets.
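As one way to picture how variant (b) relates to the full framework, the following minimal sketch treats the preview as a selection of indicator codes, each standing as a proxy for related indicators. The codes and the single proxy relationship shown follow the discussion of Table 1; the names PREVIEW_CODES, PROXIES, preview, and covered_by are hypothetical.

```python
from typing import Dict, List, Set

# Preview indicators, drawn from the examples discussed around Table 1.
PREVIEW_CODES: Set[str] = {"0.3.1", "3.1.1", "4.1.1", "5.1.3"}

# Proxy relationships: assessing the key indicator is taken to predict
# something about the listed indicators (here, 3.1.1 for 3.1.2 and 3.1.3).
PROXIES: Dict[str, List[str]] = {"3.1.1": ["3.1.2", "3.1.3"]}

def preview(full_framework: Dict[str, str]) -> Dict[str, str]:
    """Keep only the preview indicators from a {code: text} mapping."""
    return {code: text for code, text in full_framework.items()
            if code in PREVIEW_CODES}

def covered_by(code: str) -> List[str]:
    """Indicators about which the given preview indicator is informative."""
    return [code] + PROXIES.get(code, [])
```

For example, covered_by("3.1.1") returns ["3.1.1", "3.1.2", "3.1.3"], reflecting the point made above that assessing one indicator can predict something about two others.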

These three variants of a “lite” framework are presented as a springboard for discussion. For example, which, if any, of the three is robust and true enough to the motivations of the frameworks? Are the criteria for designing variant (b) appropriate, or should, perhaps, more emphasis be placed on integrity and less on methodological soundness, in line with nonstatisticians’ interests and ability to assess? What adjustments might be made to variants (b) and (c)—for example, to make them even more “lite” or better diagnostic previews?

Also, it was noted in the comments that nontechnicians such as policy advisors and readers of ROSCs would not be interested in the results at the level of the full detail. Another question that arose was how well a completed dataset-specific framework could be summarized to the level that might be of interest to these audiences. A sample of summaries, based on field tests of an early version of the draft framework but rearranged to align with the revised framework shown in this paper, is presented in Annex IV. The summaries are structured to comment on each of the five quality dimensions as well as the prerequisites of quality.
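To suggest how such a six-part summary might be laid out, the following minimal sketch generates a skeleton with one section for the prerequisites and one for each of the five quality dimensions. The headings follow the framework; the function, its name, and the placeholder text are illustrative assumptions.

```python
# Section headings, following the prerequisites plus five dimensions.
SUMMARY_SECTIONS = (
    "Prerequisites of quality",
    "Integrity",
    "Methodological soundness",
    "Accuracy and reliability",
    "Serviceability",
    "Accessibility",
)

def summary_skeleton(dataset: str) -> str:
    """Return a six-part summary template for the given dataset."""
    lines = [f"Summary assessment: {dataset}", ""]
    lines += [f"{section}: [assessor's comments]" for section in SUMMARY_SECTIONS]
    return "\n".join(lines)

print(summary_skeleton("Balance of payments"))
```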

Comments are being sought: are summaries such as these concrete enough? Are they of interest to nontechnicians? To what extent does explicit structure help? Hinder?

Applications of the frameworks

By far the most important area for clarification that emerged from the consultations to date relates to the possible uses and users of the frameworks. We could envision three main categories of users—national producers of official statistics, international organizations, and other data users, including those in the private sector. Some examples may help to clarify the several ways that the frameworks might be used. All these uses are built on the assumption that, after further consultation and testing, the data quality assessment frameworks are made widely available, for example, on the IMF’s Website.

  • National Statistical Office. One could envision an NSO undertaking an internal assessment using the frameworks. This assessment might be the basis for its own internal planning. Going further, if the NSO wanted to make the case with the country’s legislative body (or other allocator of resources) that it needed additional resources for, say, national accounts, it would point to the framework as an internationally accepted tool to identify needed improvements. The NSO might then well wish to make both the full assessment and a summary available to the public.

  • IMF. Within the IMF, the framework could be seen as an important tool to be used both by specialists from the Statistics Department and by general economists working on country operations. The general economist might use a diagnostic preview, such as that described in the preceding section, for a particular data category in which problems were suspected. Functioning much like the information on a person’s temperature, blood pressure, and pulse included in an annual physical, the tool might point to deeper issues that could then be referred to a statistical specialist. Within the Statistics Department, we have already begun to use the frameworks on an experimental basis in preparing ROSCs and in working with countries that wished to participate in the GDDS to prepare metadata, including their plans for improvement. The frameworks have been especially useful because they permit an even-handed approach to assessing quality across the very diverse range of countries that make up the IMF’s membership.

  • Financial market participants and others. Financial market analysts and others—researchers, for example—may find summaries useful as a reference tool. To take one example, a financial market analyst might supplement the information provided in the data module of a ROSC with his/her own conclusions drawn from the summary for a specific dataset.

These examples are intended to illustrate the flexibility inherent in the application of the quality frameworks. Other examples are presented in Box 2, below, which identifies, for a selection of potential users of the framework, possible assessors, the tool (full assessment or preview), uses, and the format (summary or full assessment) and terms of availability of the assessment. Although the overall approach is meant to be systematic and designed to maximize the use of objective information, the results, to a certain degree, will remain subjective. This subjectivity goes hand in hand with the framework’s flexibility and reflects the diversity of its potential users and uses.

In providing feedback on the usefulness of the IMF’s Reports on the Observance of Standards and Codes, some users, particularly those in the financial markets, have called for an assessment system that would permit a country ranking or a scoring system for data quality. However, the data quality assessment framework does not lend itself to such an approach. The element of subjectivity inherent in the frameworks, the detail embedded in the dataset specific frameworks, and the great diversity of country circumstances largely preclude using them to make meaningful country rankings.

Box 2. Some Applications of the Framework


National agencies mandated to conduct audits of government operations could also use the frameworks.

Other international or multilateral organizations could also use the frameworks—for example, as a basis for assessments of regional aggregates or in the conduct of technical assistance.

In working through the frameworks, users should be clear that no country is likely to meet all of the best practice criteria for data quality that they embody. Moreover, countries should not be penalized if parts of the frameworks (at the indicator level or below) are not applicable, and thus no response can be given. Indeed, it is expected that the frameworks would be applied flexibly with the objective of pointing to relevant areas that may need attention so that an action plan, and the resources to carry it out, could be identified.

IV. The Work Ahead

In the coming months, the Statistics Department will continue working to refine the data quality assessment frameworks in the light of experience gained in the field and feedback from those outside the Department. Work is underway on a glossary to accompany the generic framework. One important part of the work will be to define what kind of supporting notes should accompany the frameworks, particularly the dataset-specific frameworks, and to develop those notes.

So far, five dataset-specific frameworks have been produced, and we intend to begin work on a few additional major data categories, such as the monetary accounts, the consumer price index, and merchandise trade. We would welcome work in collaboration with other agencies on these macroeconomic datasets. A promising avenue may be collaboration with another organization on a quality framework for one or more sets of socio-demographic data—a category of the GDDS.

As noted throughout this paper, the frameworks are still very much a work in progress and we must answer a number of questions as we go along. Thus far, most of the feedback we have received has been from compilers of official statistics. We need to look into ways to elicit commentary on and testing of the frameworks by other groups, in particular, nonstatisticians.