Allen, R., S. Schiavo-Campo, and T.C. Garrity, 2002, “Assessing and Reforming Public Management: A New Approach,” (Washington: World Bank).
Boyne, G. and J. Law, 1991, “Accountability and Local Authority Annual Reports: The Case of the Welsh District Councils,” Financial Accountability and Management, Vol. 7, No. 4, pp. 179–194.
Diamond, J., 1990, “Measuring Efficiency in Government: Techniques and Experience,” in Government Financial Management: Issues and Country Studies, ed. by A. Premchand (Washington: International Monetary Fund).
Diamond, J., 2003, “From Program to Performance Budgeting: The Challenge for Emerging Market Economies,” IMF Working Paper 03/169 (Washington: International Monetary Fund).
Diamond, J. and P. Khemani, 2004, “Introducing Financial Management Information Systems in Developing Countries,” IMF Working Paper (forthcoming) (Washington: International Monetary Fund).
Gearhart, J., 1999, “Activity Based Management and Performance Measurement Systems,” Government Finance Review, (February), pp. 13–16.
Henderson, S., 2004, “The Challenges of Measuring Performance,” paper presented at the OECD Senior Budget Officials Meeting, “Performance and Information in the Budget Process,” Paris, April.
Hill, A., 2004, “The Use of Performance Targets: UK Experience,” paper presented at the OECD Senior Budget Officials Meeting, “Performance and Information in the Budget Process,” Paris, April.
Hyndman, N.S. and R. Anderson, 1995, “The Use of Performance Information in External Reporting: An Empirical Study of U.K. Executive Agencies,” Financial Accountability and Management, Vol. 11, No. 1 (February), pp. 1–17.
Joyce, P.C., 1993, “Using Performance Measures for Federal Budgeting: Proposals and Prospects,” Public Budgeting and Finance, Vol. 13, pp. 1–15.
Kristensen, O.K., W. Groszyk, and B. Buhler, 2001, “Outcome-focused Management and Budgeting,” OECD Journal on Budgeting, Vol. 1, No. 4.
Maholland, L. and P. Muetz, 2002, “A Balanced Scorecard Approach to Performance Measurement,” Government Finance Review, (April), pp. 12–15.
OECD, 1994, “Performance Management in Government: Performance Measurement and Results-Oriented Management,” Public Management Occasional Papers, No. 3 (Paris: OECD).
Osborne, S.P., T. Bovaird, S. Martin, M. Tricker, and P. Waterston, 1995, “Performance Management and Accountability in Complex Public Programs,” Financial Accountability and Management, Vol. 11, No. 1 (February), pp. 19–37.
Pendlebury, M.R., R. Jones, and Y. Karbhari, 1994, “Developments in the Accountability and Financial Reporting Practices of Executive Agencies,” Financial Accountability and Management, Vol. 10, No. 1, pp. 33–46.
Perrin, B., 1998, “Effective Use and Misuse of Performance Measurement,” American Journal of Evaluation, Vol. 19, No. 3, pp. 367–379.
Wholey, J.S., 1999, “Quality Control: Assessing the Accuracy and Usefulness of Performance Measurement Systems,” ch. 13 in Hatry, pp. 217–239.
Willoughby, K. and J.E. Melkers, “Assessing the Impact of Performance Budgeting: A Survey of American States,” Government Finance Review, Vol. 17, pp. 25–30.
A first draft of this paper was prepared for the CARTAC Workshop on Program and Performance Budgeting, June 15–18, 2004, Barbados. It has subsequently been revised with the assistance of Mr. James Brumby and with the comments of other colleagues in the Fiscal Affairs Department of the IMF. The helpful comments of Mr. Justine Rodriguez, of the U.S. Office of Management and Budget, are gratefully acknowledged. The usual disclaimers apply.
Many countries have budgets with program structures, but moving from this to performance budgeting focused on outputs and outcomes and linking them with inputs is not easy, as discussed more fully in Diamond, 2003.
Osborne and Gaebler, 1992, “Organizations that measure the results of their work…find that the information transforms them,” p. 63; see also Kristensen, Groszyk, and Buhler, 2001.
Not surprisingly, perhaps, high-performance organizations have often been found to have clear goals and a strong focus.
The use of a simple production model to characterize the operation of a government unit is usually regarded as the foundation for assessing performance, and has a long tradition, e.g., Brace et al., 1980; Jackson and Palmer, 1989; Boyne and Law, 1991; Hyndman and Anderson, 1995. However, others have questioned its adequacy, especially for complex public programs; see Osborne, Bovaird, Martin, Tricker, and Waterston, 1995. Others have been more critical of the fundamental production model, for example stressing the differential access to information of ruling groups, hence using “political models of performance assessment,” Pollitt, 1993, or organizational models, such as Kotter and Heskett, 1992.
The ratio of inputs to outputs defines efficiency, while the reciprocal ratio, of outputs to inputs, defines productivity.
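In symbols, the two definitions in this footnote can be restated as follows (the notation is illustrative, not taken from the source):

\[
\text{efficiency} \;=\; \frac{\text{inputs}}{\text{outputs}},
\qquad
\text{productivity} \;=\; \frac{\text{outputs}}{\text{inputs}} \;=\; \frac{1}{\text{efficiency}}.
\]

On this convention, a fall in the efficiency ratio (fewer inputs per unit of output) corresponds to a rise in productivity, and vice versa.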
From this brief review of the main concepts and issues in the performance literature, it is perhaps possible to find sympathy for the conclusion “that the notion of performance—often bereft of normative standards, invariably full of ambiguity—is, in theory and practice, both contestable and complex,” Carter et al, 1991, p. 50.
For a full description of the performance management framework and its specific terminology under the Government Performance and Results Act (GPRA), see Groszyk, 2001.
See Kristensen, Groszyk, and Buhler, 2001, p. 1. At the same time, several writers (notably Pollitt, 1993) suggest the emphasis on outputs over outcomes (and the broader emphasis on economy and efficiency rather than effectiveness) may reflect the political interests of a government that is primarily concerned with cost-cutting rather than performance evaluation.
“Australia, Netherlands, and New Zealand began by concentrating on outputs and are now moving to an outcomes approach. Australia and the Netherlands are changing their accounting and budgeting systems to focus on outcomes. France recently passed a law, which requires the production of outputs and outcomes in budget documentation for the majority of programmes,” OECD, 2004, p. 7. It should be noted that while Australian states began with a focus on output, the federal government specifically rejected output budgeting, preferring to move directly to outcomes.
Henderson, 2004, contrasts a straightforward target such as “reduce substantially the mortality rates by 2010 from heart disease by at least 40 percent in people under 75; from cancer by at least 20 percent in people under 75…,” (Department of Health) with, for example, “Improve effectiveness of the U.K. contribution to conflict prevention and management as demonstrated by a reduction in the number of people whose lives are affected by violent conflict and a reduction in potential sources of future conflict…” (FCO, Ministry of Defence, DFID).
Performance measurement is even applicable to internal support services, but the outcomes from internal support occur within the organization, and it is impossible to estimate the impact these services have on outcomes of external services.
The scope for measuring and allocating costs is necessarily limited when, as is typical, government units use cash accounting principles. In the United Kingdom, one of the problems encountered in enforcing accountability in the Next Steps Agencies, created in the late 1980s and early 1990s, was the lack of information on unit costs. This arose both from their lack of commercial-style accounts and from the undeveloped nature of their costing systems. A related problem was that the measure used as the cost object was typically not the ultimate result but an intermediate output measure related to activity. A full discussion is contained in Pendlebury et al., 1994.
It has also been argued that the increasing complexity of government operations has significantly contributed to the problem. Traditional cost accounting was adequate when processes were simple. However, as technology has advanced in scale and complexity, requiring major investments, the cost profile of government organizations has become significantly more complicated. Costs that traditionally had been considered overheads now represent activities critical to the delivery of government services, and it is increasingly difficult to associate these costs directly with individual programs or customers. See Gearhart, 1999.
“Performance indicators are no substitute for the independent, in-depth qualitative examination of the impact of policies which evaluations can provide,” OECD, 2004, p. 15. Due to the costs involved, these necessarily have to be used sparingly and guided by some cost-benefit principle.
As a consequence, it is often recommended that any ongoing monitoring of performance be supplemented with periodic program evaluation or reviews. That is, “an effective performance information system will include an appropriate mix of ongoing performance information and periodic evaluations,” Australian Department of Finance, 1996, p. 3. The latter allow a wider range of information and stakeholder perceptions to be taken into account. From the U.S. perspective, see Joyce, 1993, p. 10.
The balanced scorecard looks at a wide range of measures, including difficult-to-measure factors such as the company’s focus on innovation and learning. For example, Osborne et al., 1995, would include indicators of the process of “learning lessons” and the “empowerment” of communities as an important dimension of performance in social programs, p. 31 ff.
Perhaps not surprisingly, it is often found that the development of performance indicators has been fastest in the least problematic areas where government units have clearly defined functions, and that the problems of measuring performance increase with the complexity of government activities; see Carter et al, 1991.
The PEM literature is replete with examples of poor or dysfunctional performance measures: a tax office measuring the cost per application of revenue collected might be encouraged to leave difficult cases aside; a hospital using “cost per occupied patient bed” could encourage managers to retain patients so that no beds are left unoccupied; rating a school’s performance on examination success rates might lead schools to discourage low performers from enrolling; and so on.
“Performance measurement should aim at developing a limited number of well-chosen, stable measures, so that a track record of an organization’s performance over time can be built up. This does not mean that performance measures are defined once and for all; they may need to be modified to take into account changes in the managerial context and the environment in which the organization exists.” OECD, 1994, p. 14.
“There are limits on how much information decision-makers can make good use of: people have ‘bounded rationality’ and so do organizations,” OECD, 2004, p. 4.
This issue of quality control in performance measurement is dealt with more fully in Wholey, 1999.
See the interesting discussion on the importance of defining objectives as one of the main obstacles found when measuring performance of U.S. federal agencies, CBO, 1993.
The development of an information system to support this central first step is discussed further in Diamond and Khemani, 2004.
During this stage, ministries could also use these performance measures to determine the allocation of up to 10 percent of resources between service areas. It may also be useful to recommend that the percentage of resource allocation subject to satisfactory managerial performance be increased progressively, by at least 5 percent each year. This would send a clear message to managers that performance information is important and relevant in deciding on the allocation of budget resources.
The Public Expenditure and Financial Accountability (PEFA) program is a joint program of the World Bank, European Commission, IMF, development agencies from France, Norway, Switzerland and the United Kingdom, and the Strategic Partnership with Africa (SPA). The program was established in December 2001; its secretariat is based in the World Bank.
While the approach concentrates on a standard set of indicators considered relevant to all country circumstances, it is recognized that a country may add further indicators to meet specific needs.