Abstract

Many emerging economies have completed the first step of the budget reform process by moving away from the traditional emphasis on managers’ stewardship of public resources and on compliance within strict detailed appropriations. As described in Section III, this usually involves implementing some form of program management and budgeting, along the lines of the reforms introduced in OECD countries, where there is greater emphasis on achieving efficient and effective outputs and outcomes and where measures of performance have tended to play a key role.

This section argues that, while it might be tempting for emerging economies to press forward and adopt a full-blown outputs and outcomes performance management framework, there are risks involved. Such a change in orientation is possible only once managers have had adequate experience in refining the definition of programs and objectives and, on that basis, in developing a comprehensive system of performance measurement. Developing such a system requires three critical steps: first, clearly defining how to measure “performance”; second, overcoming a number of technical issues in the design and use of measures of that performance; and third, making performance information relevant for resource allocation decisions. Based on international experience, this section reviews each of these three hurdles in moving toward a performance management framework.

Different Measures of Performance

A general consensus is emerging in OECD countries that performance measurement improves the quality of management by helping officials make better decisions and use resources more effectively.37 Quantifying performance allows managers to monitor changes, identify potential problems, and take timely corrective action. Having data that describes performance in terms of where the agency has been, where it is today, and how it compares with similar agencies allows managers to direct resources to, and officials to support, activities that are successful. Performance measures also influence employee behavior. When employees know the basis on which they will be assessed, they are more likely to meet their performance goals and more likely to find their work rewarding. For government as a whole, it can be argued, performance recognition builds trust and enhances credibility with taxpayers.

In accepting the need for performance measurement, it is important not to underestimate the problems in its implementation. There is general agreement that the critical first step is to define performance. Trying to measure performance demands that organizations clearly articulate their objectives and then translate them into measurable desired results, against which performance will be measured. This process itself can be invaluable—the very act of defining the desired results has enormous power to focus the energies of an organization. Unfortunately, this first step is often quite difficult for government agencies. Performance is system specific and depends ultimately on the goals of the wider budget management system. If the budget management system is a traditional one, performance will be defined by measures of compliance and stewardship. On the other hand, if the budget management system is outcomes focused, with success gauged in terms of impact on society, performance will be defined by measures of the effectiveness of outputs produced. The need to refer to the budget system’s basic orientation when defining performance is illustrated in Figure 1, which also attempts to define some specific performance terminology.

Figure 1. The Concept of Performance in Different Budget Systems

Figure 1 describes in somewhat abstract terms a government “production process.”38 The process commences from the point of acquiring inputs, progresses to the use of these inputs in a production process, and ends at the production of outputs that have an outcome in the sense of meeting some defined objective of government policy. Arising from this process are a range of possible indicators of performance. Traditional budget systems focus on inputs, the amount of resources actually used, usually expressed as the amount of funds or the number of employee years or both. The key concept is economy, or the aggregate control of input costs at the margin. In output-focused budget systems, inputs are related to an agency’s output to produce indicators of efficiency or productivity.39 In outcomes-focused budget systems, an agency’s outputs are related to the achievement of its ultimate goals, producing indicators of effectiveness. In such systems, costs are often compared with the final outcomes to give measures of cost-effectiveness, sometimes termed value-for-money indicators.
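
To make these indicator families concrete, the short sketch below computes each of them for a hypothetical rural clinic program. All names and figures are invented for illustration; the ratio conventions follow footnote 39 (efficiency as input per unit of output, productivity as its reciprocal).

```python
# Illustrative only: the indicator families described above, computed for a
# hypothetical rural clinic program (all figures are invented).

inputs_cost = 2_000_000   # funds actually used (input measure; "economy")
staff_years = 120         # employee-years actually used (input measure)
outputs = 50_000          # patient consultations delivered (output)
outcome = 4_000           # cases of disease prevented (outcome)

# Footnote 39's conventions: efficiency = input per unit of output;
# productivity = output per unit of input.
efficiency = inputs_cost / outputs    # cost per consultation
productivity = outputs / staff_years  # consultations per staff-year

# Effectiveness relates outputs to the ultimate goal; cost-effectiveness
# ("value for money") relates total cost to the final outcome.
effectiveness = outcome / outputs           # cases prevented per consultation
cost_effectiveness = inputs_cost / outcome  # cost per case prevented

print(f"efficiency:         {efficiency:.2f} per consultation")
print(f"productivity:       {productivity:.0f} consultations per staff-year")
print(f"effectiveness:      {effectiveness:.3f} cases prevented per consultation")
print(f"cost-effectiveness: {cost_effectiveness:.2f} per case prevented")
```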

In performance budgeting, therefore, there are a number of ways in which spending can fail to meet expected performance, and in order to specify the cure for such performance failure, it is important to differentiate its source. Four types of performance failure can be identified:

  • Technical inefficiency: Resources are not being employed in the technically best way to produce a given output or service level. The attainment of technical efficiency implies it is impossible to reduce the physical level of any input without reducing the level of output. The failure to attain the maximum possible output level with given inputs has been referred to by Leibenstein (1966, p. 394) as X-inefficiency, and must be resolved by improving and changing the internal functioning of organizations and the units that comprise them.

  • Economic inefficiency: Resources are not being employed in the most economically efficient way, so that a higher return in the form of a higher provision of service can be obtained without increasing costs by switching spending between resources. The attainment of economic or allocative efficiency implies that it is impossible to substitute one input for another without increasing total costs of a given output or service level.

  • Technical ineffectiveness: Expenditures are not effective in the sense that although resources are allocated efficiently (both in a technical and an economic sense) to provide a certain service, the service itself does not satisfy the objectives it was designed to meet.

  • Economic ineffectiveness: Expenditures can be efficient (in the sense that resources are allocated to produce the maximum output of a certain service at least cost) and effective (in the sense that the output has the desired outcome), but overall effectiveness in the use of public resources could be increased by cutting some expenditures and reallocating the resources to other services, i.e., becoming more allocatively effective.

Some of these performance concepts are illustrated in Figure 2, where curve HH′ shows all the different combinations of labor and other inputs that produce a given output of health service, say, rural disease prevention. The line AB shows the available budget for this service, which can buy OA units of labor, or OB units of other inputs, or any combination along the line AB. The slope of AB reflects the relative prices of labor and other inputs. If service delivery is operating at point F, it is technically inefficient: all points above HH′ use more of at least one input than is required. Technical efficiency can be attained by eliminating this waste to reach point D. However, even though service delivery at point D is technically efficient, it may not be economically efficient, since costs can be reduced by changing the mix of labor and other inputs to reach point E. With this combination of labor and other inputs, it is possible to deliver the same service at least cost.

Figure 2. Technical and Economic Efficiency
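
A stylized numerical version of Figure 2 may help fix ideas. The sketch below assumes a Cobb-Douglas service isoquant and invented input prices (both are illustrative assumptions, not part of the original figure) and computes the cost of operating at points corresponding to F, D, and E.

```python
# Stylized numerical version of Figure 2 (functional form and all numbers
# are assumptions). A service level q requires labor L and other inputs K
# along the isoquant HH': q = L**a * K**(1 - a).

a = 0.5              # assumed output elasticity of labor
q = 100.0            # required service level (the isoquant)
w_L, w_K = 2.0, 1.0  # assumed unit prices of labor and other inputs

def cost(L, K):
    return w_L * L + w_K * K

def K_on_isoquant(L):
    # Minimum K that still delivers q for a given L: solve q = L**a * K**(1-a).
    return (q / L**a) ** (1.0 / (1.0 - a))

# Point F: technically inefficient -- more of both inputs than required.
L_F, K_F = 120.0, 250.0

# Point D: technically efficient (on the isoquant) but not cost-minimizing.
L_D, K_D = 120.0, K_on_isoquant(120.0)

# Point E: economically efficient -- the cost-minimizing mix, where the
# marginal rate of technical substitution equals the price ratio w_L / w_K.
L_E = q * (a * w_K / ((1 - a) * w_L)) ** (1 - a)
K_E = K_on_isoquant(L_E)

for name, L, K in [("F", L_F, K_F), ("D", L_D, K_D), ("E", L_E, K_E)]:
    print(f"point {name}: L = {L:6.1f}, K = {K:6.1f}, cost = {cost(L, K):6.1f}")
# Costs fall from F (waste) to D (technical efficiency) to E (least cost).
```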

Suppose the objective is to prevent disease in rural areas by providing the services of rural health clinics. While the service delivered may be technically and economically efficient at point E, it may not be technically effective in the sense it may not actually prevent disease in rural areas. For example, the rural health clinics may not be located within easy access of disease-ridden areas. This would be a technical flaw, to be resolved by health specialists and not economists. However, even if service delivery were technically effective in eliminating disease in the rural population, this does not mean it is economically effective. It may be possible, for example, to attain the same level of disease prevention in a more cost-effective way, through less costly improvements to the rural water supply. In this case, the government would attain better value for its money.40

There are variations in the extent to which different dimensions of performance can be improved through better systems of budget management. As indicated, technical inefficiency is symptomatic of more general shortcomings in the internal organization and procedures of the producing unit. This type of inefficiency, Leibenstein argues, is likely to be the greatest source of inefficiency, particularly in the public sector. It is also the one most amenable to improved budget management. To some extent, improved budget procedures can also promote economic efficiency. At the same time, the ability of improved budget management to compensate for a lack of effectiveness of expenditure programs is necessarily circumscribed. The task of linking resource allocations to objectives clearly falls in the realm of policy, and the role of improved budget management tends to be supportive—to facilitate and improve the process of making such policy choices through greater transparency and the provision of timely and relevant performance information.

Defining a Framework for Performance Measurement

Moving from the above, rather schematic description of the dimensions of performance to the practical problem of developing a framework for measuring performance in government requires addressing some key issues, which are discussed in turn: output versus outcome; process indicators; the quality dimension; and relating performance to inputs.

Output Versus Outcome

There is an ongoing debate about the value of focusing on outputs or outcomes (i.e., the impact of the outputs), and the term performance budgeting is used to refer to both output- and outcome-focused budget systems. The desired properties of these concepts are described in Boxes 16 and 17, respectively. The debate over output versus outcome tends to be at two levels. First, there are the evident practical measurement problems. Many outcomes are difficult to measure directly (e.g., greater national security) or are complex, for example, involving interlinkages among a number of different programs and subprograms (e.g., lower morbidity rates). Second, at a higher level, there is controversy over what managers should be held accountable for. On practical grounds, an output is generally under an agency’s control, while an ultimate outcome is often determined by external factors that are unpredictable. Also, observed outcomes can be interpreted in different ways and can be considered to include side effects, whether intended or not. Thus the Office of Management and Budget (OMB) in the United States focuses on intended outcomes,41 while other countries such as Australia report ex post on the total outcome, intended and unintended. This has led some to differentiate outcome, which is defined by intended effects, from impact, which includes the total effect of the agency’s actions. Others differentiate among different levels of outcome or, like Hatry (1999), differentiate between intermediate and end outcomes (pp. 15ff). For example, some countries including the United States and the United Kingdom make a distinction between different levels of outcomes depending on the time horizon.

Box 16. Desirable Properties of Outputs

Outputs are generally, at least to some extent, under an agency’s control. To be effective for performance budgeting, outputs to be measured should—

  • be a good or service provided to individuals/organizations external to the agency.

  • be clearly identified and described.

  • be for final use and not for an internal process or an intermediate output.

  • contribute to the achievement of planned outcomes.

  • be under the control (directly or indirectly) of the agency.

  • generate information on attributes of performance—price, quantity, and quality.

  • generate information that is a basis for performance comparisons over time or with other actual or potential providers.

Example:

Output: policy advice

Performance measure: briefs or submissions prepared

Quantity: number

Quality: satisfaction of minister and staff; other assurance tests

Box 17. Desirable Properties of Outcomes

Outcomes are generally complex, are determined by a number of factors, are difficult to measure, and involve elements outside an agency’s control. To be effective for performance budgeting, outcomes to be measured should—

  • adequately reflect the government’s objectives and priorities.

  • be indicated by the impact on the community.

  • be differentiated from the agency’s strategies to which they contribute.

  • clearly identify the target groups, if so focused.

  • be achievable in the specified time frame.

  • be suitable for monitoring and progress assessment.

  • be the result of an identifiable causal link with the agency’s output.

  • be clearly defined and described so as to be easily reported externally.

Example:

Outcome: Ensuring young homeless people have access to appropriate accommodation.

Performance target: 90 percent of young homeless people have access to appropriate accommodation within 24 hours.

Operationalized by a clearly identified target group, a definition of “appropriate accommodation,” and a causal link between agency action (such as assistance through a subsidy) and the outcome.

Although performance in government is clearly subject to different interpretations, it does appear that in the OECD countries there is increasing recognition that the output approach has limitations and may deflect the attention of agencies away from the impact of their programs. From the government viewpoint, impact is the more relevant concept.42 As a consequence, more and more OECD member countries are adopting an outcome-focused approach, though many preceded this move with extensive use of output measures.43

In preparing the budget, it is necessary to define and agree on performance targets for each output measure. These should be chosen selectively, with priority on clarity of interpretation, so that they provide an undisputed indicator of the degree of output (and even outcome) achieved in any time period. The approach is to specify the output as a rate and the frequency with which the target will be met. Additionally, it is useful to set out information on how each target will be estimated, perhaps based on actual performance data for previous periods, as well as to identify the implications for the cost of the output if there is a significant divergence between expected and actual target achievement. The latter provides a platform from which the operating unit can analyze and explain any variances in performance.44 The United Kingdom, in designing its Public Service Agreements (see Box 20), has emphasized the importance of setting meaningful targets alongside agencies’ budgets. This, it is argued, assists managers in improving their agency’s performance, allows the center to identify policies and processes that work, and enhances public accountability (Hill, 2004). To use the prevalent acronym, these targets should be “SMART”: specific, measurable, achievable, relevant, and timed. However, with the shift in focus from outputs to outcomes, the technical problems of measurement have increased, and setting targets for some outcomes has become inherently more difficult.45
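
As an illustration of how such a target might be recorded and checked against outturns, the sketch below uses invented field names and figures; it is not a prescribed format.

```python
# A minimal sketch of a SMART output target record and its variance check
# (field names, the target, and the outturn are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class OutputTarget:
    description: str    # specific: what is to be delivered
    unit: str           # measurable: the unit of the rate
    target_rate: float  # achievable/relevant: the agreed rate
    frequency: str      # timed: how often achievement is assessed

    def variance_pct(self, actual_rate: float) -> float:
        """Percentage divergence of actual from target -- the platform from
        which the operating unit analyzes and explains its performance."""
        return 100.0 * (actual_rate - self.target_rate) / self.target_rate

target = OutputTarget(
    description="Immunization coverage of children under five",
    unit="percent of cohort immunized",
    target_rate=90.0,
    frequency="quarterly",
)

actual = 84.5  # assumed reported outturn for the quarter
print(f"{target.description}: variance {target.variance_pct(actual):+.1f}%")
```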

Process Indicators

It is also important not to neglect the process that creates outputs and outcomes. Indicators of performance such as workload, throughput, and work rate give important measures of the technical efficiency of an agency’s operations. Workload indicators include the amount of work that comes into a program and work in progress not yet completed.46 While amounts of work are not by themselves outputs or outcomes, workload data can be used to produce output data: for example, the amount of work not completed can serve as a proxy for delays in service to customers, a quality dimension of output (discussed below). There are some activities for which it is difficult to measure outputs or outcomes but for which process-related measures may be useful in assessing performance. The major gains in budget system reform often come from improving processes and eliminating the so-called invisible X-inefficiency in government operations.

The Quality Dimension

Quality is an easily neglected dimension of the efficiency of service provision: where efficiency is measured simply by the ratio of outputs to inputs, the measure can be “improved” by reducing the quality of the output. If outcomes are tracked, a more accurate indicator of efficiency becomes possible by including a quality dimension, which can be broken down into various characteristics, as shown in Box 18. Some regard quality characteristics as an output; others (Hatry, 1999) designate them as a special type of intermediate outcome, since quality characteristics indicate how well the service was delivered but do not indicate the results of the service after delivery. All agree, however, that quality of service is an important dimension to measure and to track, although the mode of tracking is often disputed. Some governments have questioned the value of internal measures of quality (e.g., Denmark) and have emphasized instead client surveys of the level of service satisfaction. Both approaches are important: client surveys certainly provide an important “reality check” on formal indicators of quality that are generated internally. Moreover, an important question remains: since there are many dimensions to quality, who decides which qualitative characteristics are important for a particular service?
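
A short calculation with invented numbers illustrates the point: cutting quality flatters a raw output-to-input ratio, while a quality-adjusted ratio correctly deteriorates.

```python
# Illustrative only: a raw efficiency ratio "improves" when quality is cut,
# while a quality-adjusted ratio falls. All figures are invented.

budget = 1_000_000.0

# Baseline year: 20,000 service units, 95% meeting the quality standard.
units_0, quality_0 = 20_000, 0.95
# Next year: more units squeezed from the same budget by cutting quality.
units_1, quality_1 = 23_000, 0.70

raw_0, raw_1 = units_0 / budget, units_1 / budget
adj_0, adj_1 = units_0 * quality_0 / budget, units_1 * quality_1 / budget

print(f"raw efficiency:   {raw_0:.4f} -> {raw_1:.4f} (appears to improve)")
print(f"quality-adjusted: {adj_0:.4f} -> {adj_1:.4f} (actually deteriorates)")
```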

Relating Performance to Inputs

Ultimately, resource allocation decisions require that output and outcome measures be related to costs. This is often difficult for four reasons. First, accounting systems are not set up to bring cost data together on a program basis, so data on inputs are not readily available on a timely basis.47 In emerging economies, especially transitional economies, this is often aggravated by the fact that cost accounting systems are not developed and there is little expertise to carry out this work.48 Second, even where data exist, the work is not straightforward: allocating costs to programs can be a major problem in agencies with large central overhead budgets and for activities involving delivery of more than one service.49

Box 18. Typical Service Quality Characteristics

  • Timeliness in service provision.

  • Accessibility and convenience.

  • Accuracy of the assistance.

  • Courtesy in service delivery.

  • Adequacy of information dissemination to potential users.

  • Condition and safety of facilities.

  • Customer satisfaction.

Source: Hatry (1999, p. 17).

Box 19. OECD Practices: Use of Performance Information

[Table not reproduced.]
Source: OECD (2003, Tables 5.4.a, 5.4.b).

Third, there may be limitations in the accounting system, especially if it is cash based, that prevent the full costs of capital from being recorded (i.e., if depreciation is not included).50 Fourth, there is a need to focus on marginal rather than average cost, and this is typically more difficult to measure. Confusing average efficiency with marginal efficiency may cause extra resources to be allocated to programs with proven average efficiency but for which additional resources will have little impact on output. These practical costing problems are important because feasible performance measures are those that can be established at reasonable cost, which may well depend on the ready availability of reliable cost data. However, despite all such concerns, it is clear that the OECD countries have placed considerable emphasis on performance information in their budget processes (Box 19).
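
The fourth point, confusing average with marginal efficiency, can be shown with two invented programs: the one with the better average return is not the one where an extra allocation buys the most additional output.

```python
# Illustrative only: average versus marginal efficiency. Program A has the
# better average, but an extra allocation buys more output in program B.
# All figures are invented.

extra = 100_000  # additional budget being considered

programs = {
    # name: (current budget, current output, output if budget rises by extra)
    "A": (1_000_000, 50_000, 50_500),  # high average, nearly saturated
    "B": (1_000_000, 30_000, 33_000),  # lower average, high marginal return
}

for name, (budget, output, output_plus) in programs.items():
    average = output / budget                  # output per unit of budget
    marginal = (output_plus - output) / extra  # extra output per extra unit
    print(f"program {name}: average {average:.4f}, marginal {marginal:.4f}")
# Ranking by average efficiency favors A; ranking by marginal return favors B,
# which is the relevant comparison when allocating the extra resources.
```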

Experience in Using Performance Measures

Performance measurement is a key tool in the process of improving the delivery of public services, but it should be viewed not as an end in itself but as part of a wider performance management reform process. Performance measurement has become so popular that it has tended to lead the reform. Many benefits flow from the mere act of trying to measure performance, but to be fully effective, performance measurement must be integrated into a performance management system. Failure to move from performance measurement to performance management is cause for concern because of the dangers of overreliance on performance measures; inappropriate measures; misuse and misinterpretation; and information overload and lack of selectivity.

The Danger of Overreliance on Performance Measures

Performance measurement has at least three limitations.51 First, performance data do not by themselves tell why the outcomes occurred—that is, the extent to which the program caused the measured results—because there is unlikely to be any practical counterfactual. A performance measure assumes some causality between a program’s activities and its outcomes, but one which can only be confirmed ex post through a properly designed evaluation study.52 Second, some outcomes cannot be measured directly. The most obvious example is success in preventing undesirable events. Third, the information provided by performance measures is just part of the information that is needed by managers and elected officials to make resource allocation decisions. Performance information may provide a clearer understanding of what is expected and what has been accomplished, but other supporting information is required. For example, the costs of providing goods and services by alternative approaches can be derived from agency financial and resource information or from benchmarked cost data.

In sum, it might be best to view performance measurement as necessary but not sufficient to determine what should be done. This implies that managers cannot rely solely on performance measurement for decision making.53 It is interesting to note that the private sector is moving away from a narrow focus on outcomes measurement toward a broader approach, as evidenced in the “balanced scorecard” (Kaplan and Norton, 1996) which has also attracted some interest within OECD governments.54

The Danger of Inappropriate Measures

Measuring performance is not an easy job, and it gets even more difficult as measurement moves from program outputs to program outcomes. In this paper, there has been a deliberate use of the term performance measurement rather than performance indicator. When dealing with processes and outputs, there is greater assurance over consistency in the interpretation of the measures of performance. When one deals with outcomes, however, direct measures are often difficult, hence the measures can only indicate the outcome rather than directly measure it. Often, it takes more than one indicator to adequately capture an outcome, further complicating interpretation.55 Not surprisingly, performance indicators tend to be subject to a greater degree of interpretation and hence are more open to abuse as well as to the danger of being inappropriate for the purpose at hand. A concern about the use of performance measures is that the measures displace the actual outcomes to become an agency’s objectives. This “goal displacement” can often lead to emphasizing the wrong activities (e.g., those that are easier to measure) and to an agency’s energies being spent in improving “the numbers” without improving actual outcomes.56 An obvious example is the usual bias of focusing performance measurement on the short term, for ease of measurement, despite the distinct danger that short-run beneficial outcomes may result in increased costs over the longer run, with an undesirable shifting of costs into the future.

The Danger of Misuse and Misinterpretation

Performance measures require careful interpretation by those with adequate knowledge of the different factors that affect the measures. This need for interpretation brings its own dangers. No matter how clearly defined, performance indicators are invariably recorded and interpreted in very different ways by different people. Moreover, when people feel the future of a program is dependent on an indicator, a positive bias comes into play—they inevitably interpret the indicator in the way that is most favorable to the agency. Perhaps of more concern is the danger that performance measurement is introduced without the managers’ commitment to the new performance management system, which often occurs when such systems are imposed from above or from the outside (e.g., by the legislature). Experience has shown that when managers and staff are required to collect and compile performance measures that they deem inappropriate or of little use to the agency, they become cynical about performance measurement and more likely to seek ways to use it to their own ends. Often, the greater the pressure from above to use performance measures, the higher the stakes become and the greater the incentives for people to identify self-serving performance indicators as well as to report misleading performance data. Hence the need to provide managers with incentives to buy in to the new performance measurement system, for example, by offering increased managerial flexibility.

The Danger of Information Overload and Lack of Selectivity

Performance indicators can be irrelevant for decision making, even if they are accurate. The essence of performance measurement is to reduce the multidimensional aspects of the program’s outcome to a few measurable indicators.57 This often leads to oversimplification and to counterpressure to employ multiple measures of performance. However, from a decision-making viewpoint, this may not be a solution—more data makes decisions more informed but not necessarily easier.58 This may also result in confusion among different types of indicators and their use. For example, the use of a weighted average of different measures may lead to misleading aggregate indicators as critical subgroup differences are lost in the aggregation process.

Guidelines for Using Performance Measures

How to guard against the above dangers? The experience of practitioners in performance measurement points to some guidelines.

  • Aim for clarity in purpose. It is necessary to be sure about who will use the performance information, how they will use it, and why they will use it. The potential users range from the program manager, to the central evaluator in the MoF, to the member of a legislative watchdog committee, to the final consumer of the government service. Each of these potential users has different objectives, for example, making better resource allocation decisions, improving services, increasing accountability. Measures should only be chosen after users are identified and their objectives properly defined. The ultimate aim should be to help the users reach more informed conclusions and to make better decisions about service performance.

  • Focus on core information. Once the purpose has been clarified, there should be a conscious effort to prioritize among different performance measures to avoid information overload. It is usually recommended that in the first instance the focus should be on the priorities of the entity, i.e., its core objectives and service areas. Often it takes substantial time and effort to establish success criteria for these priorities and objectives, even before widening the scope of performance measurement.

  • Align with the practical needs of the agency. Make performance information as relevant as possible to the agency. This requires that performance measurement be aligned with the objective-setting procedures and the performance review process of the agency. In this way, those who collect and process the data can appreciate and relate its use to the agency’s procedures. Of course, this does not mean ignoring the multiple uses of performance information; there should be links among the performance measures used at an operational level, those used by the agency’s management, and those used by higher-level government officials, say in the MoF.

  • Balance the perspective on performance. Recognize that different activities create different needs for performance measurement. Narrow program activities perhaps can be covered by a very few measures, but complex program activities may need a wide range of performance measures balanced over the different aspects of the services. It may be desirable to have a mix of internally generated and external measures. Obviously, meeting this need may involve trade-offs with other criteria, such as focus and alignment.

  • Review the performance measures. Selected performance measures should be regularly reviewed and updated to reflect changes in priorities and shifts in the focus of public policy over time. Agency objectives change, priorities among objectives change, different users emerge, and hence the required balance of performance information necessarily changes. A performance measurement system that is static is not likely to fulfill its function as part of a performance management system.

  • Ensure the robustness of basic information. Perhaps most important, performance measures should be based on reliable and timely data. The basic raw data should be robust, in the sense of being derived in a way that is verifiable, free from bias, and preferably comparable over time and among different organizations. Data should also be capable of being derived in a time frame compatible with the needs of decision makers.

Box 20. The U.K. Government’s Public Service Agreement (PSA) Framework

Beginning in 1998, along with other reforms, PSAs set measurable targets for a full range of government objectives. Departmental PSAs include targets on inputs, outputs, and outcomes. Since their inception, the number of targets has been substantially reduced, from 387 in 1998 to 130 in 2002, and a progressively higher proportion have been outcome focused.

Objectives of the PSA

  • Explain what departments plan to deliver.

  • Set out national targets.

  • Reflect the government’s key priorities in terms of outcomes.

  • Represent agreement between the government and the public.

Five features

  • An aim that sets out at a high level the role of the department.

  • Objectives that set out in broad terms what the department intends to achieve.

  • Performance targets that set clear outcome-focused goals for priority objectives.

  • A value-for-money target for each department, focused on improving cost-effectiveness.

  • A statement of who is responsible for the delivery of these targets (usually the relevant secretary of state).

Supplemented by

  • Technical Notes: detailed documents that set out how the PSA targets are defined, measured, and interpreted.

  • Service Delivery Agreements (SDAs): introduced in 2000, giving a high-level summary of each department’s plans for delivering its targets by 2004; replaced by Service Delivery Plans in 2003. These more detailed department plans focus on the actions required to meet targets, the role of key delivery agents, the resources required, risks, how interim progress is to be measured and monitored, and strategies with forward projections of progress.

  • Department Reports: Accountability is enforced by departments reporting annually in the spring on their spending and performance against PSA targets. Recently, Autumn Performance Reports were also introduced. A Web site has been created as a single portal to all department performance documents.

Source: H.M. Treasury (2004).

Establishing a Performance Information System

If performance measurement is to be more than simply an add-on to the traditional budget process, it should become an essential part of the process of performance budgeting. In turn, performance budgeting must be viewed as an integrated method of allocating resources. The central idea is to link resource allocation explicitly to outputs or outcomes—performance—and this should be recognized as involving more than just budgeting. Rather, it encompasses the entire budget management process in supporting and demanding better performance: budget planning, budget execution, reporting, and auditing. In this way, performance measurement must be fully integrated into a performance management system, which in turn entails establishing a performance information system. Box 20 outlines the U.K. experience with its Public Service Agreements, which highlights the importance of such a system.

An effective performance information system requires that managers develop their performance information to ensure the following management needs are met:59

  • Performance measures should be clearly linked to intended objectives and should enable ready assessment of performance in terms of effectiveness, efficiency, and service quality. Needless to say, they must be feasible, accurate, and derived in a cost-effective way.

  • As far as possible, a core set of performance data should be identified for routine collection. The goals should be to minimize the cost of performance management and ensure that performance information is relevant and understandable to the organization and its stakeholders.

  • The continued appropriateness of performance information should be regularly assessed, considering such factors as relevance, cost, value, and usefulness to decision makers.

  • Responsibilities for performance measurement and reporting need to be clearly defined and understood, including whether services are delivered by the agency, outsourced, or produced by other third-party arrangements.

  • Monitoring and periodic evaluation of performance should be balanced with other operating demands and used appropriately. It should be expected that performance will be monitored on a continuing basis and complemented by periodic evaluation, generally at least every five years and preferably every three years.

  • Performance management activities should be integrated into business planning processes that can be fully supported by the performance management information system.

Two requirements of such an information system are worth emphasizing: the need for a reporting structure and the need to ensure data quality.

Reporting Structure

Performance management requires not just measuring performance but reporting on it. From the outset, there is a need to consider the structure of the data capture and the ultimate presentation of the data as management information to different levels of users who can be expected to have different reporting needs. Unfortunately, performance measures are often designed and used by agency managers to meet their own needs, without regard for the information needs of other users, such as citizens or their elected representatives. To encourage accountability and better governance, citizen participation is to be encouraged. Lack of citizen involvement can undermine the value of performance measurement by minimizing its importance in the eyes of elected officials.

Some general principles about reporting on performance are indicated in Box 21. The performance measures should be reported regularly and consistently and should allow performance to be measured against the standards chosen. This means data could cover: actual performance compared to the plan, actual performance compared to a predetermined standard, actual performance compared to actual performance of peers, or actual performance compared to performance in previous periods. There are great benefits in benchmarking in this way, especially in allowing external comparisons to some predetermined standard of good practice or to a relative benchmark such as peer group activity.
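
A minimal sketch of these four comparison bases, with invented data:

```python
# Illustrative only: one actual figure compared against the four baselines
# described above (all numbers are invented).

actual = 8_200  # cases processed this year

baselines = {
    "plan":            9_000,  # actual vs. the plan
    "standard":        8_500,  # actual vs. a predetermined standard
    "peer average":    7_900,  # actual vs. actual performance of peers
    "previous period": 7_600,  # actual vs. own performance last period
}

for basis, benchmark in baselines.items():
    gap = 100.0 * (actual - benchmark) / benchmark
    print(f"vs {basis:<16s}: {gap:+6.1f}%")
```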

Box 21. Key Principles for Reporting Performance Information

  • Be Open: Feed back the results and explain the reasons for collection of data and the use(s) to which it will be put.

  • Be Selective: Do not report all measures to everyone, so as to avoid overloading users with information that is not relevant to them.

  • Be Focused: When a specific issue is under review, it is necessary to report only the measures relevant to that issue.

  • Be Proactive: Take action to indicate when a response or an action is required as a result of the information being provided.

  • Be Pragmatic: Concentrate on what can be influenced.

  • Be Reasonable: Take or suggest action that is reasonable in the circumstances.

A System of Data Verification

Performance measurement must be credible. It is not enough to introduce performance measurement without the added assurance of internal controls and data verification.60 When elected officials and administrators trust the reliability of performance information, they are more likely to use that information for decision-making purposes. The value of performance information is undermined if its accuracy cannot be verified. Given the need, who will carry out this data verification? The most obvious candidates are internal audit staffs or external auditors, and typically a cooperative effort is recommended. This verification should be performed at least annually on a rotational basis so that all programs are covered in, say, a three-year cycle.
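
As an illustration of such a rotation, the sketch below assigns each program a verification year so that roughly one-third of programs are verified annually and all are covered within a three-year cycle; program identifiers and the start year are placeholders.

```python
# Sketch of a rotational verification schedule: one-third of programs are
# verified each year, so every program is covered in a three-year cycle.
# Program identifiers and the start year are placeholders.

programs = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08", "P09"]
cycle_years = 3
start_year = 2025

schedule: dict[int, list[str]] = {}
for i, program in enumerate(programs):
    year = start_year + i % cycle_years
    schedule.setdefault(year, []).append(program)

for year in sorted(schedule):
    print(year, ", ".join(schedule[year]))
```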

Developing a Performance Management System

It takes time to introduce a comprehensive system of performance management. The ultimate objective is to put in place a system to match costs with activities, to measure performance of these activities, to develop standards of performance, and to compare costs and performance levels with those standards. The challenge of this approach is to link performance information to the budget process and the allocation of resources. International experience shows that until this connection occurs, performance can be merely a regular reporting requirement, not directly relevant to day-to-day management and budgeting.

How to develop such a performance management system? Six key steps are described here, all of which require developing and strengthening performance measurement capacity.

  1. Clarify the objectives of programs and hence better define how their performance will be judged.61

  2. Provide a stronger link between inputs into the production process and monitoring the achievement of outcomes.

  3. Present information on planned performance (usually in budget documents) and actual performance (in documents such as annual reports) on a consistent basis.

  4. Make performance information relevant in resource allocation decisions.

  5. Provide incentives for managers to use performance measures in making day-to-day management decisions.

  6. Monitor the performance of management in reaching their program objectives to hold them accountable and to allow for corrective action.

Step One: Improve the Definition of Programs and Their Objectives

As argued in Section III, well-defined programs must be anchored in a strategic plan, which should incorporate the aspects indicated in Box 22 and should be formulated for implementing units.

The term of the strategic plan should be at least five years, consistent with an MTBF, and should be periodically updated. As indicated in Section III, annual implementation plans, or operating plans, should be based on the strategic plan to provide a direct link between the long-term goals of the strategic plan and the goals identified in budgets and to serve as a point of reference for annual progress evaluations. The main features of an operating plan are described in Box 23. In the initial period of a performance management framework, there may be a role for a central advice unit to assist agencies in making the required changes outlined here.

Step Two: Provide a Stronger Link between Budgetary Inputs and Program Outcomes

Since the budget remains the chief motivating force in determining the allocation of resources to meet the objectives of program management, the performance targets should be matched with full annual budgetary costs. This provides not just the information but the incentives for budget managers to make efficient tradeoffs in allocating resources to meet the program targets.

Box 22. Elements of the Strategic Plan

  • Comprehensive description of the agency’s mission, including the organization’s main functions and operations.

  • General or strategic goals and objectives for the organization’s main functions and operations.

  • Description of the guidelines to be followed to attain the goals and objectives.

  • Identification of external factors crucial to the organization which are beyond its control and which could have a significant impact on the attainment of general goals and objectives.

  • Description of program evaluation.

  • Procedures used to define or revise general goals and objectives.

Each agency should be required to identify the linkages between the financial and nonfinancial inputs to its production of goods and services and the outcomes it has identified, or is in the process of identifying, in redefining its programs. Examining the outcomes associated with programs in isolation from the direct inputs required for their daily operation reduces the relevance of performance measures. Agencies do not necessarily have to identify all the outputs of the production process up front; this will come later as managers gain more experience in measuring performance.

The linking of inputs to program outcomes should be a joint exercise of the MoF and the staff of the relevant ministry, because only the latter have the detailed knowledge to provide this essential connection. The outcomes associated with the programs should then be agreed with the head of that ministry.

Box 23. The Operating Plan

The operating plan should be formulated annually and should include the following:

  • Operating objectives defining the targeted level of program implementation, i.e., outputs.

  • Output goals expressed in an objective, quantifiable, and measurable manner.

  • Description of how annual goals or operating objectives will relate to the general goals of the strategic plan.

  • Indicators for use in assessing the value of relevant products, levels of service, and the results of each program activity.

  • Bases for comparing program results with established implementation targets.

  • Methods to be used in checking and validating the measurements obtained.

Step Three: Make Performance Information Relevant

It is common to find program structures with defined measures of performance that have little impact on the allocation of resources. Goals or objectives may be set, but this often occurs within the service ministry, after the budget has been passed. As a result, there is little impact on the allocation of resources among major functional areas of the budget, such as education and health. Furthermore, goals or objectives within a ministry often only impact the allocation of resources at the margin, with the bulk of resources allocated without regard to performance.

To force a break with this approach, a phased introduction of the new system is recommended. In the first stage, major ministries and service agencies should provide an agreed set of outcomes of their programs six months prior to the processing of the budget, with clear linkages to the associated financial and nonfinancial inputs. This allows the MoF to examine the total level of resourcing in light of the proposed outcomes of the programs and to feed that information into the formulation of the budget. Although these data may not have a significant impact on the determination of the budget in the absence of associated performance measures, this is an important transitory phase. In the next stage, all agencies should provide similar information six months prior to the budget, together with a full set of performance measures that clearly shows how each agency has performed during the past 12 months. This would be the first point at which the budget would reflect the relative allocation among ministries and service agencies and would be directly affected by their performance.62
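
Footnote 62 suggests starting with up to 10 percent of resourcing allocated on the basis of performance and raising that share progressively. The sketch below shows one way such a phase-in might be computed; the scoring rule and all figures are assumptions for illustration, not a prescribed formula.

```python
# Illustrative phase-in of performance-linked allocation (see footnote 62).
# The scoring rule, shares, and budgets are assumptions, not a prescription.

def allocation(base_budget: float, score: float, avg_score: float,
               perf_share: float) -> float:
    """Split the budget into a guaranteed part and a performance-linked part
    scaled by the agency's score relative to the average score."""
    guaranteed = base_budget * (1.0 - perf_share)
    performance_linked = base_budget * perf_share * (score / avg_score)
    return guaranteed + performance_linked

base = 10_000_000.0
for year, share in enumerate([0.10, 0.15, 0.20], start=1):  # rising share
    strong = allocation(base, score=1.2, avg_score=1.0, perf_share=share)
    weak = allocation(base, score=0.8, avg_score=1.0, perf_share=share)
    print(f"year {year} (share {share:.0%}): "
          f"strong performer {strong:,.0f}, weak performer {weak:,.0f}")
```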

Step Four: Present Performance Information on a Consistent Basis

The process of publicly releasing performance information six months prior to the budget must be comprehensive, so that all agencies release information on a full set of programs and associated outcomes, with clear linkages between inputs and performance indicators. The central budget documentation should include a summary of performance information published by agencies and a clear indication of where resourcing has been modified based on good or poor performance. It is recommended that the publication of such information be an annual process to raise the accountability of agencies and their managers for the efficient and effective use of public funds. This accountability should not, however, be restricted to an annual reporting process. All documents released publicly by agencies about their operations should include performance measures from the date they begin to be used. Alongside this regular reporting, the results of periodic evaluation should also be published. Indeed, it is recommended that all agencies conduct periodic reviews of their programs, as previously indicated, preferably so that all programs are reviewed every three years.

Step Five: Provide Incentives for Managers to Use Performance Information

To clearly establish performance targets as a permanent feature of public sector management, a process should be put in place for concluding performance agreements with agencies. Agencies’ performance agreements should be included in their business plans, commencing with the major ministries and service agencies and then progressively extending to all agencies. However, incentives do not work if they are purely negative, for example, budget cuts. There must also be an appropriate set of rewards for good managers. Those managers who are innovative in the provision of goods and services and at the same time provide savings to the budget should be identified and appropriately compensated. This may take the form of allowing them more flexibility in decision making. Initiatives in the introduction of performance-based pay should also be encouraged. Employees should be given the opportunity to enter into workplace agreements with management, whereby performance bonuses are paid for an agreed level of performance or the achievement of operational or strategic objectives. This could extend to individual employees, reducing their base rate of pay and replacing it with performance-based pay.

Step Six: Develop a System to Monitor Program Management

In this new performance management system, the MoF should define the general features of the monitoring system to be adopted by the units and service agencies that manage budget programs and should also outline the information to be reported periodically. Such data include progress in attaining objectives; costs incurred; and the most significant physical and financial discrepancies vis-à-vis indicator-based estimates. There should also be an explanation and examination of the causes of discrepancies, classified as internal if attributable to management, or external in all other cases. Ideally, this should be accompanied by a description of corrective measures to be adopted or proposed, as appropriate. Performance measures should have clearly defined characteristics to ensure the provision of sufficient and relevant information for taking management decisions. Some current OECD practices in the use of performance measurement in budget decision making are summarized in Box 24.
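
As a sketch of the kind of monitoring record this implies, with assumed field names and illustrative data:

```python
# Minimal sketch of a periodic monitoring record of the kind described above
# (field names and data are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class MonitoringRecord:
    program: str
    indicator: str
    estimate: float          # indicator-based estimate
    actual: float            # reported outturn
    cause: str               # explanation of the discrepancy
    internal: bool           # True if attributable to management
    corrective_measure: str  # measure adopted or proposed

    def discrepancy(self) -> float:
        return self.actual - self.estimate

record = MonitoringRecord(
    program="Rural disease prevention",
    indicator="clinics operational",
    estimate=120,
    actual=104,
    cause="delayed procurement of equipment",
    internal=True,
    corrective_measure="bring procurement forward to the first quarter",
)

kind = "internal" if record.internal else "external"
print(f"{record.indicator}: discrepancy {record.discrepancy():+.0f} ({kind})")
```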

Box 24. OECD Practices: Performance Management in Government

[Table not reproduced.]
Source: OECD (2003, Table 5.4.c).

Concluding Remarks

These steps toward a performance management system are directed to setting up a clear accountability framework for budget managers that will lie at the heart of the new performance budgeting model. This accountability framework is based on these key elements:

  • a clear ex ante specification of the performance expected of each agency head;

  • agreed ex ante arrangements for the collection of all the information required to assess performance;

  • incentives and sanctions to encourage agency heads to act in the government’s interests;

  • a clear performance assessment process involving ex post reporting of actual performance against the initial specification; and

  • devolution of decision-making authority to give agency heads the degree of managerial autonomy they need to achieve the tasks assigned to them.

This reformulated accountability arrangement represents a fundamental change toward a more devolved system of budget management whose effects are quite far-reaching. However, for this reorientation to be fully effective, certain preconditions must be met, and its implementation may need to be facilitated by a reformulated institutional structure. Discussion of these complementary changes is the subject of the remainder of this study, which will emphasize the following points. First and foremost, before the new budget management model can be introduced, certain basic safeguards should be put in place to ensure that PEM (public expenditure management) systems are adequate to deal with the new demands that will be placed upon them. This typically requires a major upgrading of management systems, which will be discussed more fully in the following two sections. Second, the new devolved system of decision making may require parallel changes both in budget institutional arrangements and in the budget system’s regulatory framework. This aspect of budget reforms will be taken up in Sections VII and VIII.

37

“Organizations that measure the results of their work … find that the information transforms them” (Osborne and Gaebler, 1992, p. 63). See also Kristensen, Groszyk, and Bühler (2000).

38

The use of a simple production model to characterize the operation of a government unit is usually regarded as the foundation for assessing performance and has a long tradition; see Brace and others (1980); Jackson and Palmer (1989); Boyne and Law (1991); and Hyndman and Anderson (1995). However, others have questioned its adequacy, especially for complex public programs; see Osborne and others (1995). Others have been more critical of the fundamental production model, for example, stressing the differential access to information of ruling groups, hence using “political models” of performance assessment (Pollitt, 1993) or “organizational” models (Kotter and Heskett, 1992).

39

Where the ratio of input to output defines efficiency, and the reciprocal ratio of output to input defines productivity.

40

From this brief review of the main concepts and issues in the performance literature, it is perhaps possible to find sympathy for the conclusion that “the notion of performance—often bereft of normative standards, invariably full of ambiguity—is, in theory and practice, both contestable and complex” (Carter, Klein, and Day, 1992, p. 50).

41

For a full description of the Government Performance and Results Act (GPRA) performance management framework and its specific terminology, see Groszyk (2001) and Anderson (2004).

42

See Kristensen, Groszyk, and Bühler (2002, p. 1). At the same time, several writers, notably Pollitt (1986), suggest that the emphasis on outcomes over outputs (and the broader emphasis on economy and efficiency rather than effectiveness) may reflect the political interest of a government that is primarily concerned with cost-cutting rather than performance evaluation.

43

“Australia, Netherlands, and New Zealand began by concentrating on outputs and are now moving to an outcomes approach. France recently passed a law that requires the production of outputs and outcomes in budget documentation for the majority of programmes.” (OECD, 2004, p. 7) It should be noted that while Australian states began with a focus on outputs, the federal government specifically rejected output budgeting, preferring to move directly to outcomes.

44

For a much fuller discussion, with examples, see OECD (2000).

45

Henderson (2004) contrasts a straightforward target such as “reduce substantially the mortality rates by 2010 from heart disease by at least 40% in people under 75; from cancer by at least 20% in people under 75 …” (Department of Health); with, for example, “Improve effectiveness of the UK contribution to conflict prevention and management as demonstrated by a reduction in the number of people whose lives are affected by violent conflict and a reduction in potential sources of future conflict …” (FCO, Ministry of Defence, DfID).

46

Performance measurement is even applicable to internal support services, but the outcomes from internal support occur within the organization, and it is impossible to estimate the impact these services have on outcomes of external services.

47

The problem of the measurement and allocation of costs is necessarily limited when government units use cash accounting principles. In the United Kingdom, one of the problems encountered in enforcing accountability in the Next Steps Agencies, created in the late 1980s and early 1990s, was the lack of information on unit costs. This arose from their lack of commercial-style accounts as well as from the undeveloped nature of their costing systems. A related problem was that the measure used as the cost object was rarely the ultimate result, but rather was an intermediate output measure related to activity. A full discussion is contained in Pendlebury, Jones, and Karbhari (1994).

48

Discussed more fully in Section V.

49

It has also been argued that the increasing complexity of government operations has significantly contributed to the problem. Traditional cost accounting is adequate when processes are simple. However, as technology advances in scale and complexity, the cost profile of government organizations becomes significantly more complicated. Costs that traditionally have been considered overheads now represent activities critical to the delivery of government services. It is increasingly difficult to associate these costs directly with individual programs or customers. See Gearhart (1999).

50

Discussed more fully in Section VI.

51

See Hatry (1999, pp. 4ff); Perrin (1998, p. 374).

52

“Performance indicators are no substitute for the independent, in-depth qualitative examination of the impact of policies which evaluations can provide” (OECD, 2004, p. 15). Due to the costs involved, these necessarily have to be used sparingly and guided by some cost-benefit principle.

53

As a consequence, it is often recommended that any ongoing monitoring of performance be supplemented with periodic program evaluation or reviews. That is, “an effective performance information system will include an appropriate mix of on-going performance information and periodic evaluations” (Australian National Audit Office, 1996, p. 3). The latter allows a wider range of information and stakeholder perceptions. From the U.S. perspective, see Joyce (1993, p. 10).

54

The balanced scorecard looks at a wide range of measures, including difficult-to-measure factors such as a company’s focus on innovation and learning. For example, Osborne and others (1995) would include as an important dimension of performance in social programs indicators of the process of “learning lessons” and “empowerment” of communities (pp. 31ff).

55

Perhaps not surprisingly, it is often found that the development of performance indicators has been fastest in the least problematic areas where government units have clearly defined and rather narrow functions, and that the problems of measuring performance increase with the complexity of government activities. See Carter, Klein, and Day (1992).

56

The PEM literature is replete with examples of poor or dysfunctional performance measures: a tax office measuring the cost per unit of revenue collected, which might encourage the office to set difficult cases aside; a hospital using cost per occupied patient bed, which could encourage managers to retain patients to ensure no beds are left unoccupied; or rating a school’s performance by examination success rates, which might lead schools to discourage low performers from enrolling. A stylized calculation of the first example follows.
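
To make the tax office example concrete, the following sketch (all case costs and yields are invented for illustration) shows how the ratio indicator “improves” when the difficult, high-yield case is dropped, even though total revenue collected falls.

```python
# Dysfunctional ratio measure: "cost per unit of revenue collected".
# Dropping the difficult (high-cost, high-yield) case improves the
# indicator while the substantive result (revenue) gets worse.

cases = [
    # (processing_cost, revenue_collected, is_difficult)
    (100, 1_000, False),
    (100, 1_200, False),
    (900, 5_000, True),   # hard case: costly to pursue, high yield
]

def cost_per_revenue(selected):
    cost = sum(c for c, _, _ in selected)
    revenue = sum(r for _, r, _ in selected)
    return cost / revenue, revenue

ratio_all, revenue_all = cost_per_revenue(cases)
ratio_easy, revenue_easy = cost_per_revenue(
    [case for case in cases if not case[2]]
)

print(f"all cases: ratio = {ratio_all:.3f}, revenue = {revenue_all:,}")
print(f"easy only: ratio = {ratio_easy:.3f}, revenue = {revenue_easy:,}")
# The indicator improves (0.153 -> 0.091) while revenue collected
# falls from 7,200 to 2,200: the measure rewards case selection,
# not performance.
```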

57

“Performance measurement should aim at developing a limited number of well-chosen, stable measures, so that a track record of an organization’s performance over time can be built up. This does not mean that performance measures are defined once and for all; they may need to be modified to take into account changes in the managerial context and the environment in which the organization exists.” (OECD, 1994, p. 14)

58

“There are limits on how much information decision-makers can make good use of: people have ‘bounded rationality’ and so do organizations” (OECD, 2004, p. 4).

59

The need to amend management information systems to meet the needs of managers working in a performance environment is discussed in Section V.

60

This issue of quality control in performance measurement is dealt with more fully in Wholey (1999).

61

See the interesting discussion of the definition of objectives as one of the main obstacles encountered in measuring the performance of U.S. federal agencies (U.S. Congressional Budget Office, 1993).

62

During this stage, ministries could also use these performance measures to determine the allocation of up to 10 percent of resources among service areas. It may also be useful to recommend that the share of resources whose allocation depends on managers’ satisfactory performance be increased progressively, by at least 5 percentage points each year, to send managers a clear message that performance information is relevant to decisions on the allocation of budget resources. A stylized schedule follows.
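
Interpreting the recommendation as 5 percentage points a year, the schedule would evolve as in this sketch; the ministry budget, the 50 percent ceiling, and the four-year horizon are illustrative assumptions, not figures from the text.

```python
# Performance-linked share of a ministry's budget under the
# recommended schedule: 10 percent at introduction, rising by at
# least 5 percentage points a year (the cap is assumed, not sourced).

def performance_linked_share(year, start=0.10, step=0.05, cap=0.50):
    """Share allocated on performance in a given year (year 0 = start)."""
    return min(start + step * year, cap)

budget = 200_000_000  # hypothetical annual ministry budget

for year in range(4):
    share = performance_linked_share(year)
    print(f"year {year}: {share:.0%} performance-linked "
          f"= {share * budget:,.0f}")

# year 0: 10% = 20,000,000   year 1: 15% = 30,000,000
# year 2: 20% = 40,000,000   year 3: 25% = 50,000,000
```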

References
  • Abrahams, M.D., and M.N. Reavely, 1998, “Activity-Based Costing: Illustrations from the State of Iowa,” Government Finance Review, Vol. 14, No. 2, pp. 15–20.

  • Accounting Standards Board (ASB), 2003, “Statement of Principles for Financial Reporting: Proposed Interpretation for Public Benefit Entities” (London: ASB). Available at: http://frc.org.uk/asb/publications/documents.cfm?cat=3

  • Allen, R.I.G., 1996, “Management Control in Modern Government Administration: An Introduction,” in Management Control in Modern Government Administration: Some Comparative Practices, SIGMA Paper No. 4 (Paris: Organization for Economic Cooperation and Development).

  • Allen, R.I.G., and Daniel Tommasi, eds., 2001, Managing Public Expenditure: A Reference Book for Transition Countries (Paris: Organization for Economic Cooperation and Development, SIGMA Program).

  • Allen, R., S. Schiavo-Campo, and T.C. Garrity, 2004, Assessing and Reforming Public Management: A New Approach (Washington: World Bank).

  • Anderson, B., 2004, “Performance Budgeting and Performance Information in the United States,” paper presented to International Seminar on Performance Budgeting, Brasilia, Brazil, March (unpublished).

  • Athukorala, S. Lakshman, and Barry Reid, 2003, Accrual Budgeting and Accounting in Government and Its Relevance for Developing Countries (Manila: Asian Development Bank).

  • Auerbach, Alan J., Jagadeesh Gokhale, and Laurence J. Kotlikoff, 1991, “Generational Accounts: A Meaningful Alternative to Deficit Accounting,” in Tax Policy and the Economy, Vol. 5 (Cambridge, Massachusetts: MIT Press), pp. 55–110.

  • Auerbach, A.J., L.J., Kotlikoff, and W. Leibfritz, 1999, Generational Accounting around the World (Chicago: Chicago University Press).

  • Australian Department of Finance and Administration, 1999, Commonwealth Accrual Budgeting Guidelines (Canberra).

  • Australian National Audit Office, 1996, Performance Information Reviews (Canberra: Department of Finance).

  • Axelrod, D., 1995, Budgeting for Modern Government (New York: St. Martin’s Press, 2nd ed.).

  • Babunakis, M., 1976, Budgets: An Analytical and Procedural Handbook for Government and Nonprofit Organizations (Westport, Connecticut: Greenwood Press).

  • Baldacci, E., and A. Corbacho, 2004, “Recent Experiments with Fiscal Responsibility Laws around the World—What Lessons Can Be Learned?” FAD Draft Guidance Note (Washington: International Monetary Fund).

  • Barrett, K., and R. Greene, 2000, “Truth in Measurement,” Governing (July), p. 86.

  • Baumol, William J., 1982, “Contestable Markets: An Uprising in the Theory of Industry Structure,” American Economic Review, Vol. 72, No. 1, pp. 1–15.

  • Baumol, William J., J.C. Panzar, and R.D. Willig, 1982, Contestable Markets and the Theory of Industry Structure (New York: Harcourt Brace Jovanovich).

  • Bellamy, S., and R. Kluvers, 1995, “Program Budgeting in Australian Local Government: A Study of Implementation and Outcomes,” Financial Accountability and Management, Vol. 11 (February), pp. 39–56.

  • Blondal, J.R., 2003, “Accrual Accounting and Budgeting: Key Issues and Recent Developments,” OECD Journal on Budgeting, Vol. 3, No. 1, pp. 51–71.

  • Boston, J., ed., 1995, The State under Contract (Wellington: Bridget Williams Books, Ltd.).

  • Boyne, G., and J. Law, 1991, “Accountability and Local Authority Annual Reports: The Case of Welsh District Councils,” Financial Accountability and Management, Vol. 7, No. 4, pp. 179–94.

  • Brace, P., R. Elkin, D.D. Robinson, and H.E. Steinberg, 1980, “Reporting of Service Efforts and Accomplishments,” FASB Research Report R09 (Norwalk, Connecticut: Financial Accounting Standards Board).

  • Brook, P.J., and M. Petrie, 2001, “Output-Based Aid: Precedents, Promises, and Challenges,” in Contracting for Public Services: Output-Based Aid and Its Applications, ed. by P.J. Brook and S.M. Smith (Washington: World Bank), pp. 3–11.

  • Brook, P.J., and S.M. Smith, eds., 2001, Contracting for Public Services: Output-Based Aid and Its Applications (Washington: World Bank).

  • Broom, C.A., 1995, “Performance-Based Government Models: Building a Track Record,” Public Budgeting and Finance, Vol. 15 (Winter), pp. 3–17.

  • Brumby, J., 1999, “Budgeting Reforms in OECD Member Countries,” in Managing Government Expenditure, ed. by S. Schiavo-Campo and D. Tommasi (Manila: Asian Development Bank), pp. 349–62.

  • Brumby, J., 2005, “Budget Management Reform in China,” in Engaging the New World: Strategic Economic Studies in the Knowledge Economy, Employment, Health Care and Fiscal Governance, ed. by B. Grewal and M. Kumnick (Melbourne: Melbourne University Press).

  • Brunsson, K., 1995, “Puzzle Pictures: Swedish Budgetary Processes in Principle and Practice,” Financial Accountability and Management, Vol. 11, No. 2, pp. 111–25.

  • Burkhead, J., 1956, Government Budgeting (New York: John Wiley and Sons).

  • Carlin, T., and J. Guthrie, 2000, “A Review of Australian and New Zealand Experiences with Accrual Output-Based Budgeting,” paper presented to a conference of the International Public Management Network, Macquarie Graduate School of Management, Sydney, Australia, March 4–6.

  • Carter, N., R. Klein, and P. Day, 1992, How Organizations Measure Success (London: Routledge).

  • Caulfield, Janice, 2003, “New Public Management in a Developing Country: Creating Executive Agencies in Tanzania,” in Unbundled Government: A Critical Analysis of the Global Trend to Agencies, Quangos and Contractualisation, ed. by Christopher Pollitt and Colin Talbot (London: Routledge).

  • Chan, J.L., 1998, “The Bases of Accounting for Budgeting and Financial Reporting,” in Handbook of Government Budgeting, ed. by R.T. Meyers (San Francisco: Jossey-Bass), pp. 357–80.

  • Chan, J.L., 2003, “Government Accounting: An Assessment of Theory, Purposes and Standards,” Public Money and Management, Vol. 23, No. 1, pp. 13–20.

  • Chow, D., and C. Humphrey, 2003, “Whole of Government Accounting: An Aide to Performance, Management and Accountability?” paper presented at the 9th Biennial CIGAR (Comparative International Government Accounting Research) Conference, Bodo, Norway, June 13–14.

  • Clinton, Bill, and Al Gore, 1997, The Blair House Papers: National Performance Review (Washington: Government Printing Office).

  • Cooper, R., and R.S. Kaplan, 1999, Design of Cost Management Systems (Upper Saddle River, New Jersey: Prentice Hall, 2nd ed.).

  • Cullen M., and T. Mallard, 2003, Pre-Introduction: Parliamentary Briefing on Public Finance (State Sector Management) Bill, August (Wellington).

  • Davenport, L.W., 1996, “Internal Service Fund Functions: Should They Be Required to Compete with Private Vendors?” Government Finance Review (October), pp. 11–13.

  • Davis, G., B. Sullivan, and A. Yeatman, eds., 1997, The New Contractualism? (South Melbourne: Macmillan).

  • Dean, P.N., 1989, Government Budgeting in Developing Countries (London: Routledge).

  • Demsetz, Harold, 1968, “The Cost of Transacting,” Quarterly Journal of Economics, Vol. 82, pp. 33–53.

  • Diamond, Jack, 1990, “Measuring Efficiency in Government: Techniques and Experience,” Chapter 9 in Government Financial Management: Issues and Country Studies, ed. by A. Premchand (Washington: International Monetary Fund), pp. 142–66.

  • Diamond, Jack, 2001, “Performance Budgeting: Managing the Reform Process,” paper presented at UN Workshop on Financial Management and Accountability, Rome, November.

  • Diamond, Jack, 2002a, “The Micro Basis of Budget System Reform: The Case of Transitional Economies” (unpublished; Washington: International Monetary Fund).

  • Diamond, Jack, 2002b, “The Role of Internal Audit in Government Financial Management: An International Perspective,” IMF Working Paper 02/94 (Washington: International Monetary Fund).

  • Diamond, Jack, 2002c, “Performance Budgeting: Is Accrual Accounting Required?” IMF Working Paper 02/240 (Washington: International Monetary Fund).

  • Diamond, Jack, 2002d, “The New Russian Budget System: A Critical Assessment and Future Reform,” IMF Working Paper 02/21 (Washington: International Monetary Fund).

  • Diamond, Jack, 2002e, “Budget System Reform in Transitional Economies: The Experience of Russia,” IMF Working Paper 02/22 (Washington: International Monetary Fund).

  • Diamond, Jack, 2002f, “The Strategy of Budget System Reform in Emerging Countries,” Public Finance and Management, Vol. 2, No. 3.

  • Diamond, Jack, 2003, “From Program to Performance Budgeting: The Challenge for Emerging Countries,” IMF Working Paper 03/169 (Washington: International Monetary Fund).

  • Diamond, Jack, 2004, “The Role of Internal Audit in Government Financial Management,” in Accounting and Accountability in Emerging and Transitional Economies 6, ed. by Trevor Hopper and Zahirul Hoque (New York: Elsevier), pp. 55–80.

  • Diamond, Jack, and P. Khemani, 2005, “Introducing Financial Management Information Systems in Developing Countries,” IMF Working Paper 05/196 (Washington: International Monetary Fund).

  • Diamond, Jack, and B.H. Potter, 2000, Setting Up Treasuries in the Baltics, Russia, and Other Countries of the Former Soviet Union, IMF Occasional Paper No. 198 (Washington: International Monetary Fund).

  • Dixon, G., 2002, “Thailand’s Hurdle Approach to Budget Reform,” Poverty Reduction and Economic Management (PREM) Note 73 (Washington: World Bank).

  • Domberger, S., 1998, The Contracting Organization: A Strategic Guide to Outsourcing (New York: Oxford University Press).

  • Domberger, S., and P. Jensen, 1997, “Contracting Out by the Public Sector: Theory, Evidence, Prospects,” Oxford Review of Economic Policy, Vol. 13, pp. 67–78.

  • Domberger, S., and S. Rimmer, 1994, “Competitive Tendering and Contracting Out in the Public Sector: A Survey,” International Journal of Economics and Business, Vol. 1, No. 3, pp. 439–53.

  • Estela, M., 2000, “Strengthening the Integrity of a Tax Collection Agency: The Case of SUNAT in Peru,” paper prepared for a World Bank–Inter-American Development Bank seminar on “Radical Solutions for Fighting Corruption in the Public Sector,” November 2–3, Washington, D.C.

  • European Commission, 1996, “Public Procurement in the European Union: Exploring the Way Forward,” Green Paper COM(96) 583 (Brussels).

  • European Commission, 2004, “Public Finances in the European Monetary Union 2004” (Brussels).

  • European Federation of Accountants, 2003, “The Adoption of Accrual Accounting and Budgeting by Governments (Central, Federal, Regional and Local),” FEE Discussion Paper (Brussels). See: www.fee.be/.

  • Fantone, Denise M., 2004, “Performance-Based Budgeting in the U.S.: A Congressional Perspective,” paper presented to Working Party of Senior Budget Officials (SBO), Organization for Economic Cooperation and Development, Paris, April 1–2.

  • France, Ministère de l’Economie, des Finances et de l’Industrie, 2001, Public Finance Reform, No. 1 (Paris).

  • Garamfalvi, L., and W. Allan, 1994, “Value for Money Auditing, Evaluation, and Public Expenditure Management: Experience in Selected OECD Countries and Lessons for Italy” (unpublished; Washington: International Monetary Fund, Fiscal Affairs Department).

  • Gearhart, J., 1999, “Activity-Based Management and Performance Measurement Systems,” Government Finance Review, pp. 13–16.

  • Gill, D., 2002, “Signposting the Zoo—From Agencification to a More Principled Choice of Government Organisational Form,” OECD Journal on Budgeting, Vol. 2, No. 1, pp. 27–79.

  • Gray, A., W.I. Jenkins, and B. Segsworth, 1993, Budgeting, Auditing and Evaluation: Functions and Integration in Seven Governments (New Brunswick, New Jersey: Transaction Publishers).

  • Grindle, M.S., 1997, “Divergent Cultures? When Public Organizations Perform Well in Developing Countries,” World Development, Vol. 25, No. 4, pp. 481–95.

  • Groszyk, W., 2001, “Outcome-Focused Management in the United States,” OECD Journal on Budgeting, Vol. 1, No. 4, pp. 129–50.

  • Guthrie, J., 1998, “Application of Accrual Accounting in the Public Sector: Rhetoric or Reality?” Financial Accountability and Management, Vol. 14, No. 1, pp. 1–19.

  • Guthrie, J., O. Olson, and C. Humphrey, 1999, “Debating Developments in New Public Financial Management: The Limits of Global Theorising and Some New Ways Forward,” Financial Accountability and Management, Vol. 15, No. 3/4, pp. 209–28.

  • Harper, M., J. Arroyo, T. Bhattacharya, and T. Bulman, 2000, Public Services through Private Enterprise: Micro-Privatisation for Improved Delivery (London: Intermediate Technology Publications).

  • Hatry, H.P., 1999, Performance Measurement (Washington: Urban Institute Press).

  • Heald, D., and A. Dowdall, 1999, “Capital Charging as a VFM Tool in Public Services,” Financial Accountability and Management, Vol. 15, No. 3/4, pp. 209–28.

  • Henderson, Simon, 2004, “The Challenges of Measuring Performance,” paper presented at “Performance Information in the Budget Process,” OECD/PUMA Senior Budget Officials (SBO) meeting, Organization for Economic Cooperation and Development, Paris, April 1–2.

  • Henke, E.O., 1992, Introduction to Nonprofit Organization Accounting (Cincinnati, Ohio: South-Western Publishing Co.).

  • H.M. Treasury, 2004, The U.K. Government’s Public Service Agreement Framework, Better Public Services Team (London).

  • Hill, Alex, 2004, “The U.K. Government’s Public Service Agreement Framework” (unpublished; London: H.M. Treasury, Better Public Services Team).

  • Hodge, G.A., 2000, Privatization: An International Review of Performance (Boulder, Colorado: Westview Press).

  • Hood, C., 1991, “A Public Management for All Seasons?” Public Administration, Vol. 69, No. 1, pp. 3–19.

  • Hood, C., 1995, “The New Public Management in the 1980s: Variations on a Theme,” Accounting, Organization, and Society, Vol. 20, No. 2/3, pp. 93–109.

  • Hyndman, N.S., and R. Anderson, 1995, “The Use of Performance Information in External Reporting: An Empirical Study of U.K. Executive Agencies,” Financial Accountability and Management, Vol. 11, No. 1, pp. 1–17.

  • Institute of Internal Auditors (IIA), 1999, Internal Audit Standards (Altamonte Springs, Florida: IIA).

  • Institute of Internal Auditors (IIA), 2001, “Consulting Implementation Standards,” Standards for the Professional Practice of Internal Auditing (Altamonte Springs, Florida: IIA, Internal Auditing Standards Board).

  • International Federation of Accountants (IFAC), 1998, Guidelines for Governmental Financial Reporting (New York: IFAC, Public Sector Committee).

  • International Federation of Accountants (IFAC), 2003a, Cash Basis International Public Sector Accounting Standards: Financial Reporting Under the Cash Basis of Accounting (New York: IFAC).

  • International Federation of Accountants (IFAC), 2003b, Transition to the Accrual Basis of Accounting: Guidance for Governments and Government Entities, Study 14 (New York: IFAC, 2nd ed.).

  • International Monetary Fund, 2001a, Government Finance Statistics Manual 2001 (Washington).

  • International Monetary Fund, 2001b, Manual on Fiscal Transparency (Washington).

  • International Monetary Fund, 2001c, World Economic Outlook, October (Washington).

  • International Organization of Supreme Audit Institutions, 1995, “Auditing Standards,” adopted by Auditing Standards Committee at the XVth INTOSAI Congress, Cairo (Stockholm: INTOSAI). Available at: http://asc.rigsrevisionen.dk/composite-14.htm

  • Jackson, P., and B. Palmer, 1989, First Steps in Measuring Performance in the Public Sector (London: Public Finance Foundation).

  • Jones, L.R., and J.L. McCaffery, 1997, “Implementing the Chief Financial Officers Act and the Government Performance and Results Act in the Federal Government,” Public Budgeting & Finance, Vol. 13, No. 1, pp. 35–55.

  • Joyce, P.G., 1993, “Using Performance Measures for Federal Budgeting: Proposals and Prospects,” Public Budgeting & Finance, Vol. 13, No. 4, pp. 1–15.

  • Kaplan, R.S., and D.P. Norton, 1996, The Balanced Scorecard: Translating Strategy into Action (Boston: Harvard Business School Press).

  • Kaul, M., 1997, “New Public Administration, The Management Innovations in Government,” Public Administration and Development, Vol. 17, No. 1, pp. 13–26.

  • Kerf, M., R.D. Gary, T. Irwin, C. Levesque, R. Taylor, and M. Klein, 1999, “Concessions for Infrastructure: A Guide to Their Design and Award,” World Bank Technical Paper 399 (Washington: World Bank).

  • Klein, M., J. So, and B. Shin, 1996, “Transaction Costs in Private Infrastructure Projects—Are They Too High?” Viewpoint, No. 95 (Washington: World Bank, Private Sector Development Department).

  • Kopits, G., 2001, “Fiscal Rules: Useful Policy Framework or Unnecessary Ornament?” IMF Working Paper 01/145 (Washington: International Monetary Fund).

  • Kopits, G., and J.D. Craig, 1998, Transparency in Government Operations, IMF Occasional Paper No. 158 (Washington: International Monetary Fund).

  • Kopits, G., and S. Symansky, 1998, Fiscal Policy Rules, IMF Occasional Paper No. 162 (Washington: International Monetary Fund).

  • Kotter, John P., 1995, “Leading Change: Why Transformation Efforts Fail,” Harvard Business Review, Vol. 73, No. 2, pp. 59–67.

  • Kotter, John P., 1996, Leading Change (Boston: Harvard Business School Press).

  • Kotter, John P., and J.L. Heskett, 1992, Corporate Culture and Performance (New York: Free Press).

  • Kristensen, J.K., 2002, “Overview of Results Focused Management and Budgeting in OECD Member Countries,” paper presented at an expert meeting held by the Public Management Committee at Organization for Economic Cooperation and Development, Paris, February 11–12.

  • Kristensen, J.K., W. Groszyk, and B. Bühler, 2002, “Outcome-Focused Management and Budgeting,” OECD Journal on Budgeting, Vol. 1, No. 4, pp. 1–29.

  • Laking, R., 1996, “Public Management in the OECD: Some Questions for Developing Countries,” paper presented to the World Bank, Washington, D.C., May (unpublished; Victoria: University of Wellington).

  • Laking, R., 2001, “Governance of the Wider State Sector: Principles for Control and Accountability of Delegated and Devolved Public Bodies,” paper presented at the Organization for Economic Cooperation and Development, Public Management Service (PUMA) Conference on “Devolving and Delegating Power to More Autonomous Public Bodies and Controlling Them: The Governance of Public Agencies and Authorities,” Bratislava, November. Available at: http://www.oecd.org/dataoecd/8/19/2730680.pdf

  • Laking, R., 2005, “Agencies: Their Benefits and Risks,” OECD Journal on Budgeting, Vol. 4, No. 4, pp. 7–25.

  • Lapsley, I., 1999, “Accounting and the New Public Management: Instruments of Substantive Efficiency or a Rationalising Modernity?” Financial Accountability and Management, Vol. 15, No. 3/4, pp. 201–207.

  • Larbi, G.A., 1998, “Institutional Constraints and Capacity Issues in Decentralizing Management in Public Services: The Case of Health in Ghana,” Journal of International Development, Vol. 10, No. 3, pp. 377–86.

  • Larsson, K., and J.S. Madsen, 1999, “Protecting the Financial Interests of the State and of the European Union,” Public Management Forum, Vol. 5, No. 6 (Paris: OECD), pp. 4–17.

  • Leibenstein, H., 1966, “Allocative Efficiency vs. X-Efficiency,” American Economic Review, Vol. 56 (June), pp. 392–415.

  • Lienert, Ian, and Moo-Kyung Jung, 2005, “The Legal Framework for Budget Systems: An International Comparison,” OECD Journal on Budgeting, Special Issue, Vol. 4, No. 3.

  • Likierman, A., 2000, “Changes to Managerial Decision-Taking in UK Central Government,” Management Accounting Research, Vol. 11, No. 2, pp. 253–61.

  • Maholland, L., and P. Muetz, 2002, “A Balanced Scorecard Approach to Performance Measurement,” Government Finance Review (April), pp. 12–15.

  • Martinez-Vazquez, J., and R. McNab, 2000, “The Tax Reform Experiment in Transitional Countries,” National Tax Journal, Vol. 53, No. 2 (June), pp. 273–98.

  • Mellor, T., 1996, “Why Governments Should Produce Better Balance Sheets,” Australian Journal of Public Administration, Vol. 55, No. 1, pp. 78–81.

  • Mikesell, J., 2003, Fiscal Administration: Analysis and Applications for the Public Sector (Belmont, California: Thomson/Wadsworth, 6th ed.).

  • Organization for Economic Cooperation and Development (OECD), 1994, “Performance Management in Government: Performance Measurement and Results-Oriented Management,” Public Management Service (PUMA) Occasional Paper No. 3 (Paris).

  • Organization for Economic Cooperation and Development (OECD), 1995, Budgeting for Results: Perspectives on Public Expenditure Management (Paris).

  • Organization for Economic Cooperation and Development (OECD), 1996, “Managing Structural Deficit Reduction,” Public Management Service (PUMA) Occasional Paper No. 11 (Paris).

  • Organization for Economic Cooperation and Development (OECD), 1997a, “Benchmarking, Evaluation and Strategic Management in the Public Sector,” paper presented at the 1996 Meeting of the Performance Management Network of the OECD’s Public Management Service (PUMA) (Paris).

  • Organization for Economic Cooperation and Development (OECD), 1997b, “Best Practice Guidelines for Contracting out Government Services,” Public Management Service (PUMA) Occasional Paper No. 20 (Paris).

  • Organization for Economic Cooperation and Development (OECD), 1999a, Performance Contracting: Lessons from Performance Contracting Case Studies (Paris).

  • Organization for Economic Cooperation and Development (OECD), 1999b, “Managing Accountability in Intergovernmental Relationships,” Public Management Service, PUMA/RD (99) 4 (Paris).

  • Organization for Economic Cooperation and Development (OECD), 2000, The OECD Outputs Manual, PUMA/SBO (2000)7 (Paris).

  • Organization for Economic Cooperation and Development (OECD), 2001, “Financial Management and Control of Public Agencies,” SIGMA Paper No. 32 (Paris).

  • Organization for Economic Cooperation and Development (OECD), 2002a, “Accrual Accounting and Budgeting: Key Issues and Recent Developments,” Twenty-Third Annual Meeting of OECD Senior Budget Officials, Washington, D.C., June 3–4, Public Management Service, PUMA/SBO(2002)10 (Paris).

  • Organization for Economic Cooperation and Development (OECD), 2002b, Distributed Public Governance: Agencies, Authorities, and Other Government Bodies (Paris).

  • Organization for Economic Cooperation and Development (OECD), 2002c, Governing for Performance in the Public Sector, OECD/Germany High-Level Symposium, March 13–14, Berlin (Paris). Country reports available at: http://www.oecd.org/document/43/0,2340,en_2649_37457_1813355_1_1_1_37457,00.html

  • Organization for Economic Cooperation and Development (OECD), 2003, OECD/World Bank Budget Practices and Procedures Survey (Paris).

  • Organization for Economic Cooperation and Development (OECD), 2004, “Public Sector Modernisation: Governing for Performance,” Policy Brief (Paris). Available at: