2 Informing Performance Budgeting

Marc Robinson
Published Date: October 2007

Part One of this volume focuses upon performance information. “Performance information” has been defined in this volume as referring to information first on the results achieved by public expenditure, and second on the costs of achieving those results. Since performance budgeting is about the use of performance information in budgeting/funding decisions, the development of the right type of performance information is clearly a crucial prerequisite for its success. The first key objective of Part One is, accordingly, to identify the type of performance information required to underpin performance budgeting systems, and the criteria and principles which should govern the selection and development of that information. A further objective is to delimit the potential value of performance information for budgeting purposes. This is one important element in establishing realistic expectations about the efficacy of performance budgeting.

There are a number of overarching considerations relevant to performance information strategy for performance budgeting. The literature has articulated a number of well-developed, and largely overlapping, sets of criteria for the development of performance information (see HM Treasury et al., 2001; Morley et al., 2001; Poister, 2003). Two criteria deserve special emphasis here. One is relevance, which means that the choice of performance information should be guided by explicitly-identified uses. In the present context, it is relevance to budgetary decisions and processes which is of concern. Different approaches to performance budgeting have, as noted in Chapter 1, different information requirements. For example, on the cost side, program budgeting requires a capacity to measure the costs of programs, whereas performance budgeting systems which aim to more tightly link results and funding have correspondingly more demanding cost information requirements (unit costs, comparative cost information, and so on). Performance information strategy should therefore be developed in light of the type of performance budgeting mechanisms which it is desired to introduce.

The other particularly relevant criterion is cost-effectiveness. In the textbook world of traditional public finance theory, allocative efficiency is achieved by equalizing the ratio of marginal social benefit/marginal social cost of each individual type of output produced. Information is assumed in this context to be perfect and costless. The real world is, of course, fundamentally different. Performance information is not free, and the marginal cost of additional performance information can be substantial, both in terms of financial cost and in terms of the use of scarce human capital. Performance measurement is no different from any other activity—it should not be pursued beyond the point where its marginal cost equals its marginal benefit. This is true even in the most affluent countries, but is particularly true in developing countries where financial resources and skilled personnel are in much shorter supply and therefore have a higher opportunity cost.
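The textbook allocative-efficiency condition referred to above can be stated compactly. As a sketch only (the notation is ours, not drawn from the chapter), writing MSB and MSC for marginal social benefit and marginal social cost:

```latex
% Equimarginal condition for allocative efficiency in the textbook case:
% the ratio of marginal social benefit to marginal social cost is
% equalized across every type of output i and j that is produced.
\frac{MSB_i}{MSC_i} = \frac{MSB_j}{MSC_j} \quad \text{for all outputs } i, j

% The same logic applies to performance measurement itself: additional
% information m should be gathered only up to the point where its
% marginal cost equals its marginal benefit.
MC(m) = MB(m)
```

The second condition is simply the chapter's point that performance measurement, like any other activity, should not be pursued beyond the point where its marginal cost equals its marginal benefit.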

Even if money were no object, performance information would nevertheless remain imperfect—and in many cases highly imperfect—because of intrinsic measurement difficulties. There are, for example, a set of fundamental problems which often arise in seeking to measure outcomes, and even the measurement of outputs is problematic in certain respects. Particularly important from the point of view of expenditure prioritization is the absence of a metric for comparing the social utility of expenditures with very different objectives (for example, comparing a defense program with a health program)—marginal social utility being, regrettably, largely unmeasurable.

Not only is information costly and necessarily imperfect, but it is held asymmetrically. Some players have more information (or, expressed differently, are less badly informed) than others. In particular, “line” agencies will in general be somewhat better informed about their own operations—including the effectiveness of their programs, and the relationship between costs and results—than central decision-makers, including the Ministry of Finance. Information is a valuable commodity, and will not be readily shared, particularly if the center may use it to what the agency perceives as its detriment. In this context, the response to central demands for information may be strategic game-playing. This is, for example, said to have been what happened when agencies were invited, as part of the ZBB-related marginal prioritization performance budgeting mechanisms outlined in Chapter 1, to identify and rank options at the margin for program expenditure reductions. Agencies in many cases responded by identifying programs which they knew would never be cut because they were regarded as too important, for political or other reasons, by the center. By contrast, when in some other countries the agencies were offered the incentive that any savings they identified might be applied to increasing their spending elsewhere, the response was very different.

In terms of information strategy, one of the issues which this raises is the extent to which the center—the political leadership of executive government, the finance ministry, and other relevant central agencies—must itself be involved in defining what performance information is to be gathered by line agencies, and perhaps also in directly gathering that information.

All of this suggests it is essential that a highly selective approach is adopted to choosing what performance information will be gathered to support performance budgeting. The importance of selectiveness is further reinforced by so-called “bounded rationality.” Bounded rationality refers to the limits on the human capacity to process information in order to take it into account in decision-making. If performance information were costless and complete, much of it would nevertheless remain unused because of bounded rationality. The importance of bounded rationality has been borne out time and time again in the history of performance budgeting. Thus, for example, the US PPBS system in the 1960s generated a massive quantity of analytic paperwork—particularly program evaluations—much of which did not influence (and was not even looked at by) decision-makers precisely because there was too much of it for busy people to absorb and process.

The other implication is that the cost-effectiveness of performance information must be an important consideration in deciding in what manner to seek to link results and funding—that is, what form of performance budgeting to adopt. It is, in particular, important to ask to what extent and under what circumstances the substantially higher information costs of more “sophisticated” performance budgeting systems are justifiable in terms of likely benefits. In this context, it is important for there to be a clear understanding of the nature and order of magnitude of cost of the requisite information. It is also necessary to consider under what circumstances information on the cost of delivering results will have a sufficiently strong predictive value so as to be able to serve as the basis for linking results and funding.

Against this background, Chapter 3 focuses upon results information. The literature on methodology for measuring and evaluating the results achieved by public services is quite extensive, and there is no need to duplicate its contents here. It is for these reasons that this chapter focuses primarily on three areas—the clarification of key performance concepts (outcomes, outputs, and so on) which are fundamental to the discussion of performance budgeting; the limits of results information; and the identification of a number of issues in the measurement of outcomes and outputs which are of particular relevance to efforts to link results to funding.

Chapter 4 focuses on the costing of results. The literature in this area is somewhat less well-developed than that on the measurement of results. In particular, there are a set of issues in this area which have considerable practical relevance for performance budgeting and which are the subject of divergent practice and opinion (for example, the relationship of accrual accounting to performance budgeting). There are also certain other costing issues highly relevant to performance budgeting which require a somewhat more analytical treatment than is available in the present performance budgeting literature. These are addressed in that chapter.

Chapter 5 focuses upon the specific informational requirements of the most basic form of performance budgeting—namely, program budgeting. While the informational requirements of program budgeting on the costing side may not be as great as those of systems which attempt to link funding and results more tightly, this does not mean that these informational requirements are in any sense trivial. It requires considerable resources and effort to systematically gather useful information on program effectiveness. There is, moreover, the crucial threshold requirement that programs be defined in a way which serves the purpose of program budgeting—which is to facilitate better allocative decision-making. Programs must, in other words, be defined in a way which is relevant to the priority-setting challenges which policy-makers deem to be of the highest social significance. Too often, program structures have been designed without sufficient regard to their purpose. However, even when the allocative relevance of programs serves—as it should do—as the starting point for the design of a program classification system, a set of important issues and trade-offs arise. What, for example, should be the relationship between program structure and organizational structure? If programs are supposed to be defined in terms of relevance to allocative choices, does that mean that overhead cost (“corporate services”) type programs should be avoided? Should expenditure be appropriated by programs, or should programs be used as a planning rather than appropriation tool? These and a number of other issues arise time after time, yet have not always been subject to systematic analysis.

The final chapter in this part turns to the question of performance auditing. This is an area in which supreme audit authorities in many countries have become increasingly involved over recent decades. As David Shand explains in Chapter 6, there are a range of approaches taken to performance auditing. Shand reviews in detail the approaches taken to performance auditing in nine countries. In some of these countries, performance auditing focuses on reviewing the efficiency and effectiveness of programs, and therefore amounts to a form of program evaluation which can potentially provide a significant direct information input into budget decision-making. In other countries, the focus of performance auditing is primarily on the verification of performance indicators—although often not by attesting individual indicators, but by assessing the systems which generate the data. The independent status of supreme audit authorities makes their role in such verification of performance information a particularly valuable contribution to the broader “managing-for-results” enterprise, and in particular to the soundness of the information base for performance budgeting.

The focus of Part One, it should be stressed, is not upon performance information strategy in general, but rather upon the more specific information requirements of budgeting. When a government or a specific agency embarks upon the development of a performance information strategy, it should of course be guided not only by what will be of value in budget processes, but by a broader set of performance management and accountability purposes. Thus, for example, there will be a need for information on individual or internal work unit performance for human resources and internal management. These are not, however, of concern here.

Nevertheless, it is important to emphasize that developing performance information takes time. It is not uncommon to find countries that wish to introduce performance budgeting (and managing-for-results more generally) drawing up implementation plans which envisage the development of performance measures within a one- or two-year timeframe. All experience in countries which have to date developed successful performance measurement systems indicates that this is completely unrealistic.

Equally, however, it should not be assumed that performance information needs to be perfect before it can be put to work to improve budgeting, and to help in other areas of public management. Particularly because of an awareness of the risks of “perverse effects” arising from the use of flawed performance measures (discussed in Chapter 18), there are many who believe that measures should be developed over several years before being put to use. However, experience has indicated that even quite imperfect measures may be used to considerable benefit—while, of course, still being improved over time. A striking example of this is the use of very imperfect output measures and output costing in the early days of the diagnosis related group (output)-based payment system in the hospital sector, where “even with seemingly inadequate databases,” the system had “profound [positive] effects” (Coffey, 1999).


The author would like to thank Peter C. Smith for his valuable comments on an earlier draft of this chapter.


Coffey, R.M., 1999, "Casemix Information in the United States: Fifteen Years of Management and Clinical Experience," Casemix Quarterly, Vol. 1(1).

HM Treasury et al., 2001, Choosing the Right Fabric: A Framework for Performance Information (London: HM Treasury).

Morley, E., S.P. Bryant, and H.P. Hatry, 2001, Comparative Performance Information (Washington: Urban Institute).

Poister, T.H., 2003, Measuring Performance in Public and Non-Profit Organizations (San Francisco: Jossey-Bass).
