10 US Program Assessment Rating Tool

Marc Robinson
Published Date:
October 2007
Denise M. Fantone1

For over 50 years the US federal government has attempted to better align resource decisions and expected performance. Although several government-wide reforms contributed to the evolving concept of performance budgeting in the United States, all fell short of linking performance to budget decision-making. Building on the strategic planning, reporting, and measurement processes of the Government Performance and Results Act (GPRA), the current administration believes its Program Assessment Rating Tool (PART) succeeds where other reforms failed. At the time of writing, the Office of Management and Budget (OMB) was in its final year of a five-year effort to use PART to rate the program design, strategic planning, management, and results of all US federal programs as a part of its executive branch budget formulation process. This chapter draws on General Accounting Office (GAO) reviews2 that described PART’s development and use, and assessed its strengths and weaknesses as a budget and evaluation tool.


The US federal government is built on a system of checks and balances that makes shared power and responsibility necessary. In practice, both the legislative branch (the Congress) and the President oversee and manage agencies and their missions. At times, this leads to significant differences in policy direction and complicates how, for what, and to whom agencies are held accountable. In allocating resources, the Congress plays a pivotal role stemming from the Constitution, which states: “No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law…”3 Although the Congress exercises power through a many-layered system of committees, the Appropriations Committees are at the heart of the Congressional budget process. They control the budget structure and make annual funding decisions for most of the activities—other than entitlements4—that are carried out by the US government or third parties acting on the federal government’s behalf. Since the President does not have the final say on funding or its purpose, it may appear that he is disadvantaged. However, the President has an opportunity to propose overall fiscal policy targets and specific funding levels for programs and activities through his budget to Congress, which serves as the starting point for Congressional deliberations. Moreover, even though Congressional committees in their various roles continue to oversee agencies throughout the year, presidential influence increases once spending bills are enacted and executive branch agencies implement their programs (Fantone and Posner, 2003).

The Office of Management and Budget, located in the Executive Office of the President, is responsible primarily for the annual preparation of the President’s budget request. To the extent that it is possible with a staff of just over 500, OMB also oversees the management of executive branch agencies. As the President’s budget office, OMB provides guidance on the content and presentation of agency budget requests; develops policy options and funding recommendations; approves agency spending plans, and drafts legislation in support of presidential initiatives. In addition to monitoring the efficacy of agencies’ operations, OMB promulgates government-wide guidance and regulations for federal organizations primarily in the areas of financial management, information technology, performance planning, regulatory program activities, and federal acquisitions.

OMB periodically is criticized for not paying sufficient attention to management issues because of its preoccupation with budgetary matters. But those familiar with OMB claim that agency management and program performance have always been examined along with budget requests. At the same time, OMB’s efforts have been inconsistent, and some believe neglectful, in encouraging implementation of government-wide management reforms and oversight of day-to-day agency operations. This has been attributed to limited staff resources, few tools (beyond the budget) to motivate performance improvements, and the changing management priorities of each administration (Tanaka et al., 2003).

Past performance budgeting initiatives

Recent efforts in the US to improve the linkage of performance information and budgeting in the federal government are part of a continuum over the past 50 years.5 Since World War II and prior to the current Government Performance and Results Act of 1993, there have been four government-wide initiatives aimed at linking performance with the federal budget process.

The Hoover Commission of 1949 was established, in part, to identify ways to reduce war debt and downsize the post-war government. At a time when the federal government primarily provided goods and services directly, it was not surprising that the Commission recommended a shift in the focus of budget decision-making from inputs to measuring workload and efficiency of government functions and activities. Although it did not mention performance budgeting directly, Congress enacted the Budget and Accounting Procedures Act of 1950 (BAPA), which required the President to present performance information, primarily on “workload” (output or activity in modern terminology) and unit cost.

The next government-wide reform came in the mid-1960s when the Department of Defense’s Planning, Programming, and Budgeting System (PPBS) was extended to civilian agencies. PPBS introduced a framework for budget formulation that allowed comparisons to be made among policy choices and encouraged multiple strategies for achieving objectives. Multi-year planning based on an agency’s program structure offered the possibility of linking outputs to long-term objectives. Systems analysis and measurement were seen as essential elements for understanding the costs and benefits of government services. Confident that management techniques could modernize the federal government, President Johnson predicted that PPBS

will help us find new ways to do jobs faster, to do jobs better, and to do jobs less expensively. It will insure a much sounder judgment through more accurate information, pinpointing those things that we ought to do more, spotlighting those things that we ought to do less. It will make our decision-making process as up-to-date, I think, as our space-exploring program.

The third and fourth reforms borrowed ideas from the private sector and state government. President Nixon introduced Management by Objectives (MBO) in 1973 as an approach that added the element of holding agency managers responsible for agreed-upon outputs and sought to define performance in terms that today we would call “outcomes.” The intent of this initiative was to centralize goal-setting decisions while at the same time allowing managers to choose how to achieve the goals. MBO continued only one year after President Nixon resigned and was followed in 1977 by the Carter administration’s zero-base budgeting (ZBB), a process transplanted from the state of Georgia to the federal government. ZBB sought to optimize accomplishments by setting priorities based on program results that could be achieved at alternative spending levels. In developing budgets, these alternatives were to be ranked sequentially from the lowest level organization up through the department in an attempt to move away from traditional incremental budgeting and towards an annual reconsideration of how all resources would be used.

Although there is general agreement that all four of these efforts fell short of their stated goals, each has contributed to some aspect of the evolution of performance budgeting in the United States and their influences still can be seen today (GAO, 1998a).

Government Performance and Results Act

The most recent statutory reform, the Government Performance and Results Act of 1993,6 was designed to avoid a number of weaknesses identified in earlier reforms. First, the authors of the legislation hoped to strengthen the credibility and use of GPRA plans and reports by requiring consultation between the executive and legislative branches on overall agency goals and missions. Second, by linking planning and goal-setting to the program activity structures in agency budget requests, they wanted to increase the possibility that this information could be used directly in budget deliberations. Third, the backers of this legislation recognized that it would take time for the federal government to build capacity and create a receptive environment. In response, they phased the development of the GPRA strategic planning and reporting process, did not prescribe how plans would be organized as long as they were comprehensive, and, while they preferred outcome measures, accepted a range of measures initially.

GPRA was intended to address several broad purposes, including strengthening the confidence of the American people in their government; improving federal program effectiveness, accountability, and service delivery; and enhancing Congressional decision-making by providing more objective information on program performance. These objectives were to be achieved through three key elements:

  1. A multi-year strategic plan for every executive agency—covering five years and updated every three, the plan must include an agency mission statement and long-term general goals and objectives, describe strategies for achieving these goals, and explain the key external factors that could affect the achievement of goals.

  2. An annual performance plan—covering each program activity presented in agency budget requests,7 the plan aligns annual goals with long-term strategic goals. Agencies are required to have performance measures to gauge progress towards goals and to explain how the resulting performance data will be verified.

  3. An annual performance report—an agency review and discussion of its performance for the prior year compared with the performance goals established in its annual performance plan. Agencies are expected to explain any differences and to provide baseline and trend data to help ensure that their performance is reviewed in context (GAO, 1997).

Ten years after enactment, GPRA’s achievement record is mixed. Although the committee report accompanying the Act suggested that developing the capacity to relate the level of program activity with program costs, such as costs per unit of result, cost per unit of service, or cost per unit of output, should be a high priority, it was not a requirement of GPRA. In a 2004 survey of federal managers, GAO found that only 31 percent of those surveyed reported having such measures to a great or very great extent (GAO, 2004a). Among the issues that managers identified were difficulties in distinguishing between the results produced by the federal program and results caused by external factors or non-federal actors, such as with grants programs. Timely and useful performance information was not always available to federal agencies, making it more difficult to assess and report on progress achieved. Those who expected that GPRA would be used to determine which programs should or should not be funded, or who viewed performance budgeting as a contractual arrangement, that is, a certain level of funding for an agreed-upon level of performance, were bound to be disappointed with the progress made under GPRA.

A factor in the development of the Program Assessment Rating Tool was the current administration’s frustration with the inability to use GPRA information meaningfully in the budget process. Former OMB Director Mitch Daniels testified that “agencies spend an inordinate amount of time preparing reports to comply with it [GPRA] producing volumes of information of questionable value” (Daniels, 2002). Still, the current administration owes much to GPRA for providing the planning and performance reporting framework, and metrics on which to base and build its management initiatives.8

Program Assessment Rating Tool9

In outlining his management priorities shortly after taking office, President Bush put federal agencies on notice that

[u]nderperforming agencies are sometimes given incentives to improve, but rarely face consequences for persistent failure. This all-carrot-no-stick approach is unlikely to elicit improvement from troubled organizations. Instead, we should identify mismanaged, wasteful or duplicative government programs, with an eye to cutting their funding, redesigning them, or eliminating them altogether. (OMB, 2001)

To make good on this strategy, OMB developed PART, a 25-item questionnaire to help assess a program’s strengths and weaknesses. Designed to be evidence-based, PART does not create new information, but instead is expected to draw on a variety of sources, including authorizing legislation; strategic plans, annual plans, and performance reports required by the Government Performance and Results Act (GPRA); financial statements; independent program evaluations; and inspector general and GAO reports. Once a PART assessment is completed, the program receives one of four overall ratings: effective, moderately effective, adequate, or ineffective. A fifth rating, results not demonstrated, may be given—independent of a program’s numerical score—if OMB decides that performance information, performance measures, or both are insufficient or inadequate. This last category was added to make a distinction between unknown effectiveness and evidence that a program was performing poorly.

Divided into four sections, each with its own weight,10 the PART questions address: (1) program purpose and design (20 percent), (2) strategic planning (10 percent), (3) program management (20 percent), and (4) program results (50 percent). Several additional questions are asked depending on which of seven program types best describes how services are delivered.11 For federal programs that do not fit easily into any one category, OMB allows a “mixed” PART questionnaire, provided that the agency and OMB agree on which additional questions are relevant. Table 10.1 provides examples of the types of questions asked in each of the four sections.

Table 10.1. Selected PART questions

Program Purpose and Design: Is the program design effectively targeted, so that resources will reach intended beneficiaries and/or otherwise address the program’s purpose directly?

Strategic Planning: Does the program have a limited number of specific long-term performance measures that focus on outcomes and meaningfully reflect the program purpose?

Program Management: Are federal managers and program partners (including grantees, sub-grantees, contractors, cost-sharing partners, and government partners) held accountable for cost, schedule, and performance results?

Program Results/Accountability: Do independent evaluations of sufficient scope and quality indicate that the program is effective and achieving results?

Source: US Office of Management and Budget.

The first three sections use a yes/no format and the last uses a four-point scale—yes/large extent/small extent/no—that allows examiners to give agencies credit for progress in achieving goals. The results of the assessment, including explanations for each response, supporting evidence, and recommended follow-on actions, are publicly available on OMB’s website.12 Figure 10.1 provides an example from the worksheet for the US Department of Labor’s Job Corps program for questions 4.3 and 4.4 (Section 4: Program Results, questions 3 and 4). This program provides education and training services to disadvantaged youth.

Figure 10.1.Example of a worksheet for the Job Corps program

Source: US Office of Management and Budget.
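The scoring mechanics described above—four weighted sections, binary responses in the first three and a four-point scale in the fourth, and five possible rating labels—can be sketched in a short, illustrative Python fragment. The section weights and rating labels come from this chapter; the numeric credit assigned to “large extent” and “small extent” answers and the score cut-offs separating the rating bands are hypothetical placeholders, not OMB’s published values.

```python
# Illustrative sketch of PART's weighted scoring. Section weights and
# rating labels are taken from the chapter; the partial-credit values
# and rating cut-offs below are assumptions for illustration only.

# Section weights: purpose/design 20%, planning 10%, management 20%, results 50%
WEIGHTS = {
    "purpose_design": 0.20,
    "strategic_planning": 0.10,
    "program_management": 0.20,
    "program_results": 0.50,
}

# Four-point scale used in the results section; the first three sections
# use only the binary endpoints. Partial-credit values are assumed.
SCALE = {"yes": 1.0, "large extent": 0.67, "small extent": 0.33, "no": 0.0}

def section_score(answers):
    """Average question credit within one section, as a 0-100 score."""
    return 100.0 * sum(SCALE[a] for a in answers) / len(answers)

def overall_score(sections):
    """Weighted sum of the four section scores (0-100)."""
    return sum(WEIGHTS[name] * section_score(ans)
               for name, ans in sections.items())

def rating(score, results_demonstrated=True):
    """Map a numerical score to one of the five PART rating labels.
    'Results not demonstrated' overrides the numerical score, as the
    chapter notes; the numeric bands here are illustrative."""
    if not results_demonstrated:
        return "results not demonstrated"
    if score >= 85:
        return "effective"
    if score >= 70:
        return "moderately effective"
    if score >= 50:
        return "adequate"
    return "ineffective"
```

A program answering yes to every question would score 100 and be rated effective under these assumed bands, while a high-scoring program with inadequate performance data would still be rated results not demonstrated.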

Although this detail is available online, the PART results are primarily communicated through summary sheets in the budget.13 A PART summary sheet displays the program’s key performance measures, three years of budget information, significant findings, and follow-on actions in addition to the overall ratings and the individual numerical section scores. For those approximately 20 percent of programs that have been reassessed, the summary sheet also shows the year the program was last assessed and the status of follow-on actions. (Status markers include “no action taken,” “action taken but not completed,” and “completed.”) Figure 10.2 illustrates the PART summary for the Job Corps program, which was assessed for the first time in 2004.

Figure 10.2.Summary assessment for the Job Corps program

Source: US Office of Management and Budget.

The release of the President’s fiscal year 2007 budget in February marked the fourth year of a five-year effort by the US to formally assess all federal programs at least once in its executive branch budget formulation process. OMB’s plan to review approximately 20 percent of programs—or an equivalent percentage of the budget—annually has resulted in assessments of nearly 800 federal programs to date. With the exception of some support functions for programs assessed separately in earlier years and—according to OMB—some defense programs, OMB is on schedule to meet its goal.

Developing the Program Assessment Rating Tool

The President’s fiscal year 2004 budget request was the first time that the PART questionnaire was used, but it was the second year that OMB had assessed programs and presented the results in the President’s budget proposal. In its initial effort, OMB rated selected programs either as effective or ineffective and provided a brief narrative explanation for the rating. However, because there was no explanation for either the methodology or sources used to reach conclusions, reactions generally were negative.

Responding to this criticism, OMB created PART the following year and tested the instrument on 67 programs using a couple of approaches to train its program examiners (OMB, 2002a). First, OMB developed guidance that explained the purpose of each question and described the evidence required to support a yes or no answer. Although not expected to cover every situation, the instructions established general standards for PART assessments. Second, OMB tasked its Performance Evaluation Team (PET)—the group of examiners who developed the PART—to continue as advisors during the first round of PART reviews.14 This had the two-fold benefit of providing a source for examiners to ask questions and giving PET members an understanding of where the questionnaire or process needed to be clarified or revised.

In addition, OMB sought input from external groups (including GAO) during the pilot phase and assistance from outside budget and management experts who helped guide PART’S first year of implementation as members of the Performance Measurement Advisory Council (PMAC) (OMB, 2002b). As a result of these steps, some important changes were made. OMB agreed to publish individual section scores because it was suggested that these better represented OMB’s views of a program’s strengths and weaknesses than the overall rating alone. To recognize program differences, OMB created sub-sets of questions for each of the seven program types and, in section 1, purpose questions were dropped or de-emphasized (reviewers thought they were too political) so that only questions about program design remained. In section 4 on program results, responses were changed from the yes/ no format to a four-point scale, and performance targets and at least one efficiency measure were now required for all programs (OMB, 2002b). OMB concluded that these PART revisions lessened subjectivity and reduced the likelihood that the tool would be applied inconsistently (OMB, 2002c).

At the end of the first rating cycle, OMB organized a group of OMB staff and agency officials to check for consistency. The Interagency Review Panel (IRP) examined 10 percent of the PART assessments using a sub-set of the PART questions that OMB staff identified as being the most subjective or difficult to interpret. The IRP also reviewed the results of formal agency appeals to see if similar situations were treated equitably. The IRP was not reconstituted for the fiscal year 2005 budget process, but OMB requested a similar review by the National Academy of Public Administration (NAPA), the results of which were not made public. Since then, OMB has done consistency checks using its own staff (OMB, 2003a).

Although some changes were made to the PART questions between the first and second year, subsequent changes to the tool have mainly been refinements. However, OMB continues to make process improvements and to further clarify terms and concepts each cycle. Notably, OMB has shifted the timing of PART reviews so that they occur in the summer in advance of OMB’s fall review of agency budget requests. OMB has also worked with the evaluation community to identify evaluation approaches considered best suited for different types of programs (OMB, 2003b, 2004). The fiscal year 2006 PART process has a new option for an abbreviated reassessment. Abbreviated reassessments allow updating of selected questions instead of the entire PART, which is intended to give programs the opportunity to begin using and reporting on new performance measures more quickly than if they were to complete a full reassessment (OMB, 2006).

Using PART to rate programs

The need for a credible evidence-based rating tool for programs was a major impetus in developing PART. However, there is no single bottom-line for most federal programs. Given that programs almost always have multiple purposes and goals, OMB staff had to decide how to apply general principles to specific cases. Inevitably, bottom-line ratings force choices as to which of multiple goals best exemplify a program’s mission. It is not surprising that GAO’s first PART review found inconsistencies in application or that OMB’s internal reviews point to continuing issues (GAO, 2004b). The pressure on agencies (and OMB) to show improvement encourages a determination of program effectiveness even when performance data are unavailable, the quality of those data is uneven, or they convey a mixed message on performance. To create an instrument that is comprehensive without being overwhelming and broad enough to capture information across dissimilar programs, yet meaningful for a specific program, is a tall order. Consequently, PART reflects trade-offs.

OMB’s effort to standardize by using a yes/no format has at times resulted in oversimplified answers and confusion about what was being rated. Agency officials commented that the yes/no format is a crude reflection of reality in which progress in planning, management, or results is more likely to resemble a continuum than an on/off switch. This was particularly troublesome for questions containing multiple criteria for a “yes” answer, which contributed to a number of inconsistencies across program reviews in the first year. OMB responded by either splitting questions or making clear that each element of a multi-part criterion must be satisfied to answer the question affirmatively (GAO, 2004b).

Other discrepancies occurred when the responses to related questions were contradictory or when it was left to OMB examiners to decide whether a measure was long-term or an intermediate target. Again, OMB highlighted the relationship between linked questions to reduce this problem. Although OMB has tried to clarify terms, many PART questions still contain subjective language that is open to interpretation. Examples include terms such as “ambitious” to describe sought-after performance measures. Because the appropriateness of a performance measure depends on the program’s purpose, and because program purposes can vary immensely, an ambitious goal for one program might be unrealistic for a similar but more narrowly defined program (GAO, 2004b).

Inconsistencies continue to surface in examiners’ responses to agency progress—some examiners give credit for improvements while others do not. The limited availability of credible evidence on program results is a longstanding condition that continues to constrain OMB’s ability to use PART to rate programs’ effectiveness. The PART confirmed findings by GAO and others on agency data and evaluation capacity generally—that federal agencies do not have sufficient data and studies to evaluate the impact of federal programs (GAO, 1998b; Blalock and Barnow, 2002). Table 10.2 shows that except for 2005, “results not demonstrated” is the rating given most frequently to programs.

Table 10.2. The cumulative PART program results by rating category, 2002-05

                           2002   2003   2004   2005
Moderately effective        24%    26%    26%    29%
Results not demonstrated    50%    38%    29%    24%
Total programs              234    407    607    794

Source: US Office of Management and Budget.

“Results not demonstrated” or RND ratings have implications beyond PART because they can negatively affect how an agency is rated by another high-visibility indicator, the President’s Management Agenda (PMA) scorecard. Using a simple “stoplight” grading system, agencies are assessed each quarter on meeting standards for five government-wide initiatives: (1) Human Capital, (2) Competitive Sourcing, (3) Financial Performance, (4) E-Government, and (5) Budget and Performance Integration. To get to “yellow” status on the Budget and Performance Integration initiative, agencies must not have more than 50 percent of their programs rated RND. For an agency to achieve “green” on the scorecard, fewer than 10 percent of its programs could have received an RND PART rating for more than two years in a row. This rating is given not only when OMB determines that there are insufficient data to assess a program, but also when it and the agency cannot reach agreement on which goals or measures to use. As one agency official remarked to GAO, “they really understand incentives at OMB” (GAO, 2005b).
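The RND-related scorecard thresholds just described can be expressed as a simple check. This is a deliberately simplified sketch covering only the RND criteria for the Budget and Performance Integration initiative; the real scorecard assesses agencies against a fuller set of standards, and the red/yellow/green logic below is an assumption built from the thresholds quoted in this chapter.

```python
# Simplified sketch of the Budget and Performance Integration "stoplight"
# criteria tied to RND ratings: yellow requires no more than 50% of an
# agency's programs rated RND; green additionally requires fewer than 10%
# rated RND for more than two consecutive years. Other scorecard criteria
# are omitted for simplicity.

def rnd_share(ratings):
    """Fraction of an agency's assessed programs rated RND."""
    return sum(r == "results not demonstrated" for r in ratings) / len(ratings)

def bpi_status(ratings, persistent_rnd_share):
    """Return a stoplight status. `persistent_rnd_share` is the fraction
    of programs holding an RND rating for more than two years in a row."""
    share = rnd_share(ratings)
    if share <= 0.50 and persistent_rnd_share < 0.10:
        return "green"
    if share <= 0.50:
        return "yellow"
    return "red"
```

As the agency official's remark suggests, tying RND counts to a publicly visible quarterly grade gives agencies a concrete incentive to negotiate measures with OMB rather than remain unrated.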

This increased focus on performance is often reflected in improved ratings when programs originally rated RND are reassessed. OMB gives priority to these programs, and agencies are motivated to make changes. GAO found that when reassessed, 86 percent of these programs were rated “adequate,” “moderately effective,” or “effective.” Better results might be expected because programs are only reassessed when OMB determines that significant changes have been made to address deficiencies. However, what is less obvious is that the critical determinant of whether a program moves out of RND when reassessed is how high the program scored initially in the program results section (section 4). In other words, positive changes in program purpose and design, strategic planning, and program management were inconsequential if the program had scored low on program results (Government Accountability Office, 2005a).

Then there is the question of the unit of analysis used to assess program performance. Since there is no standard definition for a program,15 what to rate and how useful the results will be often depends on who will use the information and for what purpose.16 In some instances, what was rated aligned with an “account”—for example, the visa and consular services of the Department of State—which is the level at which congressional appropriators make funding decisions. In other cases—such as Medicare17—the unit of analysis for the PART assessment is commonly recognized as a federal program. More often, however, OMB created units of analysis tied to discrete funding levels by both disaggregating and aggregating certain activities that make up a PART-defined program. In some cases, disaggregating for PART reviews ignored interdependencies by artificially isolating related activities from the larger contexts in which they operate. Conversely, in other cases in which OMB aggregated multiple activities with diverse missions and outcomes, it became difficult to settle on a single measure (or set of measures) that accurately captured the multiple missions of these diverse components. Although OMB acknowledges that there is a need for flexibility in defining a program when the budget structure does not reflect the way a program is managed, both of these “unit of analysis” issues exacerbated the problems caused by the lack of available planning and performance information (GAO, 2004b).

PART and the Government Performance and Results Act

OMB should be given credit for creating a tool that reflects the administration’s management principles and the priority given to using performance information in OMB’s decision-making process, but its focus primarily on program-level assessments contrasts with the emphasis of GPRA on an agency’s goal structure and the contributions of multiple programs in achieving results. Although the two processes can be complementary, the intent of the legislative framework was to bring all interested stakeholders into the process to build broad support and give credibility to the results. So while GPRA and OMB’s PART initiative share similar goals—including identifying what is working well and what is not—PART clearly is designed first and foremost to meet OMB’s needs in the federal budget process.

OMB has tried to clarify the relationship between GPRA and PART, but has mainly relied on agencies to decide how they should integrate the two requirements. According to OMB, PART reinforces performance measurement under GPRA by encouraging development of performance measures according to the outcome-oriented standards of the law and by requiring that agency goals be appropriately ambitious.

The PART process should also help agencies develop and identify meaningful performance measures to support their GPRA reporting…. When annual plans and reports include programs that have been assessed in the PART, the measures used for GPRA should be the same as those included in the PART. In all cases, performance measures included in GPRA plans and reports should meet the standards of the PART—they must be outcome oriented, relate to the overall purpose of the program, and have ambitious targets. (OMB, 2006)

To provide decision-makers with program-specific, outcome-based performance data useful for executive budget formulation, OMB had stated its intention to modify and at times replace GPRA goals and measures with those developed under PART. As a result, the goals and measures used in strategic planning are expected to conform to those judged by OMB to be most useful, even if the result eliminated or conflicted with the goals and measures of other GPRA stakeholders. The federal budget process, which is closed until decisions are made, does not provide an opportunity to vet changes with other stakeholders until after the fact. Moreover, while PART does not eliminate the departmental strategic plans created under GPRA, many OMB and agency officials told GAO that PART is being used to shape strategic plans. The emphasis is shifting so that eventually the performance measures developed for PART will drive agencies’ strategic planning processes (GAO, 2004b).

This also was made clear when OMB guidance called for agencies to substitute a performance budget structure starting in fiscal year 2005 for the annual GPRA performance plan18 (OMB, 2003c). Although related primarily to the President’s Management Agenda budget and performance integration initiative and not directly to PART, OMB expects that agencies will analyze programs that contribute to a goal, “including their relative roles and effectiveness, using Program Assessment Rating Tool (PART) assessments when available” (OMB, 2003d). These performance budget presentations generally have not been well received by appropriators. There are a number of reasons for this, not least of which is that it is not the structure used to appropriate funds. In its most recent guidance OMB has suggested that agencies “should consult with relevant congressional appropriations committees to ensure their support for modifications to the format, including the use of the results of PART assessments, of your agency budget documents” (OMB, 2005a).

Integrating PART into executive branch budget formulation

OMB has taken the fruits of the GPRA planning process—goal-setting and performance measurement—and strategically applied them to budget formulation using PART. Not only is OMB invested in using PART for its reviews, but as the timing of PART assessments has moved earlier in the budget formulation process, OMB has made clear that it hopes to influence internal agency budget deliberations as well.

Over the four years of its existence, PART has evolved into a more interactive process between agencies and the OMB staff who rate their programs. Agency self-assessments are no longer optional as they were the first year. Instead, they have become the first step in the assessment process with agencies providing support that they hope will lead to positive ratings by OMB. Agencies increasingly are involved in negotiating which programs will be assessed during the year, setting performance targets, and recommending follow-on actions. OMB holds regular meetings with agency leads responsible for fostering budget and performance integration in their organizations. In turn, PART appears to be a catalyst for bringing agency budget, planning, and program staffs together to respond to PART assessments.

OMB believes that PART provides a systematic way of asking performance-related questions that have always been important in OMB’s reviews of agencies’ budget requests. Both OMB and agency officials said that this has helped to ensure that OMB staff with varying levels of experience are asking the same types of questions and that it has also fostered a more disciplined approach to discussing program performance and agency management.

Agency officials told GAO that, by encouraging more communication between departments and OMB, PART helped illuminate both how OMB makes budget decisions and the way examiners think about program management. OMB managers and staff reported that it led to richer discussions during internal deliberations on what a program should be achieving, whether the program was performing effectively, and, if not, how program performance could be improved. In addition, both agency and OMB officials said that the attention given to programs that were not routinely reviewed was a benefit, although some agency officials complained that this was a burden for small programs that did not have as many resources to devote to the reviews (GAO, 2004b).

By 2003—the second year of PART—OMB had developed tools to help collect and transmit data, but agencies still reported that complying with the PART process was a strain on resources. They also provided anecdotal evidence of OMB examiners who had not had sufficient time to review PART reassessments (GAO, 2005b). OMB acknowledged that resource constraints were partly responsible, because reassessments were taking almost as long as the original reviews.

With PART now in the middle of its fifth year, there continues to be concern that assessments are time-consuming for both agencies and OMB. According to some agency officials, OMB has tried to reduce program examiners’ workload by defining programs in larger units or limiting the number of programs reassessed annually. Whether or not these managers’ perceptions are the exception, capacity issues remain even after OMB limited reassessments to programs where there was significant evidence of change. For the first time in the current cycle, OMB has given agencies, under certain conditions, the option of seeking an abbreviated program reassessment instead of a full reassessment (OMB, 2006). Whether or not this was done, as stated, to speed the updating of performance information, it also suggests that the PART process continues to be labor-intensive.

Table 10.3 provides a timeline for the executive branch budget formulation process, including milestones for the PART assessment.

Table 10.3. Major steps in the executive branch budget formulation process

| What happens? | When? |
| Agencies and OMB agree on programs to be assessed or reassessed | Early March |
| The Performance and Evaluation Team (PET) provides revised PART guidance and workbook, training for agencies and for examiners on PART and PARTWeb | March |
| OMB issues planning and policy guidance for the next President’s budget | Spring |
| Agencies’ draft responses/evidence for PART program reviews due to Resource Management Offices (RMOs)a | April |
| OMB and executive branch agencies discuss budget issues, presidential priorities, and options that will be developed for OMB’s upcoming fall budget reviews | Spring/Summer |
| Discussions about PART program assessments between agencies and OMB | May/early June |
| OMB issues instructions for submitting budget data and materials | July |
| RMOs submit PART assessments to PET for consistency check and feedback | June/July |
| RMOs give revised PART assessments and related draft summaries to agencies | July |
| Agency PART appeals and comments due to OMB | Early August |
| Appeals board provides decisions | Mid-August |
| PARTs updated to reflect appeals board decisions | Late August |
| Executive branch agencies provide budget requests with supporting materials, including how they will address OMB follow-on actions for new and reassessed PARTs | September |
| RMOs draft summaries and agree to improvement plans | September |
| Conduct PART Director’s Review | Mid-October |
| OMB conducts fall budget reviews. OMB staff review agency budget proposals in light of presidential priorities, program performance, and budget constraints. They raise issues and present options to the Director and other OMB policy officials | October–November |
| Agencies submit year-end performance data for all measures for all PART assessments (old and new); update follow-up actions for previously assessed PART programs; new actions may be added if earlier actions were completed | Mid-November |
| OMB briefs the President and advisors on proposed budget policies. The OMB Director makes funding recommendations for the President’s budget request | Late November |
| Decisions on budget requests are passed back to executive branch agencies | Late November |
| President’s budget request to Congress | Early February |
| PART assessments released with the President’s budget request and supporting information posted to | |

OMB is organized into Resource Management Offices (RMOs), three statutory management offices that oversee government-wide federal financial management, federal procurement, and information technology and regulations, plus a number of OMB-wide staff and support offices. Program examiners are employed in the RMOs, which are organized by broad areas of government, such as National Security and International Affairs, and Natural Resources, Energy and Science. Examiners review agency budget submissions and make recommendations on funding, policy, and management issues. Currently this process includes assessing selected programs with PART.

Source: GAO adaptation of OMB information.


OMB claims that PART has proven its value in helping the President formulate budget priorities. No doubt PART has played some role, but it is not always clear how much weight a PART assessment carries in resource decisions (although there have been any number of regression analyses done to try to glean this information, including by GAO in the 2004 PART report). A better measure may be found in the list of programs that the administration proposed for termination, reduction, or major reform in its fiscal year 2006 budget proposal (OMB, 2005d).19 Of the approximately 150 discretionary programs proposed for termination or reduction, no more than a third had gone through a PART assessment.

Similarly, OMB identified 35 programs in 11 agencies that contributed to some aspect of community and economic development. The President’s proposal consolidated 18 of the 35 programs, which OMB claimed accounted for 89 percent of the $16.2 billion spent annually for this area. Many assumed that the consolidation proposal was backed by PART assessments when in fact only 8 out of the 18 programs had gone through such an assessment. Ultimately the proposal was not enacted, but it did promote closer examination of these programs, led to Congressional hearings, and gained considerable media coverage. Moreover, agency officials whose programs would have been affected described benefits from the cross-cutting PART review. In the short term, it helped them reach agreement with OMB on goals and objectives for similar programs and several expected that better coordination among program managers would continue as well.

Although OMB generally proposed to increase funding for programs that received ratings of “effective” or “moderately effective” and proposed to cut funding for programs rated “ineffective,” GAO’s reviews confirmed OMB’s statements that funding decisions were not applied mechanically and that the relationship between performance levels and budget decisions is not one-dimensional (GAO, 2004b). OMB has consistently said that a good rating does not guarantee a specific level of funding and that, if the obstacle to program improvement was budgetary, a program that had received a low score might get an increase to correct deficiencies. Different choices might be made in other situations. For example, if a program’s problems are prohibitively difficult or costly to fix, shifting funds to a related program with better results might be the best solution. If anything, OMB guidance suggests that most programs should expect to operate with flat budgets. Agencies are asked to set targets that are achievable with their current program characteristics and to assume current budget levels (OMB, 2005b). The President’s budget for fiscal year 2006 made clear, however, that programs that cannot demonstrate their value over several years are likely to see their resources moved to higher-performing programs (OMB, 2005c).

There are several messages to be taken from this. First, no one should be surprised that presidential proposals reflect policy positions made with or without performance analysis. Second, in gaining relevance to the resource allocation process, PART illustrates the trade-offs and risks associated with this more active approach to performance management and budgeting. Far from removing politics from budgeting, the linkage of performance to budgeting raises the stakes associated with performance goals and measures. As such, the performance analysis marshaled to support budget decisions is potentially more vulnerable to political debate and conflict. Third, the “success” of performance budgeting initiatives should not be judged simply by budget changes or program terminations. Previous reforms have been doomed by inflated and unrealistic expectations that performance data could resolve thorny political problems and dilemmas. However, performance information can certainly be helpful to policy-makers in demonstrating whether programs are contributing to their stated goals, are well coordinated with related initiatives, and are targeted to the intended beneficiaries.

PART and program improvement

OMB has stated that PART’s purpose is much broader than assessment and its use in budget decision-making; OMB’s goal is to create a results-oriented federal government in which programs continue to improve. To understand better what this means in practice, GAO analyzed the 1,700 follow-on actions OMB expected agencies to take as the result of PART assessments for 2002-04, all the information available at the time of GAO’s 2005 review.20 As Figure 10.3 shows, these actions have been distributed fairly consistently each year among those expected to improve program management, assessment, or design. The percentage of recommendations explicitly linked to funding proposals has steadily declined from 20 percent in 2002 to 12 percent in 2004.

Figure 10.3. PART recommendations by program type

Source: GAO analysis of OMB data.

GAO found that problems identified by the PART assessments and the recommendations intended to address them were not always clear. Regardless of what types of deficiencies were identified in a PART assessment, more than half of all PART recommendations were aimed at improving the process of program assessment through developing outcome measures and/or goals, creating efficiency measures, or improving data collection. This was especially true for programs rated “results not demonstrated,” but was also a common follow-on action for programs rated “effective” and “moderately effective.” Moreover, programs assessed for the first time in 2004—the most recent year for which data were available—were just as likely to get a recommendation to improve performance assessment information as during the first PART cycle. Of the 797 follow-on actions recommended in the first two years, OMB reported that 30 percent were considered fully implemented, and nearly half of these were for improvements in performance measures or data.

In many ways these findings could have been anticipated, given the number of programs that OMB determined had insufficient or inadequate information on which to rate their performance. It is also reasonable to argue that improving managers’ ability to assess program outcomes and identifying information gaps are necessary steps before observable program improvements can be expected. OMB has set an ambitious agenda, and it may be that eventually PART will be credited with the program improvements that the administration and others claim (GAO, 2005a).

Congressional response to PART

Despite some effort to communicate PART results, OMB has had limited success in engaging Congress in the PART process. In GAO’s 2005 report, interviews with Congressional staff revealed a number of issues and concerns about the design of the PART tool, the results of assessments, and the way OMB has communicated PART results.21 Many Congressional staff of both parties expressed skepticism that OMB’s assessments could be impartial and devoid of political influence, or that PART could be anything other than an executive branch tool. Some thought that the assessments were targeted at a general audience and did not offer the sophisticated analysis that they or their members would find useful. Some were frustrated with the lack of detail provided in PART summary sheets as to why a program was rated a certain way. Congressional staff were unlikely to accept conclusions about a program’s performance without seeing the supporting evidence, particularly when the rating was contrary to what they believed to be true about a program. Several said that OMB’s website was difficult to use and that finding information on it was too time-consuming. Others felt that posting information to a website was not a meaningful communication tool if OMB wanted to signal the importance of PART results to Congress.

As with reconciling PART with GPRA requirements, agencies have been at the front line in explaining PART and the results of program assessments to their Congressional committees of jurisdiction. In the view of some Congressional staff, agencies are not in a position to explain the decisions made in OMB’s assessments or to promote the value of PART, particularly if they have received poor assessments. Without consultation by OMB staff on how PART might serve Congressional needs, and without agreement on what to assess and how to measure performance, Congressional staff almost universally remained unconvinced of the value of PART assessments and ratings to Congressional committees (GAO, 2005a). Congressional committees have expressed both support for and displeasure with PART in committee reports, and the House version of the fiscal year 2007 appropriations bill for Labor, Health and Human Services, and Education includes a general provision that, if enacted, would increase the role of Congress in the administration’s PART studies. Specifically, the provision states that,

Unless specifically exempted, no funds are provided in the Act to conduct or participate in the conduct of a PART analysis or study unless the Committees on Appropriations of the House and Senate have approved of the study, inclusive of the data on which the analysis will be based, the methodology to be employed and the relative weight of each of the four factors that will be assigned to the study in determining a final score.22

Future of PART

The introduction of PART marks another milestone for performance management and the federal government’s 50-year quest to have performance budgeting take root. Underlying PART and previous efforts is the belief that performance information will only be taken seriously once it becomes institutionalized and serves as a primary driver behind management, including funding decisions. GPRA created a structure for formulating strategic goals and measures, and standardized reporting of this information. Given that strategic plans were the exception and not the rule in federal civilian agencies, this was no small achievement. However, unlike GPRA, which is based in statute, PART may end, as it began, with this administration. Whether or not an assessment process continues, agency officials GAO interviewed credited the PART with increasing attention to the use of performance measurement in day-to-day program management. According to many of these officials, PART has lent support to internal agency initiatives and—despite the limitations of scorecards and bottom-line ratings—has highlighted the need for improvements, motivated agencies to seek more ambitious measures, and underscored the need for quality evaluations (GAO, 2005a).

After five years of considerable effort, OMB has an infrastructure in place for what could be argued is a missing piece of GPRA: an assessment component that brings planning and performance full circle. Since its inception, the creators of PART have tried to build discipline and standardization through guidance, training, consultation, and formalized assessment consistency checks. They have also developed PARTWeb to help ease the exchange of information between agencies and OMB. With the release of the President’s fiscal year 2007 budget request, OMB launched ExpectMore.gov, a website “aimed specifically at making performance information transparent and readily available to the American people (that is, the first purpose)” (OMB, 2006). This latest effort is aimed at creating a public demand and expectation that assessments of federal program performance will continue. Figure 10.4 displays information on the US Department of Labor’s Job Corps program, used in previous examples, and what OMB has done to make the results more accessible on its new website.

Figure 10.4. PART assessment for the Job Corps program on ExpectMore.gova

a ExpectMore.gov includes more detailed information about the PART assessment rating process, supporting documents for the specific assessment, and program-related information from the agency’s website.

Source: US Office of Management and Budget.

The challenge in continuing PART or another systematic approach to assessment is how to use such a tool for cross-cutting reviews of spending programs and tax expenditures that contribute to a goal or policy. Some initial efforts at developing common measures have had mixed reviews from the agencies involved, and more will need to be done to ensure that the measures are meaningful and provide fair assessments across similar activities. OMB made the decision that it was important to have 100 percent coverage of federal programs, but by doing so it placed less emphasis on examining the relationships between and among programs, which limits PART’s usefulness in re-examining or restructuring federal missions. The administration may have recognized this when it sent a legislative proposal to Congress last year that, if enacted as proposed, would create two kinds of commissions that would continue to rely on PART-like assessments.

The Government Reorganization and Program Performance Improvement Act of 2005 would shift responsibility for program assessment away from OMB to commissions established by Congress that report to the President. The President would decide which agencies or programs the results commissions would study, but would target overlaps in programs, policy areas, or jurisdictions with the goal of “consider[ing] and revis[ing] Administration proposals to restructure or consolidate programs or agencies to improve their performance.” The second type of commission—a Sunset Commission—would “consider Presidential proposals to retain, restructure, or terminate programs” following a schedule of regular review—at least every ten years—and reauthorization. Programs would be terminated automatically if they were not reauthorized by Congress two years after the review, or after four years if Congress chose to extend the deadline for reauthorization (OMB, 2005e).


OMB, through its development and use of the PART, has more explicitly infused performance information into the budget formulation process, increased the attention paid to performance measurement and program evaluation, and challenged Congressional decision-makers and others to act on the PART assessments. The commitment of senior OMB officials and staff clearly signals the importance of this strategy in meeting the priorities outlined in the President’s Management Agenda, and PART provides agencies with powerful incentives for improving data quality and availability. If these measures and supporting data meet the needs of other stakeholders as well, then eventual acceptance of performance metrics in budgeting may come as the result of a “culture change” brought about by the generation of valid information (Joyce, 1993). At the very least, OMB should be credited with opening up for scrutiny—and potential criticism—its review of program performance and then making these assessments available to a potentially wider audience.

Performance budgeting initiatives, such as PART, should not be expected to provide the answers to resource allocation questions in some automatic or formula-driven process. Despite its bottom-line conclusions, PART is diagnostic, not definitive, and while it can add an important perspective, it is, as OMB has acknowledged, only one among many factors in making resource decisions. In fact, long after PART ratings have been forgotten, OMB’s influence over measures and data collection may be the true legacy of the PART process. Moreover, if PART ratings continue to highlight the need for evaluation research and data, and if, in turn, this leads to better program design, greater effectiveness, or increased efficiency, then the enormous time and effort put into the PART process by OMB and agencies may pay off. What is harder to predict is whether assessments will continue—and in what form—after OMB reaches its milestone next year of assessing all federal programs at least once.

As noted above, performance questions and problems do not have a single budgetary answer—performance problems may at times warrant greater investments to improve operations, while high-performing programs may suffer cuts if they prove to be lower in priority than other competing claims. As the agenda formation literature suggests, reframing the kinds of questions raised in policy debates constitutes an important political event with potentially major consequences for decisions (Kingdon, 2002). Thus, we might reasonably expect performance budgeting reforms like PART to force issues associated with the outcomes and efficiency of programs onto decision-makers’ radar screens. At the same time, we should recognize that technical or analytic exercises do not do a good job of addressing equity considerations, unmet societal needs, and the competing values of policy choices for different communities and individuals. Other factors such as the overall budget situation, the state of the economy, and the appropriate role of the federal government are also important considerations (GAO, 2005a). This we leave to elected officials and the political process, where as citizens we can provide our own assessments of how well the federal government is doing at election time.


The author would like to acknowledge Paul Posner, Jackie Nowicki, and others who contributed to the 2004 and 2005 GAO PART reviews.

In addition to OMB documents related to the President’s Management Agenda and its Budget and Performance Integration initiative, the other main sources for this chapter are GAO (2004b, 2005b).

We examined the first year of PART’s implementation (2002) and use in the fiscal year 2004 budget process in our first report. Our objectives were to determine if PART was being applied consistently and whether it complemented the existing statutory framework established by the Government Performance and Results Act (GPRA) (Pub. L. No. 103-62 (1993)). We were also asked to identify the strengths and weaknesses of PART as a budget and evaluation tool. For that report, GAO reviewed OMB materials on the development and implementation of the PART as well as PART assessment results. To assess whether PART was applied consistently, GAO performed analyses of data from the PART program summary and assessment worksheets for each of the 234 programs rated. GAO also reviewed 28 programs in nine clusters covering food safety, water supply, military equipment, procurement, provision of health care, statistical agencies, block grants to assist vulnerable populations, energy research programs, wildlife management, and disability compensation to determine if comparable or disparate criteria were applied in producing the PART results for these program clusters.

As part of GAO’s examination of the usefulness of the PART as an evaluation tool and to obtain agency perspectives on the relationship between PART and GPRA, GAO interviewed department and agency officials, including senior managers, and program, planning, and budget staffs at the Departments of Health and Human Services (HHS), Energy (DOE), and Interior (DOI). Concurrently, OMB officials at each level of the organization were interviewed regarding their experiences. We selected these three departments because they had a variety of program types (for example, block/formula grants, competitive grants, direct federal, and research and development) that were subject to the PART and could provide a broad-based perspective on how PART was applied to different programs. Although not generalizable, the consistency and frequency with which similar issues were raised by OMB and agency officials suggested that our review reliably captures significant aspects of PART as a budget and evaluation tool. Finally, GAO did a series of regression analyses to show the relationship between PART scores and funding levels in the President’s fiscal year 2004 budget. Overall PART scores had a positive and statistically significant effect on discretionary program funding. The programs evaluated by OMB include both mandatory and discretionary programs. Since funding for mandatory programs (mainly entitlements) is in almost all cases not the result of annual budget decisions, regression results for mandatory programs showed—as expected—no relationship between PART scores and the level of funding in the President’s budget proposal. PART scores explained at most about 15 percent of the proposed funding changes in fiscal year 2004, leaving a large portion of the variability in proposed funding changes unexplained. This suggests that most of the variance is due to institutional factors, program specifics, and other unquantifiable factors.
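The shape of this kind of analysis can be sketched as follows. This is a minimal illustration, not GAO's actual model: the data here are synthetic (generated with an arbitrary seed), the variable names are invented for the example, and the real analysis distinguished mandatory from discretionary programs and worked with actual budget proposals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of discretionary programs

# Synthetic overall PART scores (0-100 scale)
part_scores = rng.uniform(20, 95, size=n)

# Synthetic proposed funding change (%): a weak positive link to the
# PART score, swamped by noise standing in for institutional factors
funding_change = 0.08 * part_scores + rng.normal(0, 8, size=n)

# Ordinary least squares: funding_change ~ intercept + slope * score
X = np.column_stack([np.ones_like(part_scores), part_scores])
beta, *_ = np.linalg.lstsq(X, funding_change, rcond=None)
intercept, slope = beta

# R^2: share of variance in funding changes explained by PART scores
resid = funding_change - X @ beta
r_squared = 1 - resid.var() / funding_change.var()

print(f"slope = {slope:.3f}, R^2 = {r_squared:.2f}")
```

With parameters like these, the slope comes out positive but the R-squared stays small, mirroring the qualitative pattern GAO reported: a statistically detectable relationship that nonetheless leaves most of the variation in proposed funding unexplained.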

Our October report revisited some of the findings raised in our earlier report—the relationship of PART and GPRA, unit of analysis and measurement concerns, and workload issues. However, the focus of our recent work was on the effect PART-recommended follow-on actions were having on program operations and results, and on Congressional perspectives on OMB’s efforts to involve Congress in the PART process. Once again we interviewed OMB officials and agency managers at HHS and DOE. In addition we spoke with officials at the Department of Labor (DOL) and an independent agency, the Small Business Administration (SBA). We compiled a database of PART assessments and recommendations for the three years available, 2002 through 2004. Our purpose was to discern possible changes or patterns in recommended follow-on actions and the relationships between the type of recommendation made, overall ratings, PART section scores, and answers to selected PART questions. For our final objective, to gauge the effectiveness of OMB’s outreach to Congress, we interviewed House and Senate staff (majority and minority) of the Authorization and Appropriations Committees with oversight jurisdiction for our selected departments and agency. We also reviewed Congressional hearing records and reports for any mention of the PART or PART results for specific programs. Our findings are not generalizable to the PART process for all years or all programs.

These and other GAO reports are available at GAO’s website: <>. The General Accounting Office was later renamed the Government Accountability Office.

Article I, section 9, clause 7 (the so-called Appropriations Clause) refers to the power of Congress not only to appropriate funds, but also to prescribe the conditions governing the use of those funds. This clause has been described as the most important single curb on Presidential power in the US Constitution.

Entitlements currently account for over two-thirds of federal expenditures annually and are growing at a much faster rate than funding for what are referred to as discretionary programs. Entitlements are most often formula based or determined by eligibility criteria that change infrequently, unlike funding for discretionary programs, which is controlled by the Appropriations Committees and subject to the shifting priorities of the political process.

For an in-depth discussion of earlier reforms, implementation approaches, and contributions to performance budgeting in the US federal government, see GAO (1998a).

The Government Performance and Results Act (Pub. L. No. 103-62 (1993)) requires federal agencies to develop strategic plans, annual performance plans, and annual program performance reports. GAO has published a large body of work on GPRA that can be found at its website, but two reports in particular are worth noting. (1) The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO, 1997). Under the Act, GAO was required to report to Congress on agencies’ progress in implementing GPRA and the prospects for compliance by executive agencies beyond those participating in the pilot phase. The last of the reports to meet this mandate included the results of a survey of federal civilian managers and senior executives to get their perspectives on the Act and implementation challenges. (2) Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results (GAO, 2004a). This ten-year GPRA retrospective found that the Act had succeeded in establishing strategic planning and a more results-oriented focus in the federal government, but that many challenges remained, including difficulties in setting outcome-oriented goals, collecting useful data on results, linking institutional, program, unit, and individual performance measurement and reward systems, as well as inconsistent leadership commitment in agencies.

By requiring that an agency’s annual performance plan cover each program activity in the President’s budget request for that agency, GPRA established a basic foundation for linking resource allocation decisions and results. However, recognizing that agencies’ program structure is not consistent across appropriations accounts (the level that ties to funding decisions), the Act allows agencies to consolidate, aggregate or disaggregate program activities so long as no major function or operation of the agency is omitted or minimized. For a recent in-depth discussion of restructuring efforts related to OMB’s performance and budget integration initiative, see GAO (2005a).

The President’s Management Agenda includes five government-wide initiatives: (1) Strategic Management of Human Capital, (2) Competitive Sourcing, (3) Improved Financial Performance, (4) Expanded Electronic Government, and (5) Budget and Performance Integration, of which the Program Assessment Rating Tool (PART) is a key element.

In February 2006, OMB released the most recent set of PART assessments along with the President’s fiscal year 2007 budget request. Although GAO has not reviewed this latest PART cycle, information from OMB on the results of these assessments has been included in tables and narrative as appropriate to provide the latest information available.

The weight given to questions can be altered to emphasize key factors for a specific program and a “not applicable” response is sometimes used when a question is not relevant. However, the overall weight given to a section is the same for all PART assessments.

The seven program types are: (1) competitive grants, (2) block/formula grants, (3) capital assets and service acquisition programs, (4) credit programs, (5) regulatory-based programs, (6) direct, federally-delivered programs, and (7) research and development programs.

<>. This website also includes guidance and further information on the Budget and Performance Initiative of the President’s Management Agenda.

Summary sheets were published the first year in a separate volume of the President’s fiscal year 2004 budget and in subsequent years on a CD-ROM included in the Analytical Perspectives of the President’s Budget volume and at OMB’s website. A comprehensive list of programs, with their ratings and funding, is updated annually in the Analytical Perspectives.

The PET model has continued to be used, with members rotating in and out at the beginning of each new cycle of assessments. In the last two cycles, the PET has also conducted consistency checks after PARTs are completed but before the final PART is passed back to the agency, at which point the agency has the opportunity to appeal.

There is no standard definition for the term “program” in the US federal budget. For purposes of PART, OMB describes a program (or unit of analysis) as (1) an activity or set of activities clearly recognized as a program by the public, OMB, and/or Congress; (2) having a discrete level of funding clearly associated with it; and (3) corresponding to the level at which budget decisions are made. Although these criteria may be reasonable, not having a recognized definition for a program complicates the PART assessment process because it adds another potential point of disagreement between OMB, the agency, and Congress on what should be included or excluded in an assessment. Consider the possible advantages or disadvantages if a PART-defined program is (a) an entire agency, including administrative functions; (b) a “name-brand” program that is recognized as such by the agency, Congress, and the public, for example, Job Corps; (c) one piece of a larger program, for example, the enforcement component of the customs service; or (d) a group of related activities that are not managed together because they are in different agencies and, in some instances, different departments, for example, rural water supply projects, grants, and loans.

A companion effort to the PART under the Budget and Performance Initiative of the PMA was to have agencies submit a “performance budget” as a way of rationalizing what is currently an appropriations account structure that was not created as a single integrated framework, but rather developed incrementally to address specific needs. Beginning with the fiscal year 2005 President’s budget request, agencies were instructed to, where possible, restructure their appropriations accounts in a way that would align accounts and program activities with “programs or the components of the programs that contribute to a single strategic goal or objective” (OMB, 2003c, pp. 51-3). The general response from appropriators and their staff was that the restructured budgets did not meet their needs, did not align with how agencies operate, were not supported by agency accounting systems, and did not provide useful information (OMB, 2005b).

The US federal health insurance program for people 65 years or older, certain younger people with disabilities, and people with end-stage renal disease.

OMB defines a performance budget as a presentation that clearly explains the relationship between performance goals and the costs for achieving targeted levels of performance. The requirements for the performance budget include an overview of what an agency expects to accomplish by strategic goal; what it has accomplished; strategies used to influence outcomes; an analysis of programs that contribute to a strategic goal, their relative roles, effectiveness as identified by PART and other evaluations; performance targets for current and budget years, how they will be achieved, and how they relate in a “pyramid” of outcomes; what resources are requested; and, to the extent possible, the full cost of the program. This should be cross-walked to the budget structure used by appropriations committees.

Since entitlement programs and other types of direct spending are not subject to the annual appropriations process, this information applies only to budget proposals that would affect funding for discretionary programs. In the US federal budget process, discretionary spending is funding that is provided in and controlled by appropriations acts, other than funding for mandatory programs.

A GAO study of PART follow-on actions related to evaluation can be found at the GAO website; see GAO (2005c).

GAO interviewed House and Senate committee staff (majority and minority) for the Authorizing and Appropriations Subcommittees with jurisdiction over the four agencies selected for our case study. Additional views of legislative staff can be found in the 2005 GAO report Performance Budgeting: PART Focuses Attention on Program Performance, but More Can Be Done to Engage Congress (GAO, 2005b).

H.R. 5647, Department of Labor, Health and Human Services, Education and Related Agencies Appropriations Act, 2007, Title V—General Provisions, Sec. 521.


    Blalock, A.B., and B.S. Barnow, 2002, “Is the New Obsession with Performance Management Masking the Truth About Social Programs?” in Quicker Better Cheaper? Managing Performance in American Government, ed. Dale Forsythe (Albany, NY: Rockefeller Institute).

    Daniels, M.E., 2002, “Linking Program Funding to Performance Results,” Testimony before the Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations of the House Committee on Government Reform, 107th Cong., 2nd Session (Washington: US Government Printing Office).

    Fantone, D.M., and P.L. Posner, 2003, “United States Case Study on Accountability and Control in Modernization Reforms,” paper presented at the OECD Expert Meeting on Accountability and Control, Madrid, October 29-30, 2003.

    General Accounting Office (GAO), 1997, The Government Performance and Results Act: 1997 Government-wide Implementation Will Be Uneven, GAO/GGD-97-109, June 2, 1997 (Washington: GAO), pp. 315269.

    General Accounting Office (GAO), 1998a, Performance Budgeting: Past Initiatives Offer Insights for GPRA Implementation, GAO/AIMD-97-46, March 27 (Washington: GAO), pp. 463542.

    General Accounting Office (GAO), 1998b, Program Evaluation: Agencies Challenged by New Demand for Information on Program Results, GAO/GGD-98-53, April 24 (Washington: GAO), pp. 234.

    General Accounting Office (GAO), 2004a, Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results, GAO-04-38, March 10 (Washington: GAO), p. 64.

    General Accounting Office (GAO), 2004b, Performance Budgeting: Observations on the Use of OMB’s Program Assessment Rating Tool for the Fiscal Year 2004 Budget, GAO-04-174, January 30 (Washington: GAO).

    Government Accountability Office (GAO), 2005a, Performance Budgeting: Efforts to Restructure Budgets to Better Align Resources with Performance, GAO-05-117SP, February (Washington: GAO).

    Government Accountability Office (GAO), 2005b, Performance Budgeting: PART Focuses Attention on Program Performance, But More Can Be Done to Engage Congress, GAO-06-28, October 28 (Washington: GAO).

    Government Accountability Office (GAO), 2005c, Program Evaluation: OMB’s PART Reviews Increased Agencies’ Attention to Improving Evidence of Program Results, GAO-06-67, October 28 (Washington: GAO).

    Joyce, P.G., 1993, “Using Performance Measures for Federal Budgeting: Proposals and Prospects,” Public Budgeting and Finance, Vol. 13(4), pp. 3-17.

    Kingdon, J., 2002, Agendas, Alternatives, and Public Policies (New York: Longman Press).

    Office of Management and Budget (OMB), 2001, The President’s Management Agenda—FY 2002, August 2001 (Washington: OMB), p. 9.

    Office of Management and Budget (OMB), 2002a, OMB Budget Procedures Memorandum No. 852, Addendum 1, Attachment A, “Spring Review Program Effectiveness Ratings Guidance for Selecting Programs,” and Attachment B, “Instructions for the Program Assessment Ratings Tool General Guidance,” May 10, 2002 (Washington: OMB).

    Office of Management and Budget (OMB), 2002b, National Advisory Council, Performance Measurement Advisory Council (PMAC) Summary of Meeting, June 27, 2002 (Washington: OMB).

    Office of Management and Budget (OMB), 2002c, OMB Memoranda to the Heads of Departments and Agencies, M-02-10, “Program Performance Assessments for the FY 2004 Budget,” July 26, 2002 (Washington: OMB).

    Office of Management and Budget (OMB), 2003a, Executive Office of the President, Analytical Perspectives, Budget of the United States Government, Fiscal Year 2004 (Washington: US Government Printing Office), pp. 4-5.

    Office of Management and Budget (OMB), 2003b, OMB Budget Procedures Memorandum No. 861, “Completing the Program Assessment Rating Tool (PART) for the FY 2005 Review Process,” May 5, 2003 (Washington: OMB).

    Office of Management and Budget (OMB), 2003c, Executive Office of the President, OMB Circular A-11, Preparation, Submission and Execution of the Budget, Fiscal Year 2003, Part 6, Sec. 220 (Washington: US Government Printing Office).

    Office of Management and Budget (OMB), 2003d, OMB Memoranda to the Heads of Departments and Agencies, M-03-17, “Program Assessment Rating Tool (PART) Update,” July 16, 2003 (Washington: OMB).

    Office of Management and Budget (OMB), 2004, OMB Budget Data Request No. 04-31, “Completing the Program Assessment Rating Tool (PART) for the FY 2006 Review Process,” March 22, 2004 (Washington: OMB).

    Office of Management and Budget (OMB), 2005a, Executive Office of the President, OMB Circular A-11, Preparation, Submission and Execution of the Budget, Sec. 51, November 2, 2005 (Washington: US Government Printing Office).

    Office of Management and Budget (OMB), 2005b, Budget Procedures Memorandum No. 879, Addendum 2, “Additional Guidance to Improve Consistency in PART Assessments and Updated 2005 PART Process Schedule,” September 30, 2005 (Washington: OMB).

    Office of Management and Budget (OMB), 2005c, Executive Office of the President, Analytical Perspectives, Budget of the United States Government, Fiscal Year 2006 (Washington: US Government Printing Office), pp. 10-12.

    Office of Management and Budget (OMB), 2005d, Executive Office of the President, Major Savings and Reforms in the President’s 2006 Budget, February 11 (Washington: US Government Printing Office).

    Office of Management and Budget (OMB), 2005e, Executive Office of the President, The Government Reorganization and Program Performance Improvement Act of 2005, June 30 (Washington: US Government Printing Office).

    Office of Management and Budget (OMB), 2006, OMB Program Assessment Rating Tool Guidance No. 2006-02, “Guidance for Completing 2006 PARTs,” March 10, 2006 (Washington: OMB), pp. 65-68.

    Tanaka, S., J. O’Neill, and A. Holen, 2003, “Above the Fray: The Role of the US Office of Management and Budget,” in Controlling Public Expenditure: The Changing Roles of Central Budget Agencies—Better Guardians?, ed. J. Wanna et al. (Cheltenham: Edward Elgar), pp. 57-80.
