2. RA-FIT and Performance Measurement
- Andrea Lemgruber, Andrew Masters, and Duncan Cleary
- Published Date: May 2015
The RA-FIT country data were consolidated and formatted so as to extract data tables that can be used to analyze and understand emerging challenges and trends in revenue administration, and eventually, over time, to establish baselines to monitor and assess performance. This is an essential aspect of strategic management. Modern revenue administrations use strategic management as a systematic process to (1) set their long-term goals, (2) design and implement business plans to achieve these goals, and (3) regularly monitor their performance against targets to assess whether the organization is moving in the desired direction—and to adjust their plans, if needed.
Performance measurement lies at the core of the strategic management process. Nevertheless, in many developing countries systematic performance measurement is not a common practice. Indeed, revenue administrations generally lack comprehensive and transparent performance indicator systems—which limits their level of effectiveness. RA-FIT is intended to help close this gap by providing a standard platform that allows revenue administrations to report on and measure performance, and benchmark themselves against peer countries.
The Round 1 Survey
The Round 1 survey consisted of four key parts: revenue statistics (revenue), institutional arrangements (general), tax operations, and customs operations.
There were seven questions in the Revenue Statistics part, requiring the values for GDP and revenue by tax type for a three-year period. These data are used only as the basis for some indicators, such as taxpayer stratification and segmentation.
The Institutional Arrangement part contained 19 questions, divided into various functional administration categories that are mostly qualitative in nature, designed to provide classifications and descriptions of the design and structure of the revenue administration.
Responses to these questions were used to analyze whether there were any strong correlations among certain structural choices, the degree of administrative autonomy, and the use of information technology.
The Tax Operations part contained 26 main questions, many with multi-part answers, broken into various categories, generally designed to focus on particular baseline indicators:
- The Overview section covered questions relating to expenditures and staffing levels, providing data used in collection efficiency baseline indicators.
- The Tax Office Activity questions relate to staffing and revenue collections at specific offices, and are again used in the estimation of office-specific collection efficiency baseline indicators.
- The Large Taxpayer Office (LTO) section relates to the structural design of the LTO, and its revenue and staffing levels. These data were used in designing baseline indicators comparing the effectiveness and efficiency of large taxpayer administration vis-à-vis general operations, and also in identifying common international LTO trends.
- The Taxpayer Registration questions relate to the breakdown and numbers of taxpayers by taxpayer type for baseline indicators related to filing rates and the yield per taxpayer.
- The Income Tax Filing questions relate to establishing on-time and late filing rate baseline indicators.
- The VAT Threshold and Taxpayer Stratification questions analyze the distribution of revenue across the various groups of VAT taxpayers, and are used in assessing whether there are trends, locally and internationally, that can inform the construction of a related baseline indicator.
- The VAT Filing questions are for establishing baseline indicators related to on-time and late VAT return filing as well as the composition of VAT returns filed (for example, credit, debit, and nil).
- The VAT Refund questions, regarding claims made and refunds authorized per period, are for establishing baseline indicators related to the efficacy of the VAT refund process.
- The Arrears, Audit, and Objections and Appeals categories of questions pertain to the stock and flow in each of these areas, for a range of associated baseline indicators.
The Customs Operations part contained 19 questions divided into six categories. Again, the categorization of the questions is based on the baseline indicators to which they may contribute:
- The Overview section covers questions relating to expenditures and staffing levels, providing data used in collection efficiency baseline indicators.
- The Border Post Activities questions relate to staffing and revenue collections at specific customs posts, and are again used in the estimation of post-specific collection efficiency baseline indicators.
- The Importers/Exporters category of questions is designed to assess the distribution of revenue across traders of various sizes, which is used in assessing whether there are trends, locally and internationally, that can inform the construction of a related baseline indicator.
- The Processing and Inspection questions are designed to gather information around processing and inspection, and to assess baseline indicators of the efficiency of these operations.
- The Arrears, Audits, and Appeals questions pertain to the stock and flow in each of these areas, and are used in assessing a range of associated baseline indicators.
- The Transactions questions pertain to the breakdown of the various customs activities by the nature of the transaction (for example, import versus export) and by their tax treatment (fully taxable versus exemptions). These data are used in establishing baseline indicators related to the effective level of collections.
Response Rates and Sample for Round 1
Eighty-six countries (of the 119 targeted) provided responses in time to be included in this analysis, an overall response rate of 72 percent. These responses had an average completion rate of 70 percent. High completion rates were achieved for the relatively easy-to-complete questions on institutional arrangements (general) and revenue statistics (86 and 87 percent, respectively). The worksheets proving more difficult to complete were those on tax operations (62 percent average completion) and customs operations (58 percent average completion). These operational parts contain the quantitative data that are most useful for undertaking an in-depth analysis of revenue administration performance.
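The response and completion figures above can be reproduced with a short sketch. The numbers are copied from the text; the calculation itself is illustrative only:

```python
# Illustrative check of the RA-FIT Round 1 response figures quoted in the text.
targeted = 119   # countries invited to participate
responded = 86   # countries that returned a survey in time

response_rate = responded / targeted * 100
print(f"Response rate: {response_rate:.0f} percent")  # 72 percent

# Average completion rates by worksheet, as reported in the text (percent).
completion = {
    "general": 86,
    "revenue": 87,
    "tax_operations": 62,
    "customs_operations": 58,
}
print(min(completion, key=completion.get))  # customs_operations was hardest
```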
The survey sample (86 respondents) largely comprises low-income and lower middle-income countries (around 59 percent of the total), mainly from Africa, Central America, and the Caribbean. Table 1 analyzes the responses by income group.2 The results reflect the fact that the first round of RA-FIT targeted countries covered by the IMF’s Regional Technical Assistance Centers (RTACs). The intention was to start the exercise by focusing on developing countries to understand their needs, with a view to supporting their strategic management function and improving FAD’s technical assistance (TA) in these countries. An additional goal was to create a database of revenue administration performance information covering countries that have generally not been the focus of other international comparative studies.3 In this sense, RA-FIT was testing uncharted waters.
| Income Group | General: Completion (%) | General: Respondents | Revenue: Completion (%) | Revenue: Respondents | Tax Operations: Completion (%) | Tax Operations: Respondents | Customs Operations: Completion (%) | Customs Operations: Respondents | Overall: Completion (%) | Overall: Respondents |
|---|---|---|---|---|---|---|---|---|---|---|
| Low-Income Countries (LICs) | 89 | 20 | 92 | 21 | 59 | 21 | 52 | 16 | 68 | 21 |
| Lower Middle-Income Countries (LMICs) | 85 | 30 | 85 | 30 | 62 | 29 | 60 | 20 | 70 | 30 |
| Upper Middle-Income Countries (UMICs) | 84 | 28 | 91 | 28 | 62 | 28 | 61 | 21 | 71 | 28 |
| High-Income Countries (HICs) | 88 | 7 | 69 | 7 | 67 | 7 | 60 | 6 | 70 | 7 |
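The income-group figures in Table 1 are internally consistent with the 70 percent average completion rate quoted in the text: weighting each group's "Overall" completion rate by its respondent count recovers it. A minimal sketch, using only the figures copied from the table:

```python
# Respondent-weighted average of the "Overall" completion rates in Table 1.
# The figures are copied from the table; the calculation is illustrative.
overall = {            # income group: (completion rate %, respondents)
    "LIC":  (68, 21),
    "LMIC": (70, 30),
    "UMIC": (71, 28),
    "HIC":  (70, 7),
}

total_respondents = sum(n for _, n in overall.values())   # 86
weighted = sum(rate * n for rate, n in overall.values()) / total_respondents
print(f"{weighted:.1f} percent")  # close to the reported 70 percent average
```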
Given this was the first attempt to systematically gather information on revenue administration in a large group of developing countries, the response and completion rates exceeded expectations. However, considerable time and effort were required to achieve this response. Indeed, many of the targeted administrations (1) are less mature; (2) have poor management information systems; and (3) face significant capacity constraints. The RA-FIT initiative highlighted the urgent need for further TA in the development of performance measurement and management frameworks required by many administrations. It also focused attention on performance measurement and management across a large revenue administration population, perhaps for the first time on such a large scale, with many acknowledging that their inability to quickly locate data was a sobering, if not disconcerting, experience. A number of administrations are also using RA-FIT as a starting point for the development of their own internal performance measurement frameworks.
Figure 1 shows the geographic distribution of RA-FIT responses across IMF membership. RTACs played an important role in supporting countries in the completion of RA-FIT in their regions, and it is also around their regions that most responses are clustered. The IMF has nine RTACs, one in the Caribbean (CARTAC), one in Central America, including the Dominican Republic (CAPTAC-DR), five in Africa (AFRITAC Central, East, South, West, and West2), one in the Middle East (METAC), and one in the Pacific (PFTAC). In addition, the IMF had two resident regional advisors in southeastern Europe at the time of the first round of RA-FIT, which accounts for the cluster of responses in this region.
Figure 1.Distribution of the RA-FIT Respondent Universe
Sources: Google map and RA-FIT Round 1 respondent countries.
Of the surveys received, 63 included the customs operations part. The reason for fewer customs returns is that RTACs are not always engaged with the customs administration, particularly where it is not combined with the tax administration.
Limitations and Caveats Regarding Round 1 Data
The first round of RA-FIT should be seen as the start of many further efforts to gather comprehensive tax and customs data on a wide range of topics from a large number of countries. The first-round data, while certainly not perfect, provide fresh insight into the current status of revenue administration, particularly in the developing world, and form a starting point for future rounds of RA-FIT. As such, the first round is the start of a process that will evolve and improve over time.
When reporting the first-round results of RA-FIT, two main areas need to be considered regarding data quality: responder bias and structural data issues.
Regarding responder-related bias, a number of issues need to be borne in mind when interpreting the RA-FIT results. First, not all countries that were asked to participate actually responded to the survey. Of the original 119 countries, 72 percent (86/119) responded, with an average survey completion rate of 70 percent. The sample is therefore not necessarily representative of the full population (all IMF member countries), but it is sufficiently large to make comparisons between respondents and draw some useful conclusions, which can be built on over time with future rounds of RA-FIT. This limitation is most acute for HICs, of which only seven participated, mainly in the Caribbean, Latin America, and southeastern Europe, these being the only HICs supported by RTACs or resident regional advisors.
The lack of responses to some sections of RA-FIT often reflected an absence of available information in the countries’ tax and customs administrations; some even lack basic IT systems. Where tax and customs administrations were separate entities, the survey response often lacked the customs elements, which were completed in only 73 percent (63/86) of the cases for which RA-FIT returns were received.
Some countries were unable to provide data for all of the relevant years requested. There were some issues with how the questions were interpreted by respondents, often caused by uncertainty regarding definitions of key concepts such as what constitutes an audit, what constitutes tax arrears (including taxes in dispute, or just undisputed arrears), what constitutes an active taxpayer, categories of staff functions, and the scale that should be used when replying to questions requiring numeric values (for example, thousands versus millions). Many of these issues have been addressed in the second round of RA-FIT through clearer instructions to respondents, the new online interface, and greater engagement with partner organizations such as Inter-American Center of Tax Administrations (CIAT) and the World Customs Organization (WCO). Nevertheless, issues will no doubt continue to surface, and over time will need to be addressed and resolved.
The second area to consider is the structural nature of the data and of the participants themselves. By their nature, the respondents are diverse. This affects the distribution of numeric values such as staff numbers, audit yield, GDP, and taxpayer populations, and of many related ratios, such as tax revenue as a proportion of GDP and tax staff ratios, especially when results are summarized at an overall level (as opposed to income group or regional levels). The RA-FIT analysis identified outliers in the response data, and in some cases these were adjusted or transformed to mitigate their effect on the summary statistics used in the report. For example, where the data supplied were obviously erroneous (for instance, expected annual returns for a specific tax type exceeded the number of registered taxpayers for that tax), the errors were addressed in consultation with country officials. Relatedly, many numeric values are highly skewed, with some extreme values but the majority of values at the lower end of the scale. Again, this affects the averages and totals reported in this paper.
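The effect of skew on simple averages, and the kind of range check used to flag erroneous responses, can be illustrated with a minimal sketch. All values below are invented for illustration, not RA-FIT data:

```python
import statistics

# Hypothetical, highly skewed indicator values across respondents: most
# observations sit at the low end, with one extreme value.
values = [2, 3, 3, 4, 5, 5, 6, 7, 8, 400]

mean = statistics.mean(values)      # pulled upward by the extreme value
median = statistics.median(values)  # robust to it
print(mean, median)                 # 44.3 versus 5

# A simple consistency check of the kind described in the text: expected
# annual returns for a tax type cannot exceed its registered taxpayers.
registered_taxpayers = 10_000   # hypothetical figures
expected_returns = 12_500
if expected_returns > registered_taxpayers:
    print("flag response for follow-up with country officials")
```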
In addition to issues of data quality, comparability of data among countries may also be challenging given differences in fiscal year-ends. For example, comparing the VAT return filing rate of an administration with a March 31 fiscal year-end to that of an administration with a December 31 fiscal year-end will in essence compare two different 12-month periods: assuming the data are for 2010, the former administration will supply data covering April 1, 2009, to March 31, 2010, while the latter will supply data covering January 1, 2010, to December 31, 2010. There is no easy way to ensure that all data are perfectly aligned, and the cost of gathering and adjusting data to coincide outweighs the benefits. Accordingly, future analysis of later RA-FIT rounds will not attempt to adjust data to a single matching point in any year.
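The misalignment described above can be made concrete with a small helper that maps a fiscal year-end to the 12-month window it covers. The `fiscal_period` function is purely illustrative; RA-FIT itself does not adjust for differing fiscal years:

```python
from datetime import date, timedelta

def fiscal_period(year_end: date) -> tuple[date, date]:
    """Return the 12-month period ending on the given fiscal year-end.

    Illustrative only; leap-day (February 29) year-ends are not handled.
    """
    start = date(year_end.year - 1, year_end.month, year_end.day) + timedelta(days=1)
    return start, year_end

# Two administrations both reporting "2010" data cover different windows:
print(fiscal_period(date(2010, 3, 31)))   # April 1, 2009 to March 31, 2010
print(fiscal_period(date(2010, 12, 31)))  # January 1, 2010 to December 31, 2010
```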
To recap, while the data gathered from the first round have much value and use, readers should note that they have been affected by the issues outlined above, among others, and that the first round is the first step of an ongoing and evolving process.
An analysis of responses for each question in each of the four RA-FIT worksheets has been useful in identifying areas in which administrations had difficulties and were unable to adequately respond.
As already stated, a relatively high average completion rate of 86 percent was attained for the General worksheet. Most of the information sought was readily available, with the exception of staff distributions by function, where 16 percent of respondents were unable to supply the required information.
The completion rate for the Tax Operations worksheet, at 62 percent, was lower than those for the General and Revenue worksheets. The five most challenging aspects for tax administrations were (1) determining the age of tax arrears (53 percent of respondents were unable to supply any information); (2) objection and appeal stock and flow information (45 and 43 percent of respondents, respectively, were unable to supply the requested information); (3) basic VAT stratification information (37 percent were unable to supply any information); (4) VAT returns by type, that is, debit, credit, or nil returns (35 percent were unable to supply the requested information); and (5) stock and flow of tax arrears by tax type (30 percent were unable to supply the requested information).
The Customs Operations worksheet also proved difficult for responding administrations, with an average completion rate of 58 percent. The five most challenging aspects for customs administrations were (1) providing details of other agencies involved in the import and export processes alongside customs (52 percent of respondents failed to furnish any information); (2) information pertaining to customs appeals (36 percent were unable to supply the requested information); (3) information in respect of post-clearance activity (23 percent were unable to supply the requested information); (4) violation and penalty information (23 percent were unable to supply the required information); and (5) details of revenue foregone as a result of relief granted (21 percent were unable to supply the requested information).