Anti-Money Laundering and Combating the Financing of Terrorism - Review of the Quality and Consistency of Assessment Reports and the Effectiveness of Coordination
Author: International Monetary Fund

Abstract

This paper responds to a request from Executive Directors to review the quality and consistency of anti-money laundering and combating the financing of terrorism (AML/CFT) assessment reports prepared by the Financial Action Task Force on Money Laundering (FATF) and FATF-style regional bodies (FSRBs), and the effectiveness of coordination with FATF/FSRBs. The review found a high degree of variability in the quality and consistency of reports prepared by the different assessor bodies as well as within the same assessor group. While a large majority of reports were of high or medium quality with respect to key components of the assessments, the treatment of ratings gave rise to greater problems. A number of initiatives have been taken or are underway to improve the quality and consistency of assessments by all assessor bodies, including the standardization of documentation, the strengthening of peer/internal reviews, and the intensification of assessor training.

Executive Summary

This paper responds to a request from Executive Directors to review the quality and consistency of anti-money laundering and combating the financing of terrorism (AML/CFT) assessment reports prepared by the Financial Action Task Force on Money Laundering (FATF) and FATF-style regional bodies (FSRBs), and the effectiveness of coordination with FATF/FSRBs.

The review found a high degree of variability in the quality and consistency of reports prepared by the different assessor bodies as well as within the same assessor group. While a large majority of reports were of high or medium quality with respect to key components of the assessments, the treatment of ratings gave rise to greater problems.

A number of initiatives have been taken or are underway to improve the quality and consistency of assessments by all assessor bodies, including the standardization of documentation, the strengthening of peer/internal reviews, and the intensification of assessor training.

Experience in coordination with other assessor bodies raised two practical issues:

  • the difficulties in the coordination of FATF/FSRB mutual evaluations with FSAPs and OFC assessments;

  • the application of the policy of including AML/CFT assessments in all FSAPs/OFC assessments in light of subsequent Board discussions on (i) streamlining of assessments; and (ii) updates of assessments of codes and standards.

In a number of cases, results of FATF/FSRB mutual evaluations have not been available in time for incorporation into Board documents. Going forward, staff will engage with FATF/FSRBs to reconfirm their commitment to ensure that timely information is available to the Fund/Bank to support the discussion of substantive issues in FSAP/OFC documents.

Consistent with the separate Board decisions on AML/CFT, FSAP updates and the updating of ROSCs, staff is proposing the following policy on coverage of AML/CFT in FSAPs/OFC assessments:

  • An AML/CFT assessment would be needed if one has not been conducted previously using either the 2002 Methodology or the 2004 Methodology agreed by the IMF and World Bank Boards and the FATF;

  • A full AML/CFT reassessment should be conducted about every five years;

  • If there has been a prior AML/CFT assessment that is less than five years old, it would be appropriate for the FSAP/FSAP update or OFC assessments to include some form of update, using the 2004 Methodology, that would range from a factual update to a full reassessment, depending on a number of criteria set out in paragraphs 29 and 35.

The above policy would apply both to initial FSAPs and FSAP updates but with due regard to the Board’s call for prioritization and streamlining of assessments (such as by conducting factual updates through smaller missions).

I. Introduction

1. In July and August 2002, respectively, the IMF and the World Bank Boards endorsed a 12-month pilot program of anti-money laundering and combating the financing of terrorism (AML/CFT) assessments using two approaches to assessments: (1) assessments led by Fund and Bank staffs; and (2) assessments conducted by the FATF and FSRBs. The Boards agreed that assessments conducted by the FATF/FSRBs would result in Reports on the Observance of Standards and Codes (ROSCs), subject to a pro forma review by Fund/Bank staff.

2. As AML/CFT ROSCs are the only ROSCs that are prepared by bodies other than the IMF and the World Bank, Executive Directors requested a comprehensive review at the end of the pilot program that would inter alia focus on the quality and consistency of assessments and ROSCs within and among the FATF/FSRBs as well as on the effectiveness of coordination with the FATF/FSRBs. Based on this review, the Boards would decide on whether to continue and/or modify the arrangements for assessment under the pilot program. Staff considered that in order to conduct an adequate review, the FATF and each FSRB would need to have completed at least three mutual evaluations.

3. In the March 2004 report on the review of the pilot program,1 staff noted that only the FATF and GAFISUD had completed the requisite three AML/CFT assessments; MONEYVAL and ESAAMLG had completed one each; the CFATF had undertaken five assessments but had not completed any written reports; and the APG had undertaken one assessment jointly with the FATF, for which the report remained outstanding. As such, the Board agreed with the staff proposal that the arrangements under the pilot program continue pending completion of a substantive review of the quality and consistency of FATF/FSRB assessments and ROSCs in 18 months' time.

4. The first objective of the review was to evaluate the quality and consistency of AML/CFT assessment reports prepared by the FATF/FSRBs, as well as the IMF and the World Bank, under the pilot program of assessments and to propose a way forward to address any identified issues. The second objective was to assess the effectiveness of coordination with the FATF/FSRBs, as well as the integration of AML/CFT assessments in the FSAP/OFC program.

II. The Quality and Consistency of AML/CFT Assessment Reports

5. The review of the quality and consistency of assessments was prepared by Fund/Bank staff in collaboration with FATF/FSRBs. This collaboration involved the following elements:

  • establishment of a coordinating group involving Fund/Bank staff and representatives from the FATF/FSRBs that participated in the pilot program (the APG, CFATF, ESAAMLG, FATF, GAFISUD and MONEYVAL). The group agreed the terms of reference for the review, nominated a panel of experts to conduct the technical review, and provided comments on the draft report of the technical group;

  • preparation of the technical review by experts drawn from the assessor bodies;

  • presentation of the findings of the technical review to the FATF/FSRBs for their feedback.

The report of the panel of experts, including their terms of reference, and the composition of the review panel are attached.

A. Results of the Technical Review

6. The panel of experts selected a representative sample of 23 assessment reports prepared by the FATF, certain FSRBs,2 the Fund, and the Bank based on the 2002 AML/CFT assessment methodology that was used during the pilot program. The panel found a high degree of variability in the quality of reports prepared by the different assessor groups (the Fund, the Bank, FATF, and FSRBs) as well as within the same assessor groups. Some 70 percent of reports were of high or medium quality with respect to the two key components of description/analysis and recommendations, which form the core of the assessment work. The treatment of ratings gave rise to greater problems, which may be attributed, in part, to the relative novelty of this approach in AML/CFT assessments. Box 1 describes in more detail the key findings of the panel of experts.

Key Findings of the Panel of Experts on the Technical Review of Quality and Consistency of AML/CFT Reports

  • There has been a high degree of variability in the quality of reports prepared by the different assessor groups (the Fund, the Bank, FATF, and FSRBs) as well as within the same assessor groups. Among the sample of 23 reports selected, only three were of an acceptable standard (i.e., of high or medium quality) across all chapters. Approximately 70 percent achieved this standard when dealing with the two key components of description/analysis and recommendations, which form the core of the assessment work.

  • The treatment of ratings gave rise to greater problems, which may be attributed, in part, to the relative novelty of this approach in AML/CFT assessments.3 There was generally greater consistency in quality (across all sections) when dealing with matters relating to the FIU and law enforcement, while some components of the financial sector preventive measures gave rise to particular problems, often in line with their relative complexity. Some elements of the methodology were also easier to assess than others, which was reflected in uneven quality within individual reports.

  • While the assessor groups are not explicitly identified, the review notes that two groups tended to perform consistently above average, while two performed consistently below average. However, in all the groups (except the two most consistently good performers) there were marked differences in the quality of individual reports, suggesting that the problems encountered by the worst performers might not necessarily be systemic within the group.

  • The description and analysis sections are the cornerstone of the assessment since they provide the basis for both the recommendations and ratings. While approximately 40 percent of the chapters were of good quality, a similar proportion had material deficiencies and 15 percent had serious deficiencies that called into question the overall value of the section. The principal deficiencies underlined by the panel of experts included a failure to address fully the core assessment criteria, the provision of insufficient detail in the description, and an inadequate analysis of the effectiveness of the measures in place. More generally, the coverage of terrorist financing issues was of a far lower standard than that of money laundering issues, possibly due to the relative lack of exposure to these issues within some of the assessor groups.

  • When considering the recommendations made by assessors to address the weaknesses identified in the description and analysis, the panel of experts found that over 50 percent of the chapters were of good quality, 28 percent had material deficiencies and 14 percent had serious deficiencies. The deficiencies identified included recommendations that were too general and, in some cases, a failure to recommend any corrective action for the weaknesses identified.

  • A problem area was the assignment of ratings that were not always sufficiently justified by the analysis. As noted above, this is in part attributable to the relative novelty of this approach for FATF/FSRBs. Moreover, the role of FATF/FSRB Plenaries in reviewing and modifying ratings was also noted as an issue. Less than 50 percent of the chapters were of good quality, some 30 percent had material deficiencies and 20 percent had serious deficiencies.

7. The newness and complexity of the 2002 assessment methodology used during the pilot program were important factors in the variability of assessment quality. The main lesson learned from the report and staff’s experience is that the technical capacity of several assessor bodies needs to be strengthened. This will be particularly important for the more recently established FSRBs.

B. Initiatives Taken or in Train to Improve Quality and Consistency

8. The pilot program proved an important learning process, with the lessons reflected in the revised standard as well as in the revised assessment methodology and procedures adopted in 2004. In addition to standardized documents and procedures, much greater emphasis has since been given to the training of assessors, for which there is now a formalized, uniform, and coordinated ongoing program. Staff and assessor bodies believe that many of the reasons for the shortcomings in assessments identified by the panel of experts have already been addressed.

9. The 2002 assessment methodology was structured both topically and sectorally and did not map easily onto the FATF Recommendations. While the 2004 assessment methodology significantly expanded the number of assessment criteria relative to the 2002 Methodology, consistent with the increased and more detailed scope of the revised FATF 40+9 Recommendations, it maps directly onto the FATF Recommendations.

10. An assessor’s handbook has been developed by the FATF, in collaboration with the Fund/Bank, which provides extensive guidance on the assessment process and methodology and includes simplified and clearer templates for the pre-assessment questionnaire and the assessment report.

11. Enhanced guidance on CFT measures was provided through the issuance of a number of interpretative notes on the nine special recommendations, which were incorporated as detailed criteria in the 2004 Methodology. This should improve the analysis of CFT measures in assessments conducted using the 2004 Methodology.

12. Standardized assessor training materials have also been developed by the FATF/FSRBs in collaboration with the Fund/Bank, and an active program of training seminars has been underway since September 2004 with significant involvement of Fund/Bank staff.

13. A working group has been established within the FATF that allows for exchanges on the assessment experiences of all assessor bodies and the resolution of issues relating to the interpretation of the standard and the application of the assessment methodology.

14. Fund/Bank staff no longer rely on independent anti-money laundering experts (IAEs) to cover law-enforcement issues and non-macro relevant sectors, which had not been well integrated into Fund/Bank reports, as indicated in the Joint Report on the Review of the Pilot Program. As a result of the March 2004 Board decisions, the Fund and the Bank are now fully accountable for all aspects of AML/CFT assessments, including in the sectors formerly covered by IAEs.

15. In light of recent experience with assessing against the significantly revised, expanded and more complex standard, Fund/Bank staff have reinforced internal review mechanisms. A greater emphasis has been placed on the importance of thorough review of assessments. Formal in-house training has been provided to Fund/Bank assessors to raise understanding of the revised standard and of the assessment methodology and procedures. Ad hoc working groups also meet at the request of assessors to review complex issues raised during missions.

16. Several assessor bodies have adopted or are in the process of introducing internal measures to ensure the quality of assessment reports. These measures are geared towards strengthening the review of assessment reports either through establishing an expert group charged with reviewing the reports prior to discussion and adoption by the competent body or by generally strengthening the peer review mechanisms already in place. Box 2 provides further details on these initiatives.

17. With the agreement of the FSRBs, the FATF has recently adopted a policy to extend enhanced status to FSRBs within the FATF. FSRBs currently hold the status of observers, and the intention is to grant the status of associate member to FSRBs that meet certain preconditions and obligations. We understand that these include allowing FATF assessors the opportunity to participate in FSRB mutual evaluation teams, having evaluation procedures similar to those of the FATF, and ensuring that the procedures are effective, e.g., lead to the production of quality reports. Staff understand that FSRB applications for enhanced status could be considered by the FATF at its plenary in June 2006.

FATF/FSRB Initiatives Taken or in Train to Improve the Quality and Consistency of AML/CFT Assessment Reports

FATF: The FATF Secretariat bears significant responsibility for ensuring the quality and consistency of MERs. In addition, in February 2006 the FATF introduced an Expert Review Group mechanism under which each MER submitted to the plenary will be reviewed by a small expert group. The group is tasked with identifying any inconsistencies with other MERs as well as any issues that require further clarification or interpretation. The FATF has also revised its mutual evaluation procedures to allow more time for the preparation, conduct, and completion of mutual evaluations.

MONEYVAL: Like the FATF Secretariat, the MONEYVAL secretariat plays an important role in ensuring the quality of MERs by preparing the draft report and providing technical support to the evaluators. MONEYVAL is currently considering introducing an ad hoc expert group mechanism drawing on its pool of experts to assist the Secretariat, the Plenary, and the assessors in ensuring the quality and consistency of MERs.

GAFISUD: In order to enhance the quality of MERs, GAFISUD has implemented an annual assessor training program since 2004. In addition, GAFISUD conducted two joint MERs with the FATF, and the Secretariat participated as an observer in a Fund-conducted assessment. Members of the Secretariat also attended, as trainees, an evaluators' training seminar organized by the FATF in January 2006.

18. In addition to assessor training, Fund and Bank staff have contributed to the efforts to increase the capacity of all assessor bodies to conduct assessments, including allowing assessors from FSRB members to participate, at their own cost, as observers in Fund/Bank assessment missions. Moreover, with a view to complementing FATF/FSRB peer review procedures, Fund and Bank staff are providing substantive comments on FATF/FSRB mutual evaluation reports prior to their consideration in plenary and have offered to participate in “expert review group” processes established by FATF/FSRBs.

19. It is recommended that staff report to the Board in about five years’ time on the effectiveness of these new measures to enhance the overall quality and consistency of assessments conducted by different assessor bodies.

III. Coordination and Integration of AML/CFT Assessments in the FSAP/OFC Process

20. The Board endorsed in March 2004 the policy to include AML/CFT assessments in all FSAP/OFC assessments, and confirmed that the assessments could be conducted by (i) the IMF/WB; or (ii) FATF/FSRBs in the context of their own mutual evaluations.4

21. The inclusion of AML/CFT assessments as part of the FSAP/OFC process enables Fund and Bank staff to incorporate financial sector integrity issues into broader financial sector reform efforts.

22. Implementing the above policy has given rise to two practical issues:

  • The difficulties of coordinating FATF/FSRB mutual evaluations with FSAPs, FSAP updates and OFC assessments;

  • The application of this policy in light of subsequent Board discussions on (i) the streamlining of assessments; and (ii) updates of assessments of codes and standards.5

A. Coordinating FATF/FSRB Mutual Evaluations with FSAPs, FSAP Updates and OFC Assessments

23. The Fund/Bank have agreed with FATF/FSRBs on a policy of burden sharing and reciprocity of assessments to avoid duplication of assessments and to ensure that the results of assessments, whether undertaken by the Fund/Bank or the FATF/FSRBs, would be available to be reflected in FSAPs/OFC assessments.6 The understanding with FATF/FSRBs in relying on their mutual evaluations for FSAPs/OFC assessments is that the Fund and the Bank would need a short summary of the AML/CFT findings, a ROSC, and the underlying detailed assessment, recognizing that there would be a need for flexibility in the timing for the delivery of the ROSC and the detailed assessment. In a number of cases, results of FATF/FSRB mutual evaluations have not been available in time for incorporation into Board documents relating to the FSAP (see Table 1).7

Table 1.

Integration of Mutual Evaluations into FSAP/OFCs since June 2004

[Table not reproduced.]

24. There are two main reasons for the absence of FATF/FSRB findings in some FSAP/OFC documents:

  • The timing of FATF/FSRB mutual evaluations is often not coordinated with FSAPs/OFC assessments. FATF/FSRB mutual evaluations, which are mandatory for their members, are scheduled two to three years in advance based on an approximate five-year assessment cycle. FSAPs/OFC assessments are voluntary and are typically scheduled on shorter notice.

  • Even when the FATF/FSRB schedule is coordinated with that of the FSAP and OFC, effective integration is hampered by the length of time required to finalize FATF/FSRB mutual evaluations. FATF/FSRB mutual evaluations normally require discussions at the plenary before finalization, and plenaries are infrequent (one to three times annually, depending on the body).

25. Staff and FATF/FSRBs have sought to address the coordination problems either by the FATF/FSRBs adjusting their schedules to match those of the FSAPs or by staff proposing that the Fund/Bank conduct the assessment where such an adjustment is not possible. However, the scope for addressing coordination problems is limited by (i) the difficulties faced by the FATF/FSRBs in adjusting their schedules; (ii) the preferences of the member being assessed regarding the organization that will conduct the assessment and the timing of the assessment; and (iii) the limited resources available to the Fund and the Bank to take on additional assessments. Consistent with the resource envelope available for AML/CFT, the Bank/Fund will each aim to conduct 6-7 assessments or reassessments per year. In any event, given the high cost of the AML/CFT assessment both for assessor groups and the authorities, it is important that duplication is avoided.8

26. To improve coordination, staff will engage with country authorities and the FATF/FSRBs at an earlier stage in the planning of FSAPs/OFC assessments. At present, discussions on scheduling of AML/CFT assessments with country authorities and FATF/FSRBs normally commence once the dates for the FSAPs/OFC assessments are known. Going forward, staff will initiate these discussions once an agreement has been reached that an FSAP/OFC assessment will take place even if the dates for such an assessment are not final.

27. As noted above, the FATF/FSRB practice of finalizing their reports through their plenaries has delayed the transmittal of the key findings and ROSCs. Staff will engage with FATF/FSRBs to reconfirm their commitment to ensure that timely information is available to the Fund/Bank to support the discussion of substantive issues in FSAP documents. While the utmost efforts will be made to ensure that AML/CFT issues will be covered in FSAPs, one cannot rule out the possibility that because of scheduling problems, there will be cases where AML/CFT assessments will not be available at the time of the FSAP. In such cases, the ROSC should be completed and made available as soon as possible thereafter.

B. AML/CFT Assessments and Updates

28. The March 2004 Board paper on AML/CFT noted that, with the evolution of the FSAP program towards a larger number of FSAP updates, it could be appropriate to conduct countries' AML/CFT assessments about every five years. A five-year frequency was viewed as consistent with the FATF/FSRB mutual evaluation schedules and the planned updates for OFC assessments. In August 2005, the Boards endorsed a policy on updating ROSCs. The Fund's Board noted that updating the current stock of ROSCs at a fairly high frequency would be too costly, and it supported a more flexible approach featuring an average update frequency of five years, with flexibility in frequency and scope to allow for country-specific circumstances.9 The Board's March 2005 discussion on FSAPs endorsed a policy on updates, which would include factual updates of key standards and codes. Directors also agreed that updates can include additional elements if justified by new developments or particular risks. Flexibility would maximize the program's usefulness to country authorities and its contribution to surveillance.10

29. Consistent with the above policies and with the Boards' guidance on including AML/CFT in all FSAPs and OFC assessments, staff is proposing the following policy on coverage of AML/CFT in FSAPs/OFC assessments:

  • An AML/CFT assessment would be needed if one has not been conducted previously using either the 2002 Methodology or the 2004 Methodology agreed by the IMF and the World Bank Boards and the FATF;

  • A full AML/CFT reassessment should be conducted about every five years;11

  • If there has been a prior AML/CFT assessment that is less than five years old, it would be appropriate for the FSAP/FSAP update or OFC assessments to include some form of update. The content of the update would range from (a) a factual update12 (unless the AML/CFT assessment is relatively current (e.g., within the last 18 months) and the relevant findings can be incorporated into the FSAP) to (b) a full reassessment. The scope of the update would depend on a number of criteria, including: (i) the magnitude of the AML/CFT vulnerabilities identified in earlier reviews; (ii) the systemic importance of the vulnerabilities, taking into account the effects on the member's economy or on the international system; and (iii) the timeliness and coverage of the previous assessment (taking into consideration the quality and consistency of the previous assessment). In making such a determination, staff will take into account the views of the authorities, as it does with other key standards.

The above policy would apply both to initial FSAPs and FSAP updates, but with due regard to the Board's call for prioritization and streamlining of assessments (such as by conducting factual updates through smaller, less resource-intensive missions).
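For illustration only, the following minimal sketch (in Python) encodes the decision rule described in paragraph 29 above. The thresholds and criteria are taken from the text; the function name, parameter names, and output labels are invented for this note and do not form part of Fund/Bank procedures.

# Illustrative sketch only: names are invented; the logic follows paragraphs 29 and 35.
def proposed_aml_cft_coverage(prior_assessment_exists, years_since_assessment,
                              significant_vulnerabilities, systemically_important,
                              prior_coverage_timely_and_adequate):
    """Return the indicative form of AML/CFT coverage for an FSAP/OFC assessment."""
    if not prior_assessment_exists:
        # No prior assessment under the 2002 or 2004 Methodology: an assessment is needed.
        return "full assessment"
    if years_since_assessment >= 5:
        # A full reassessment is expected about every five years.
        return "full reassessment"
    if years_since_assessment <= 1.5 and prior_coverage_timely_and_adequate:
        # A relatively current assessment (e.g., within the last 18 months) whose
        # findings can simply be incorporated into the FSAP.
        return "incorporate existing findings"
    # Otherwise, some form of update, scaled by the paragraph 29 criteria.
    if (significant_vulnerabilities or systemically_important
            or not prior_coverage_timely_and_adequate):
        return "broader update, possibly a full reassessment"
    return "factual update using the 2004 Methodology"

# Example: an assessment conducted three years ago, with no major vulnerabilities
# and adequate coverage, would point to a factual update.
print(proposed_aml_cft_coverage(True, 3, False, False, True))

In practice, of course, this determination would be made by staff in consultation with the authorities and the FATF/FSRBs rather than mechanically.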

30. In the event that the FSAP/update/OFC assessment is to take place shortly before (e.g., within 18 months of) an FATF/FSRB mutual evaluation, efforts would be made to ensure coordination consistent with the discussion set forth in paragraphs 26 and 27.

31. AML/CFT updates by bodies other than the IMF/WB. There is not yet a clear equivalent or agreed process with the FATF/FSRBs on how AML/CFT updates are to be conducted as part of their follow-up to mutual evaluations. The FATF is currently developing a follow-up process. Staff will explore with the FATF/FSRBs possible reciprocal arrangements on using the updates prepared by the different assessor bodies, including the use of FATF/FSRB updates for the purposes of FSAP/OFC assessments.

C. Treatment of Assessments Prepared Under the 2002 Methodology

32. A number of countries have been assessed using the methodology developed in 2002, which was based on a previous FATF standard. The FATF Recommendations were revised in 2003. The FATF considers the revised standard to be "a new comprehensive framework for combating money laundering and terrorist financing."13 A new methodology was developed in 2004 to reflect the fact that most of the recommendations were rewritten.

33. In that context, an issue arises as to how to treat the cases where a jurisdiction assessed under the 2002 Methodology volunteers for an FSAP/OFC assessment within five years of this assessment.

34. It needs to be recognized that, as a general matter, Fund/Bank policy on updates of FSAPs/ROSCs does not specifically address the situation where a significant revision to the standard has occurred. In this regard, it should be noted that the development of revised standards is not unique to AML/CFT: there have been revisions to other key standards.

35. With respect to situations where jurisdictions that have been assessed under the 2002 Methodology volunteer for an FSAP/OFC assessment within five years of the AML/CFT assessment, staff proposes that, going forward, these assessments be updated using the 2004 Methodology. The scope of the update, which may include a full reassessment, will be determined by the criteria identified in paragraph 29 above. If it is determined that a factual update would be appropriate, the factual update would need to include an additional element assessing the authorities' initiatives to address the new areas covered by the revised standard/methodology.

IV. Resource Implications

36. The proposed policy on AML/CFT updates would be handled within the existing resources assigned to AML/CFT.

V. Issues for Discussion

37. Directors may wish to discuss the adequacy of the measures to enhance the quality and consistency of assessments outlined in Section II B.

38. Directors may wish to discuss the proposed approach to conducting AML/CFT assessments and updates in the context of FSAPs/OFC assessments outlined in paragraphs 29 and 35.

Annex I Technical Review of the Quality and Consistency of AML/CFT Assessments

TECHNICAL REVIEW OF THE QUALITY AND CONSISTENCY OF AML/CFT ASSESSMENTS

REPORT BY THE INDEPENDENT PANEL OF EXPERTS

(On behalf of the IMF and World Bank)

6 October 2005

List of Acronyms

AML - Anti-money laundering

APG - Asia/Pacific Group on Money Laundering

Basel - Basel Committee on Banking Supervision

CFATF - Caribbean Financial Action Task Force

CFT - Combating the Financing of Terrorism

ESAAMLG - Eastern and Southern African Anti-Money Laundering Group

FATF - Financial Action Task Force

FATF 40+8 - FATF 40 Recommendations on Combating Money Laundering (1996) and the 8 Special Recommendations on Terrorist Financing (2001)

FIU - Financial Intelligence Unit

FSAP - Financial Sector Assessment Program

FSRB - FATF-Style Regional Body

GAFISUD - South American Financial Action Task Force

IAE - Independent AML/CFT Expert

IAIS - International Association of Insurance Supervisors

IOSCO - International Organization of Securities Commissions

MONEYVAL - Council of Europe Select Committee of Experts on the Evaluation of Anti-Money Laundering Measures

NCCT - Non-Cooperative Countries and Territories

OFC - Offshore Financial Centre

R [+ number] - FATF Recommendation on Combating Money Laundering

ROSC - Report on the Observance of Standards and Codes

SR [+ number] - FATF Special Recommendation on Terrorist Financing

STR - Suspicious Transaction Report

Introduction

Background to the review

1. When, in 2002, the Fund/Bank Boards endorsed the FATF Recommendations as one of the standards for which Reports on the Observance of Standards and Codes (ROSCs) are prepared, they did so on condition that the ROSC principles would be observed. These principles require that the process be uniform, voluntary and co-operative.

2. AML/CFT ROSCs are the only ones that are prepared by bodies other than the IMF and World Bank, and are accepted on the basis of a pro forma review by Bank/Fund staff. The Boards mandated that they be included in all FSAP and OFC assessments, resulting in the need for a high degree of collaboration and coordination with the FATF and FSRBs in order to avoid duplication with their mutual evaluation programs.

3. As a result, when the Fund/Bank Boards endorsed the FATF standard and the launch of the 12-month pilot program of assessments14 in 2002, they requested that an analysis be undertaken of the quality of FATF/FSRB assessments and their consistency with the ROSC principles. Such an analysis was to be conducted as part of the review of the pilot program called for by the Boards, but could not be completed by the end of 2003 because an insufficient number of assessment reports was available at that time. Therefore, in March 2004 the Boards reaffirmed their desire for a review of the quality and consistency of FATF/FSRB reports, and requested that the work be completed within 18 months of that date. In undertaking the current project in conjunction with the FATF and FSRBs, the Fund and Bank decided to expand the scope by including a sample of Fund and Bank reports completed under the pilot program.

4. To oversee the current review, a coordination group was established to include the Fund, the Bank, the FATF and all the FSRBs that undertook mutual evaluations under the pilot program using the 2002 assessment methodology15. The role of the coordination group was to set the terms of reference for a panel of experts (“the panel”) that would be selected to undertake the review; to establish the criteria against which the quality and consistency of reports were to be reviewed; to select the panel of experts; and to provide comments on the draft technical report by the panel.

The Review Panel

5. The IMF and World Bank established at the outset that the panel of experts to conduct the review should comprise five persons (including a chairman), drawn from a list of candidates nominated by the FATF and the FSRBs. Not all the organizations responded to the request for nominations. The final panel comprised:

[Table showing the composition of the panel not reproduced.]

6. All the members of the panel have extensive experience of conducting mutual evaluations for the respective organizations, and in most cases have also undertaken assessments for the IMF or World Bank.

Terms of reference

7. The terms of reference and accompanying guidance notes were drawn up by the coordination group, and presented to the panel at its initial meeting in Washington DC on 4-5 May, 2005. The general objectives were stated as follows:

The purpose of the review is to assess the quality and consistency of FATF/FSRB and Fund/Bank AML/CFT assessments reports. The review panel should answer the following questions:

- Is the quality of the assessments satisfactory?

- Is the quality of assessments consistent, both between individual assessments and between groups of assessments? (The reference to "groups of assessments" means assessments conducted by a single international or regional body.)

In answering these questions the panel will evaluate a sample of detailed assessment/mutual evaluation reports prepared on the basis of the 2002 AML/CFT assessment methodology against the criteria set out [by the coordination group]. The sample is to be chosen by the panel.

8. The specific criteria set for the panel, together with the guidance notes that were provided at the same time, are reproduced in Annex 1.

Executive Summary

9. It is clear that the use of the 2002 methodology during the pilot program was a learning process for all involved, since this was the first formal methodology by which AML/CFT assessments had been undertaken. The timing of the adoption of the document varied between assessor groups, and, in the absence of any common guidelines, the precise manner in which it was applied and interpreted appears to have varied between groups. Therefore, inconsistencies in both quality and format may not be surprising.

10. The overall quality of reports reviewed by the panel varied significantly. Among the sample of 23 reports selected, three were of an acceptable standard (i.e. of high or medium quality) across all chapters, but approximately 70% achieved this standard when dealing with the two key components of description/analysis and recommendations, which form the core of the assessment work. The treatment of ratings gave rise to greater problems, which may be attributed, in part, to the relative novelty of this approach in AML/CFT assessments. There was generally greater consistency in quality (across all sections) when dealing with matters relating to the FIU and law enforcement, while some components of the financial sector preventive measures gave rise to particular problems, often in line with their relative complexity.

11. Similarly, the panel identified significant variations in the quality of reports undertaken by different groups and within the same group. Two assessor groups tended to perform consistently above the average, while two generally fell below. However, in all the groups (except the two most consistently good performers) there were marked differences in the quality of individual reports, suggesting that the problems encountered by the worst performers might not necessarily be systemic within the group.

12. The description and analysis is the cornerstone of the assessment since it provides the basis for both the recommendations and ratings. Approximately 85% of the chapters16 were of an acceptable standard in this area, although they continued to show a range of deficiencies that were not sufficient to undermine the value of the section. Eight reports were free of any serious deficiency. The chapters on the FIU, law enforcement, the general framework, record-keeping and internal controls were of a consistently higher quality than the others, while those addressing customer identification, enforcement and supervisory co-operation tended to present the greatest challenges. The principal deficiencies included a failure to address fully the core assessment criteria, the provision of insufficient detail in the description, and an inadequate analysis of the effectiveness of the measures in place. More generally, the coverage of terrorist financing issues was of a far lower standard than for money laundering issues, possibly due to the relative lack of exposure to these issues within some of the assessor groups.

13. When considering the recommendations made by assessors to address the weaknesses identified in the description and analysis, approximately 80% of the chapters achieved an acceptable standard in relation to the questions asked of the panel. Nine reports were free of any serious deficiency. The chapters addressing the FIU, international co-operation, the ongoing monitoring of accounts, record-keeping and internal controls were of a consistently higher quality, while the most serious deficiencies were noted in the areas of customer identification, enforcement and supervisory co-operation. The most common deficiencies found in many reports involved a failure to address fully the weaknesses that had been identified in the system, and the provision of recommendations that appeared too vague or general to assist the jurisdiction in remedying the weaknesses.

14. The ratings section clearly presented a greater problem in many reports, apparently caused in part by the complexity of the process within the structure of the methodology, and partly by the fact that the concept was largely new in AML/CFT assessments. It must also be recognized that, in the case of the FATF and FSRBs, the final arbiter of the rating is the plenary, not the assessment team. Nonetheless, approximately 80% of the chapters were of an acceptable standard, although only four reports were free of any serious deficiency. A higher quality was achieved more consistently in those chapters dealing with record-keeping and the general framework, while the most persistent problems occurred in the coverage of customer identification, ongoing monitoring of accounts, integrity standards and supervisory co-operation. The most common grounds for the deficiency were an apparent mismatch between the rating and the description and analysis, and a lack of evidence to support the rating.

15. Generally, there was little consistency in the format and content of the reports, either between groups or within the same group. Three assessor groups had developed a more standardized internal structure than the other groups, but the reports of most groups showed signs of evolution over the period of the pilot project. While this was clearly a positive aspect inasmuch as it improved the quality and substance of the reports, it gave rise to internal inconsistencies that the panel was asked to identify. Of greater importance was the marked variation in the depth of description and the extent of the analysis in different reports. Again, with limited exceptions, there was no clear consistency within the work of individual groups, which implies, perhaps not surprisingly, that the overriding influence is the skill and style of the individual assessors.

16. Although it was not within the original terms of reference of this report, the panel has ventured some thoughts on the causes of the deficiencies identified in the review, and has also indicated where it believes improvements have already taken place in the context of the introduction of the new methodology in 2004. These comments are contained in the final part of this report.

General Issues

Limitations to the terms of reference

17. It should be noted that the panel has not been asked to make recommendations on the future conduct of assessments or the composition of reports. Therefore, no such recommendations are made. In its original terms of reference the panel was also restricted from drawing on its own experience of the assessment programs to offer possible explanations for why certain deficiencies or inconsistencies might have arisen within the assessment reports, or to indicate where revised procedures are now believed to have addressed the issues. However, during the consultation stage with the coordinating group on production of the draft of this report, it was agreed that such commentary would be a valuable addition. This is now contained in the final part of this report.

18. It should also be noted that the panel was specifically instructed not to make any direct comparisons in the detailed or overall performance of the various assessment bodies. In presenting its findings, the panel has encountered difficulties in offering an analysis that does not at least imply some form of comparison. However, it believes that this report contains nothing that breaches the spirit of the limitation imposed.

Reports, chapters and sections

Throughout this analysis, specific terminology is used to distinguish between the different elements of the reports that were reviewed. The term “chapter” is used to refer to one of the fourteen component parts of the detailed assessment report (criminalization, confiscation, the FIU, law enforcement, international co-operation, general framework for preventive measures, customer identification, ongoing monitoring of accounts, record-keeping, STRs, internal controls, integrity standards, enforcement, and co-operation between supervisors). The term “section” is used to refer to each of the three elements within each chapter (i.e. Description/analysis, recommendations and rating). In total, therefore, the panel reviewed 23 reports, 322 chapters and 966 sections.
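As a purely illustrative aside, the structure just described can be written out to confirm the totals; the identifiers in the sketch below are invented for this note.

# Illustrative sketch of the report structure described above.
CHAPTERS = [
    "criminalization", "confiscation", "FIU", "law enforcement",
    "international co-operation", "general framework for preventive measures",
    "customer identification", "ongoing monitoring of accounts", "record-keeping",
    "STRs", "internal controls", "integrity standards", "enforcement",
    "co-operation between supervisors",
]
SECTIONS = ["description/analysis", "recommendations", "rating"]
REPORTS = 23

chapters_reviewed = REPORTS * len(CHAPTERS)             # 23 x 14 = 322
sections_reviewed = chapters_reviewed * len(SECTIONS)   # 322 x 3 = 966
print(chapters_reviewed, sections_reviewed)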

The “IAE” issue

19. During the pilot project the Fund and Bank were prevented, by a decision of their boards, from undertaking the assessment of law enforcement issues addressed within the methodology. As a result, an arrangement was constructed whereby these sections17 were completed by an Independent AML/CFT Expert (IAE), who was nominated by the relevant FSRB, but who was not under the supervision or control of either the Fund/Bank or the FSRB. The panel sought guidance from the co-ordination group as to whether it should review these sections of the reports, and, if so, how it should treat them for comparative purposes. The advice received was that the IAE sections should be reviewed and assessed as free-standing elements within the individual reports.

20. In the event, the panel found that it was impossible to adopt this approach for a number of reasons, specifically:

  • a) Not all the Fund/Bank reports had adopted the agreed convention of highlighting the IAE component in italics;

  • b) In some cases individual paragraphs, or even sentences, were interspersed with italics, thereby providing no real substance for analysis;

  • c) In one case where two chapters were each supposed to be split between the IAE and the Fund/Bank, all of one section was in italics and all of the other in normal typeface, suggesting either a formatting error, or a failure to apportion the work correctly; and

  • d) The ratings were set in normal typeface, thereby making it impossible to identify whether the Fund/Bank or IAE had been responsible for the decision.

As a result, the panel decided to consolidate the IAE component within the Fund/Bank report, especially since this appeared to make no significant difference to the overall analysis.

Approach to the review

21. The panel first met in Washington DC on 4-5 May to be briefed by the Fund/Bank, and to agree its working practices within the confines of the instructions provided by the coordinating group. The panel was provided with a list of country reports completed within the pilot project, from which it was requested to select a sample of up to twenty-five for review18. In making its selection, the panel had wished to review an equal number of reports (specifically, three) from each body responsible for the assessments. In the event, this was not possible, either because the total number completed by a particular body fell short of three, or because approval could not be obtained from the body, or its member countries, for the use of certain reports in the review. Therefore, the sample used in certain cases is not significant in statistical terms, and may not provide a reasonable example of the typical output from particular assessment bodies19.

22. Where there was a choice in excess of three, the panel opted, where possible, for reports that reflected both a spread of dates across the pilot project period, and a mix of countries in terms of size and geographical location. Unfortunately, the options available with respect to one of the bodies meant that a member of the panel was conflicted, as he had been an assessor for all the reports within the sample. As a result, this member had no involvement in any of the analysis of those reports.

23. Each member of the panel was allocated a group of chapters to review across all the reports, with the allocation based, as far as possible, on the members' respective specialist skills. In addition, each member was allocated one or more groups of reports to review in their entirety, in order to take a view on both the internal consistency of the reports and the level of consistency within individual groups.

24. The panel met a second time in London on 5-6 July to review progress to date, to ensure as far as possible that a consistent approach was being adopted by each member, and to pool initial views on the results of the analysis. A final meeting was held in Washington DC on 30-31 August to review and finalize the draft report, which was subsequently submitted to the Fund/Bank on 2 September for onward transmission to the coordinating group for comments. The coordinating group was also provided with copies of all the panel's working papers in line with the instructions. The draft report was amended to reflect some, but not all, of the comments received, and was resubmitted to the coordination group for a final round of comments on 4 October. The panel wishes to thank the coordination group for its contribution to the process of completing this report.

Review of Quality

Detailed Assessment Report Template

25. In an effort to harmonize the format of the detailed assessment reports, a box template20 was developed early in the pilot project (by December 2002), which all participants were encouraged to adopt, although there was never any formal agreement that this should provide the common standard for the structure of the reports. All except one of the reports under review adopted the general structure of the template (the exception being a report completed very early in the project), although there were some noticeable variations in the formatting of the reports that reflected differing approaches to the assessment. In a small minority of cases the reports show a criterion-by-criterion approach to the description and ratings sections, while the majority provide a consolidated narrative text that seeks to address the overall themes covered by the criteria. In the panel’s opinion this difference of approach does not give rise, in principle, to significant inconsistencies in the overall assessment. In the case of the criterion-by-criterion approach, it is more easily apparent to the reader whether all the relevant issues have been considered by the assessors, but it is more difficult to check the quality of the ratings as the material is spread widely throughout the document. With the more general approach, if the report is silent on a particular issue, it can only be assumed that the assessor has not considered it.

26. While the template undoubtedly helped in ensuring greater completeness and consistency of the reports, it did, unfortunately, contain several typographical errors and omissions that appear to have given rise to a number of inconsistencies in the reports. The extent of these depended on whether a particular group identified the problem and modified the template accordingly. However, where modifications were made, the various bodies did not adopt a consistent approach.

27. The more significant problems identified in the template by the panel are shown in the following table. As far as possible, the panel has not sought to be critical of reports that have followed the template rigidly, except in those cases where the error in the template creates such an extraordinary result that it could reasonably be expected that the assessor and relevant group should have recognized that a problem existed.

[Table of the more significant template problems not reproduced.]

Overview of the quality of reports

28. Great care needs to be exercised when considering the following analysis, since the various sections of each chapter of the reports are mutually dependent, with the completeness and quality of one section having a direct impact on others. The description and analysis section is paramount, since it basically provides most, if not all, the input required to make sense of the following sections. Therefore, when reviewing the recommendations and ratings section, the panel was able only to assess whether they reflected a logical progression (with supporting evidence) from the information provided by the description and analysis. The panel had no way of determining whether the basic information was either accurate or complete, even in those cases where all the criteria were expressly addressed. To be able to do so would have required access to the underlying information available to the assessors.

29. Perversely, a very brief and inadequate description may lead to the view that the recommendations and ratings accurately reflect the description (simply through a lack of evidence to the contrary), whereas a comprehensive description may provide more material on which to judge the recommendations and ratings to be deficient. Similarly, it is important to note that a statement of “good quality” in relation to the recommendations and ratings must be seen purely in the context of this exercise: it is impossible to assess whether the recommendations (or indeed, absence of them) and ratings are a true reflection of the needs and position of the jurisdiction, since the panel clearly could not determine the completeness or accuracy of the description in absolute terms. The limitations of the review in this area are particularly acute with respect to the discussion of ratings, and the relevant analysis in this report should be understood accordingly.

30. In the analysis of quality the panel has used the following terminology21:

“Good quality” is used to refer to work that may contain some deficiencies, but where these were considered to be minor.

A “material deficiency” is defined as one where the panel considered the deficiency to be important, but not such that it brought into question the overall value of the section.

A “serious deficiency” is one which the panel considered undermined the completeness, accuracy and value of the section.

This categorization could alternatively be considered in terms of high, medium and low quality, where high and medium are considered to be of an acceptable standard in relation to the questions that the panel was asked to address.

31. The following table displays the relative performance across all the reports, by indicating, for each section, the percentage of chapters judged to be of good quality, or to contain material or serious deficiencies.

[Table of quality results by section not reproduced.]

32. Annexes 2-4 contain summaries of the basis upon which the panel considered there to be deficiencies within particular sections of the reports. As referenced in the charts, the findings have been color-coded to indicate those which (individually or in combination) were regarded as either materially or seriously deficient in terms of meeting the criteria established by the panel’s terms of reference. As instructed, the panel has not disclosed the names of the countries to which the reports apply.

33. While only three of the sample of 23 reports were considered to be of an acceptable quality across all sections of all chapters, the number of reports containing no more than two chapters with serious deficiencies rises to 14 for the description/analysis sections and 16 for the recommendations sections. The ratings section gave rise consistently to greater problems, with under half of the reports achieving an acceptable standard in this area. However, as explained in the relevant section of this report, the assessment of the quality of the ratings posed particular challenges.

34. With respect to the description/analysis and recommendations sections, over 80% of the chapters across all reports were of a broadly acceptable standard (i.e. of high or medium quality). Generally, the chapters on the FIU, law enforcement, international co-operation, general framework, record-keeping and STRs showed greater consistency of quality than other chapters. There was typically less satisfactory performance through most of the chapters dealing with financial sector preventive measures, particularly with respect to ongoing monitoring of accounts, integrity standards and supervisory co-operation. There was, however, a significant divergence in the relative quality of the different sections of each chapter (both across the board and within individual reports), and this is analyzed in detail in the following sections of this report.

35. The coverage of the criteria relevant primarily to terrorist financing was generally of poorer quality than the coverage of those relating to money laundering. This factor arises mostly in respect of the chapters on the legal framework, which address the various UN Resolutions, and those on the FIU and STRs. In several reports there was only passing reference to terrorist financing, and there was very little effective analysis of the situation in the jurisdictions, especially in those reports completed within the earlier part of the pilot project. In some cases it was difficult to identify whether the absence of any reference to this issue arose because the financing of terrorism had not been addressed specifically within the jurisdiction (e.g. through the enactment of appropriate legislative measures), or whether the assessors had simply not focused on the matter24. The panel took the view that where the authorities themselves had not addressed the matter, the assessors should nonetheless have made explicit reference to this fact in all the relevant sections of the report, and that failure to do so impacted the quality of the overall assessment.

Description and analysis sections

36. The panel was asked to address six questions: does the description provide sufficient information to support the analysis and the assessment rating; are all substantive points raised by each of the criteria of the methodology addressed in the description; does the description cover all financial institutions required to be covered by the methodology; did the assessors consider the implementation of the laws and regulations; are the areas of weakness clearly and fully described; and is an analysis of effectiveness included?

37. As indicated, the quality of the description and analysis section essentially defines the overall quality of the chapter, since it provides the only basis on which to judge whether the recommendations appear logical and complete, and the ratings appropriate. Although this is created as a single section in the report template, the description and analysis are, in fact, two discrete components, one providing simply the factual element of what exists in law and practice, the other supposedly providing an expert analysis of the strengths and weaknesses of the system. A comprehensive description without the accompanying analysis would still be considered to fall short of what is necessary to deliver a quality assessment.

38. In reviewing this section, in particular, the panel came across some difficulties that clearly arose as a result of the translation of certain reports from the original language in which they were prepared25. This was reflected, in part, by language that was difficult to penetrate and, in part, by the use of terminology that did not match the accepted English usage as reflected in the FATF Recommendations and the methodology. Wherever possible, the panel has sought not to downgrade its assessment of these reports, but in some instances it could only conclude that the impenetrable nature of the language reflected a lack of clarity of expression on the part of the assessor in the original language. The use of unfamiliar English terminology has also been ignored, except where it has left considerable doubt as to what the assessor was actually addressing.

39. Eight reports contained no serious deficiencies in this section across all chapters, and a further six had no more than two sections with serious deficiencies. Across the spectrum of the reports approximately 85% of the chapters were considered to contain a section providing a generally acceptable description and analysis relative to the specified criteria, with 43% being assessed as good quality. Broadly, the chapters addressing the FIU, law enforcement, general framework, record-keeping and internal controls were more consistently of good quality than other chapters.

40. Where deficiencies existed, the panel identified eight grounds for considering that the section fell short of what might reasonably be required. These, together with the frequency with which they occurred, are listed in the following table.

[Table: grounds for deficiency in the description and analysis sections, with the frequency of their occurrence]

41. In considering these data, it is important to note that most reports that were identified as having material or serious deficiencies exhibited more than one of the above characteristics. In many cases one deficiency might be the cause of another. For example, a lack of detail in the description or an inadequate analysis of the facts might logically result in a poor discussion of the effectiveness of, and weaknesses in, the system. The most striking feature is the prevalence of the failure to address adequately the core criteria within the methodology. In fact, only three of the twenty-three reports in the sample were considered to have addressed these criteria fully within this section across all chapters, with five of the reports failing to cover the ground fully in 50% or more of the chapters. Those chapters addressing confiscation, international co-operation, customer identification and STRs were especially badly affected. This clearly has significant implications for the potential accuracy of the ratings in these chapters and for the completeness of the recommendations needed to address any weaknesses.

42. While the above table may appear to imply that the reports contained a relatively thorough discussion of weaknesses in national systems, it should be noted that the adequacy of that discussion was restricted, quite naturally, to those issues specifically identified in the descriptive sections in question. Therefore, where those descriptive sections were deficient in their coverage, there was frequently little or no basis for identifying the true extent of any weaknesses, with the result that the discussion of the latter (or absence thereof) may not have appeared deficient in this context.

43. Generally, the highest incidence of deficiencies occurred within the chapters covering confiscation, customer identification, ongoing monitoring of accounts, STRs, enforcement and supervisory co-operation. However, in terms of the seriousness of the deficiencies, the principal problem occurred in three chapters: customer identification, enforcement and supervisory co-operation. The lowest incidence of deficiencies was noted in the discussion of law enforcement, general framework, record-keeping and internal controls, although the most consistent delivery of good quality descriptions and analyses was reserved for the chapters on the FIU, law enforcement, general framework and internal controls.

44. A common cause of the deficiencies in the coverage of the legal issues was the failure to address fully (or even at all) the criteria on terrorist financing, while a lack of supporting statistics frequently weakened the discussions on STRs and the operation of the FIU. Within the chapters on preventive measures, there was a tendency to focus largely (or even exclusively) on the banking sector and to make only passing reference to the other core components of the financial industry. Rarely was it the case that the reports contained any discussion of key sectors beyond banking, insurance or securities. Even with respect to the banking sector, a high proportion of the reports failed to address fully the sector-specific criteria, and in several cases they were simply ignored in their entirety. These criteria are of particular relevance to the completeness of the chapters dealing with customer identification, ongoing monitoring of accounts and internal controls.

45. The panel frequently faced problems in identifying the source and reliability of the information and data provided in the descriptive sections. In some cases the reports relied on information provided by the authorities without there being any evidence that it had been verified by the assessors. While this is clearly acceptable in certain circumstances (e.g. in relation to statistical data), the same does not apply with respect to any qualitative statements. In more than one case the reports simply reproduced such statements made by the authorities, with no attempt at further analysis, resulting in uncertainty as to whether the assessor accepted the statement as accurate or whether he/she had failed to test the facts properly.

46. In some cases assessors referred to the difficulty in obtaining information, evidence or statistics as the reason for not covering one or more topics. This may have been caused by the time constraints under which evaluation teams worked on site or by insufficient co-operation from the authorities, but it resulted in reports that contained serious gaps in the core information, thereby reducing their overall value. In some such cases, the panel noted that the reports contained (apparently speculative) ratings even though the basic underlying information had not been obtained.

47. While it is clearly the intention that each chapter should be as self-contained as possible so that the reader may gain an understanding of the particular issue without having to read the entire report, there were several examples where information that was highly relevant to one chapter was contained elsewhere in the report without, at a minimum, any cross-reference being made. Where this occurred, it most commonly related to (a) the discussion of criminalization for which key information might be contained only in the introductory section of the report; (b) the general coverage of preventive measures in the financial sector, where the failure to cover certain criteria could occasionally be explained by little more than a passing reference to the structure of the financial system contained in the introduction or general framework sections of the report; and (c) terrorist financing, where again a single reference in one part of the report seems to have been considered as sufficient to justify the absence of any further discussion.

48. A very common omission was the failure to provide sufficient information on the composition and activities of the financial sector, information that is needed to set the context in which the preventive measures were being described. This occasionally affected the ability to determine whether all the key components of the financial sector had been considered in the analysis, or whether criteria (especially sector-specific criteria) that had been omitted from the discussion were justifiably so treated on the basis that the sector was unimportant. Such background material should reasonably have been provided at the start of the report, with cross-references in the relevant chapters.

49. The analysis of effectiveness of the laws and other measures posed a particular problem27. Nearly all the reports contained a discrete sub-section on this issue, in line with the standard template. However, the assessors’ apparent understanding of what was required in this section varied considerably within and between groups of reports. As the table in the above section indicates, approximately one-fifth of the chapters were considered not to provide an adequate analysis of the effectiveness of the measures described. This problem was particularly prevalent in the chapters addressing criminalization, confiscation, STRs, international co-operation and supervisory co-operation.

50. In a number of cases the sub-section was used either to add additional facts to those addressed in the descriptive part of the chapter, or simply to repeat the same information, either near-verbatim or in paraphrased form. Often where attempts were made to provide a true analysis, it tended to be short or superficial, without supporting evidence in the form of data or other material that might reasonably be expected to be available. For example, basic indicators and statistics rarely gave sufficiently detailed information to allow a serious analysis of the effectiveness of the system. When it was available, the statistical material was not consistently followed up by conclusions as to the real effectiveness of the components of the AML/CFT system under consideration (e.g. the STR regime, the FIU and law enforcement). In addition, there was very rarely any attempt to offer additional qualitative analysis of the indicators and statistics provided.

51. In the vast majority of cases it was impossible to determine the extent, if any, to which effectiveness was factored into the overall rating. However, in one instance, a breakdown was provided as between, on the one hand, legal and institutional compliance, and, on the other hand, implementation issues. From the final rating it was apparent that little, if any, weight was given to implementation.

52. The following provides a summary review of the performance of each assessment body in completing the description and analysis section of the reports.

  • APG - Only two reports were reviewed. This section in both reports was generally of a good quality on the legal and law enforcement issues, but tended to be less consistent when discussing the preventive measures. Most chapters on the preventive measures were still of a broadly acceptable standard, but there were several problems when addressing integrity standards, enforcement and supervisory co-operation. This pattern was broadly similar across both reports. Where deficiencies arose in the reports, the most common noted were a failure to address fully the core and sector-specific criteria, and inadequate general analysis.

  • CFATF - The three reports showed a good degree of consistency, with a very limited number of serious deficiencies. Generally consistent good quality was achieved on the law enforcement issues and some of the preventive measures (internal controls, integrity standards and supervisory co-operation). Typically, the performance on the legal issues was less satisfactory. One report was marginally of a lower quality than the other two. Where deficiencies arose in the reports, the most common noted were a failure to address fully the core criteria, and inadequate analysis of effectiveness.

  • ESAAMLG - Only one report was available for review and so no pattern could be determined. The description and analysis was acceptable in all except two of the chapters, but was of good quality in only two. The principal deficiency noted was a failure to provide sufficient detail in the description.

  • FATF - The section was of an acceptable standard across all reports, with a high incidence of good quality in the majority of the legal and law enforcement chapters, and some of those addressing the preventive measures (especially customer identification and ongoing monitoring of accounts). Poorer performance was noted on the coverage of STRs, internal controls, integrity standards and supervisory cooperation in two of the three reports. Generally, one report was of a consistently higher quality than the other two. Where deficiencies arose in the reports, the most common noted were a failure to address fully the core criteria, an inadequate analysis of effectiveness and failure to cover all relevant financial institutions.

  • GAFISUD28 - The section was of a good quality in a limited number of chapters addressing law enforcement and preventive measures, and was acceptable in coverage of the FIU. The performance was generally poorer in addressing legal issues, and in addressing the preventive measures in two of the reports. All the reports had a similar distribution of high, medium and low quality sections, but no particular pattern was discernible. Where deficiencies arose in the reports, the most common noted were a failure to address fully the core and sector-specific criteria, a lack of adequate detail in the description and inadequate analysis of effectiveness.

  • MONEYVAL - The section was of an acceptable standard across all reports, with a high incidence of good quality chapters. There was consistently good performance on the chapters covering international co-operation, ongoing monitoring of accounts, record-keeping and STRs. Performance was marginally weaker on the legal chapters, general framework, enforcement and supervisory co-operation. The pattern was similar across all three reports. Where deficiencies arose in the reports, the principal one noted was a failure to provide sufficient detail in the description.

  • IMF - The section was of an acceptable standard across all reports, with a high incidence of good quality chapters covering legal and law enforcement issues. Performance was marginally weaker on the preventive measures. One report had a significantly higher rate of good quality chapters than the other two. Where deficiencies arose in the reports, the most common noted were a failure to address fully the core and sector-specific criteria and failure to cover all relevant financial institutions.

  • World Bank - The section was generally of an acceptable standard for the discussion of legal issues, the FIU and much of the preventive measures, with consistently good performance on the general framework chapter. In two of the three reports there were particular weaknesses in the discussion of international co-operation and ongoing monitoring of accounts. One report was of a consistently higher quality than the other two. Where deficiencies arose in the reports, the most common noted were a failure to address fully the core and sector-specific criteria, and a failure to provide sufficient detail in the description.

  • Joint IMF/World Bank - Only two reports were available. One report was broadly acceptable across the range of chapters, with good quality performance on law enforcement issues and the majority of the preventive measures. The second report was of a significantly lower quality, achieving a broadly acceptable standard in most of the preventive measures chapters, but failing to achieve a good quality in any chapter throughout the report. Where deficiencies arose in the reports, the most common noted were a failure to address fully the core and sector-specific criteria, a failure to provide sufficient detail and an inadequate analysis of effectiveness.

Recommendations sections

53. The panel was asked simply to determine whether the recommendations offered in the reports addressed the weaknesses that had been identified. It was specifically requested not to consider whether the recommendations appeared reasonable and realistic in relation to the facts stated.

54. The panel considers it important to note that, as a consequence of its terms of reference, its analysis of the adequacy of the recommendations section of the reports must be interpreted with great care for two main reasons29. First, where the description and analysis section of any chapter has been identified as seriously deficient, the recommendations offered may still be regarded as entirely appropriate provided that they can be tracked to the underlying information, however poor. There is no means to determine whether the recommendations address the actual weaknesses that might have been identified from a more thorough description and analysis.

55. Second, approximately 7% of the chapters contained no recommendations. These all fell within the chapters on preventive measures, but there was no apparent reason why this should be the case. The absence of recommendations may have been either because no weaknesses were identified during a thorough analysis (a minority of cases), or because the assessor had simply not addressed the issues properly or fully and, therefore, had no basis for making a recommendation (a majority of cases). In both these scenarios the panel has simply noted the absence of recommendations and has made no further comment in its assessment. However, it is significant that in the vast majority of cases, the absence of recommendations occurred in conjunction with descriptions that fell short of the good quality threshold, thereby leaving much uncertainty as to whether there were issues on which recommendations should have been provided. Only in those cases where the description and analysis clearly identified a weakness, and this was not followed by a relevant recommendation, could the panel register a deficiency.

56. Across the spectrum of the reports, approximately half of the chapters were considered to be of good quality and to contain recommendations that accurately reflected, or could be fully justified by, the description and analysis. Approximately another one-third of the chapters contained deficiencies that were not considered to undermine the overall value of the section. Nine reports contained no serious deficiencies within this section of any of the chapters, and a further seven reports contained no more than two sections with such deficiencies.

57. The panel identified four main grounds for considering completion of this section of a report to contain deficiencies. These are listed in the following table, together with an indication of the frequency of their occurrence.

[Table: grounds for deficiency in the recommendations sections, with an indication of the frequency of their occurrence]

58. As indicated, the most common deficiency related to the failure to offer recommendations that properly addressed the weaknesses identified by the description and analysis. This basically involved a fault of omission in not explicitly providing potential solutions to the problems that the assessor believed to exist. The second main problem was the practice of making very general recommendations or ones that were so vague that it was not entirely clear what the authorities were expected to do in response, or whether the recommendation actually tied in with a weakness identified in the report. Thirdly, the panel identified a significant number of cases where the recommendations were not supported by any text in the description and analysis section. While the recommendations might have been perfectly sound and relevant in the context of the jurisdiction, there was no basis within the report on which to understand why the recommendation was being offered. Finally, in a small, but not insignificant, number of cases the panel found that the recommendations appeared to conflict with the analysis, either by seeking to address something that had been identified as perfectly satisfactory in the analysis, or by proposing something that potentially would not have the effect of addressing the weakness that had been identified. The panel has also grouped within this last category a very small number of cases where the recommendations were not contained in the relevant section of the report, but were included without prominence in text elsewhere, making them difficult to identify and their intent often difficult to discern.

59. The three main categories of deficiency were spread fairly evenly across the chapters that were affected, although the incidence of problems on the chapters covering criminalization, confiscation, customer identification and supervisory co-operation was generally higher than for the others. However, in terms of the seriousness of the deficiencies (i.e. those that were considered to undermine the value of the section), the most persistent problems appear to have occurred in the supervisory co-operation chapters. The lowest incidence of deficiencies occurred in the FIU and record-keeping chapters. Generally, those chapters dealing with the FIU, law enforcement, international co-operation, ongoing monitoring of accounts, record-keeping and STRs revealed a consistently good standard in matching the recommendations to the description and analysis.

60. The panel noted one issue that might superficially appear minor, but which can cause confusion for users of the reports, particularly the authorities. While each chapter contained a section listing the recommendations, there was also invariably a summary table of recommendations attached as an annex. In some cases the summary table did not contain all the recommendations made in the report, and in others the text of the summary was not an accurate reflection of what was stated in the main body of the report. In none of these cases was there any indication that the summary was anything other than a full and accurate compilation of the recommendations made elsewhere. Since it is likely that the summary will often be used as a checklist of action to be taken, it is clearly essential that it accurately reflects what the assessors consider necessary to remedy the weaknesses that have been identified.

61. The following provides a summary review of the performance of each assessment body in the recommendations section of the reports.

  • APG - Only two reports were reviewed. This section was generally of an acceptable standard across all chapters, with one report being of a consistently higher standard on law enforcement issues, and the other performing more strongly on the legal and preventive measures chapters. Both reports were of a good quality on the FIU and international co-operation chapters. No recommendations were offered on approximately one-third of the chapters addressing the preventive measures. The principal deficiency noted (appearing in one report only) was the provision of recommendations not supported by any description or analysis.

  • CFATF - The sections were of an acceptable standard across all chapters of two reports, which were of generally good quality in the majority of the preventive measures chapters. The third report was noticeably weaker on the preventive measures, with serious deficiencies in just under one-half of the relevant chapters. All the reports achieved an acceptable standard in addressing the legal issues, but without reaching the good quality threshold. Where deficiencies arose in the reports, the most common noted were the failure to provide recommendations to address adequately the weaknesses identified, and the provision of recommendations that were too general or vague.

  • ESAAMLG - Only one report was available for review. This achieved an acceptable standard in all but two chapters, with about one-half being considered of good quality. The most common deficiencies noted were the failure to provide recommendations to address adequately the weaknesses identified, and the provision of recommendations that were too general or vague.

  • FATF - One report was of a consistently good quality across nearly all the chapters, while the other two were of a consistently acceptable standard on the legal and law enforcement issues, but performed less well on the preventive measures. No recommendations were offered on approximately one-third of the chapters addressing the preventive measures. Where deficiencies arose in the reports, the most common noted were the failure to provide recommendations to address adequately the weaknesses identified, and the provision of recommendations that were too general or vague.

  • GAFISUD - An acceptable standard was achieved in about two-thirds of the chapters, with no particular pattern of strengths. Performance was weaker on the legal issues and in several of the chapters on preventive measures (especially supervisory cooperation). Where deficiencies arose in the reports, the most common noted were the failure to provide recommendations to address adequately the weaknesses identified, and the provision of recommendations that were too general or vague.

  • MONEYVAL - The reports were generally of a consistently good quality across the legal and law enforcement chapters and the majority of the chapters on preventive measures. They remained of an acceptable standard in most of the other chapters on preventive measures, but revealed greater deficiencies in those on integrity standards and supervisory co-operation. There was a low incidence of deficiencies, but the most common noted was the failure to provide recommendations to address adequately the weaknesses identified.

  • IMF - An acceptable standard was achieved across the board, with good quality sections in the majority of the chapters. Four of the 27 sections addressing preventive measures contained no recommendations. There was a low incidence of deficiencies, but the most common noted was the failure to provide recommendations to address adequately the weaknesses identified.

  • World Bank - One report was of a good quality in all but two chapters. The others were of consistently good quality in three of the chapters on preventive measures (general framework, record-keeping and enforcement) and achieved an acceptable standard on the legal issues and two more of the chapters on preventive measures (customer identification and STRs). A significant minority of the chapters was of a lower quality, but no particular patterns were detected. Where deficiencies arose in the reports, the most common noted was the failure to provide recommendations to address adequately the weaknesses identified.

  • Joint IMF/World Bank - Only two reports were available. One was of an acceptable standard across the board (except for one chapter), with good quality coverage in about two-thirds of the chapters, mostly in addressing legal and law enforcement issues. The second report was of an acceptable standard in approximately half the chapters, but with particular weaknesses in the legal and law enforcement chapters. Where deficiencies arose in the reports, the most common noted were the failure to provide recommendations to address adequately the weaknesses identified, and the provision of recommendations that were too general or vague.

Ratings sections

62. The panel was asked to address two questions only: is there any mismatch between the rating and the written findings; and do the ratings reflect only the laws and measures in place at the time of the assessment?

63. The review of these sections of the reports has to be accompanied by a similar health warning to that supplied with the recommendations sections. In undertaking its review, the panel has been entirely dependent on what is contained in the reports, since it has not had access to any of the underlying material available to the assessors. As a result, the panel could only determine whether there was a logical progression from the description, analysis and recommendations to the ratings. It could not identify whether the rating was an accurate reflection of the situation in practice in a jurisdiction, in particular in circumstances where the description and analysis might be weak or incomplete. Moreover, in answering the specific questions, it has been perfectly feasible for a rating to be considered entirely appropriate, within the context of this review, in situations where the description and analysis have been judged to be seriously deficient. The incidence of deficiencies highlighted above in the discussion on the descriptions and analysis sections indicates that there will be a significant number of cases where there may well have been a major discrepancy between the rating derived from analysis and the true position in the jurisdiction. On the other hand, the deficiencies noted by the panel do not necessarily imply that the rating is incorrect relative to the actual situation in the jurisdiction.

64. On the issue of whether the ratings took account only of the laws and measures in place at the time of the assessment, it is important to note that, once again, the panel was entirely dependent upon the information contained in the report. In some cases the reports actually indicated that assessors had given consideration to draft laws or to measures that had not yet been implemented, but where the report was silent on such matters, the panel could only assume that everything under discussion was in full force at the time of the assessment.

65. Finally, it has to be recalled that, for the most part under the 2002 methodology, the ratings for the FATF 40 + 8 Recommendations were produced as a composite from the criteria scattered throughout the methodology30. In the case of the individual criteria, most, but not all, were considered to have implications for specific Recommendations, and these were subsequently “mapped” to produce the composite rating that was reflected in an annex to the report. The panel has been able only to review whether the notional rating given in respect of the individual criteria within each chapter appeared to derive logically from the written findings. It had no basis from which to determine whether the composite ratings in the annex accurately reflected any weighting that the assessor may have given to the various criteria, since this was not recorded in the reports.
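
To make the mechanics described in paragraph 65 more concrete, the following is a minimal, purely illustrative sketch (in Python) of how criterion-level ratings might be combined into a composite rating for a single Recommendation. The criterion identifiers, the mapping to a Recommendation, the numeric scale and the weights are all hypothetical placeholders: as noted above, the reports did not record the weighting actually applied by the assessors.

# Illustrative sketch only - not the assessors' actual procedure, mapping or weights.
# Criterion-level ratings scattered across chapters carried "implications" for
# particular Recommendations and were combined into a composite rating in the
# report annex; the weighting applied by assessors was not recorded in the reports.

RATING_SCALE = {"NC": 0, "MNC": 1, "LC": 2, "C": 3}  # non-, materially non-, largely, compliant
SCALE_TO_RATING = {value: label for label, value in RATING_SCALE.items()}

# Hypothetical criterion-level findings, mapping and weights (placeholders only).
criterion_ratings = {"19.1": "LC", "19.2": "NC", "48.3": "C"}
criterion_to_recommendation = {"19.1": "R10", "19.2": "R10", "48.3": "R10"}
criterion_weight = {"19.1": 1.0, "19.2": 1.0, "48.3": 0.5}

def composite_rating(recommendation):
    """Combine the criterion-level ratings mapped to one Recommendation into a
    rough weighted average, rounded back onto the four-point rating scale."""
    relevant = [c for c, rec in criterion_to_recommendation.items() if rec == recommendation]
    total_weight = sum(criterion_weight[c] for c in relevant)
    weighted_sum = sum(RATING_SCALE[criterion_ratings[c]] * criterion_weight[c] for c in relevant)
    return SCALE_TO_RATING[round(weighted_sum / total_weight)]

print(composite_rating("R10"))  # -> "MNC" with these hypothetical inputs and weights

The only point of the sketch is that, with the weights unrecorded, different but equally plausible weightings of the same criterion-level findings could produce different composite ratings, which is precisely what the panel was unable to verify from the reports.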

66. Across the spectrum of the reports, just under half of the chapters were considered to contain ratings that accurately reflected the situation portrayed by the description, analysis and recommendations, whatever the quality of the preceding sections, while approximately another one-third were of an acceptable standard as they contained deficiencies that did not undermine the value of the section31. However, only four reports were considered to be free of serious deficiencies across all chapters, with another six containing no more than two chapters with serious deficiencies. The panel has identified four categories of deficiency in the ratings section. These are listed in the following table, together with an indication of their occurrence.

[Table: categories of deficiency in the ratings sections, with an indication of their occurrence]

67. As indicated, the panel was able to draw a distinction between those situations where there was an obvious mismatch between the written findings and the ratings, and those where there was a lack of clear evidence to support the rating. The former might be characterized by a chapter containing a comprehensive description and analysis from which the conclusions drawn about the rating appeared to conflict with the text. The latter typically involved a situation where the description and analysis addressed relevant issues, but with a lack of sufficient detail from which to determine whether the rating was fully justified. A third category involved ratings that were completely unsupported by any written findings, and where it was impossible to know even whether the assessor had reviewed the issues relevant to the rating. Finally, there were a number of cases where no rating had been entered in the relevant chapter of the report, but where a composite rating appeared in the summary table at the end of the report32. These cases are distinct from those where there is no rating provided by some assessing groups under the enforcement chapter because of an error in the template (see discussion earlier in this report).

68. Generally, the highest incidence of deficiencies occurred in the chapters dealing with customer identification, ongoing monitoring of accounts and STRs, while the lowest incidence was within the chapters covering record-keeping, integrity standards and general framework33. However, in terms of the degree of seriousness of the deficiencies, the chapters on customer identification, ongoing monitoring, integrity standards and supervisory co-operation fared particularly badly, while there was a relatively high rate of good quality performance on international co-operation, record-keeping and integrity standards. The reference to the chapter on integrity standards within both the worst and best performers can be explained by the fact that where a deficiency occurred it tended nearly always to be of a serious nature, i.e. the rating was either clearly supported by the analysis or it was clearly not.

69. There was uneven treatment across the reports in the way that the ratings were justified. In some cases, the ratings were simply stated in isolation, and it was necessary to make an assumption from the text on the significance and weighting that the assessor attached to the various criteria that went into the rating. In other cases, the rating section contained a summary of the factors that the assessor took into account when making the decision. While these summaries may not always have accurately reflected the sum total of the analysis that seemed relevant to the determination of the rating, they provided a useful guide to what the assessors themselves considered to be the key elements.

70. Generally, where there was a clear mismatch between the written findings and the rating, the failing was to record a higher rating than appeared to be justified. In only a very small minority of cases did the panel identify circumstances in which the rating was considered to be unduly harsh relative to the description and analysis.

71. Two of the reports reviewed were of federal states where the legal and regulatory frameworks governing AML/CFT in each component jurisdiction were different. In one case the differences were significant. While these complexities were adequately addressed in the description, analysis and recommendations sections by simply treating each part of the federation separately, the ratings, on the other hand, were provided in the form of a composite for the federation. In several instances this resulted in major discrepancies between the description relevant to one part of the federation and the composite rating, and it was impossible to tell how the assessor may have determined the respective weighting that applied to the composite. It appears that, in practice, the tendency may have been to set the rating on the basis of the jurisdiction that fared better in the assessment34.

72. As indicated above, there was a limit to the extent to which the panel could determine whether the ratings reflected only the laws and practices in place at the time of the assessment. Generally, there was no information contained within the reports to identify the assessors’ policy with respect to cut-off dates, i.e. whether they would take into account any measures that might have been implemented between the assessment mission and the finalization of the report. In only a small number of cases was it apparent from the text of the report that the ratings had been based on draft legislation or legislation that had not yet been fully implemented35. A further (minority) variant involved rating on the basis of laws in force at the time of the onsite visit, while noting the impact of legislation due to enter into force at some time thereafter. Yet another variation occurred in one instance where the work of the IAE took place a considerable time after the core assessment was undertaken and following the enactment of additional legislation. This resulted in confusion as to the state of play at any one time, and a lack of consistency and clarity in the application of a policy on the cut-off date.

73. Another uncertainty relates to the treatment of effectiveness in the ratings. As indicated in the review of the description and analysis sections, the discussion of effectiveness of laws and other measures was patchy. Only very exceptionally did the ratings section indicate whether the analysis of effectiveness had been taken into account when arriving at the rating, and nowhere was there any indication as to the extent to which it might have influenced the rating, particularly in circumstances where the letter of the law might have appeared well structured. In one report a rating was provided against not only the legal and institutional arrangements, but also the effectiveness of implementation in respect of each criterion. These then appeared to be combined into a composite rating for the FATF 40+8, but it was unclear on what basis this was accomplished.

74. The following provides a summary review of the performance of each assessment body in completing the ratings section of the reports.

  • APG - Only two reports were reviewed. The legal and law enforcement chapters were generally of a good quality, but performance was more mixed when addressing the preventive measures. There were particular deficiencies in the chapters on customer identification, ongoing monitoring of accounts and supervisory cooperation. There were no overarching causes for the deficiencies, but the general grounds noted were a mismatch between the ratings and the description and analysis, a lack of evidence to support the rating and the failure to provide a rating where appropriate.

  • CFATF - An acceptable standard was achieved across the legal and law enforcement chapters of all three reports, and good quality was achieved in several of the chapters on preventive measures within two of the reports. One report was noticeably weaker than the others in addressing the preventive measures, with serious deficiencies across several chapters. There was a consistent practice of not providing a rating in the enforcement chapter. Where deficiencies arose in the reports, the most common noted were a mismatch between the ratings and the description and analysis, and a lack of evidence to support the rating.

  • ESAAMLG - Only one report was available for review. This revealed no particular pattern of performance. There were strengths and weaknesses in all three areas of description/analysis, recommendations and ratings. The principal deficiency was a mismatch between the ratings and the description and analysis.

  • FATF - An acceptable standard was achieved across the legal and law enforcement chapters of all three reports, with a high incidence of good quality sections. One report was mostly of a good quality in addressing the preventive measures, but the other two presented more difficulties in this area (specifically on STRs, integrity standards, enforcement and supervisory co-operation). Where deficiencies arose in the reports, the most common noted were a mismatch between the ratings and the description and analysis, and a lack of evidence to support the rating.

  • GAFISUD - Two of the reports were mostly of a good quality across the chapters on preventive measures (with the exception of the chapter on ongoing monitoring of accounts), but the third showed a number of deficiencies in this area (customer identification, ongoing monitoring and internal controls). All the reports performed less well when addressing the legal issues. There was a consistent practice of not providing a rating in the enforcement chapter. Where deficiencies arose in the reports, the most common noted were a mismatch between the ratings and the description and analysis, and a lack of evidence to support the rating.

  • MONEYVAL - The chapters were of consistently good quality when addressing the legal and law enforcement issues. Performance was more varied in the chapters on the preventive measures, with approximately two-thirds achieving an acceptable standard. There were weaknesses in the chapters on the general framework, ongoing monitoring of accounts, integrity standards and supervisory co-operation. Where deficiencies arose in the reports, the most common noted were a mismatch between the ratings and the description and analysis, and a lack of evidence to support the rating.

  • IMF - An acceptable standard was achieved across the board, with the exception of one chapter in one report. There was a high incidence of good quality sections. There was a consistent practice of not providing a rating in the enforcement section. Few deficiencies were identified, but the principal one noted was a mismatch between the ratings and the description and analysis.

  • World Bank - An acceptable standard was achieved in the chapters addressing legal issues, but performance was more varied on law enforcement and preventive measures. No particular pattern was noted, but one report was significantly better in overall quality than the other two in the chapters on law enforcement and preventive measures. There was a consistent practice of not providing a rating in the enforcement section. Where deficiencies arose in the reports, the most common noted were a mismatch between the ratings and the description and analysis, and a lack of evidence to support the rating.

  • Joint IMF/World Bank - Only two reports were available for review. One report was of an acceptable standard across the board. The second was generally of an acceptable standard in the chapters addressing the preventive measures, but contained a number of deficiencies in the legal and law enforcement chapters. There was a consistent practice of not providing a rating in the enforcement section. Where deficiencies arose in the reports, the most common noted were a mismatch between the ratings and the description and analysis, and a lack of evidence to support the rating.

Review of Consistency

Overview

75. In considering the overall consistency of assessments the panel was asked within its terms of reference to identify whether the reports were reasonably uniform in their format, content and quality, or whether there were any significant variations between individual reports or between groups of reports. However, in the attached instructions the panel was further asked to seek to identify whether reports have the same ratings for comparable levels of deficiency.

76. The panel believes that the preceding discussion in this report clearly indicates that there were marked variations in the quality of the assessments both within and between individual reports, and no further detailed analysis of the consistency of quality is offered in this part of the report. In summary, most assessment reports exhibited a range of high, medium and low quality chapters, with a similar range of quality within the component sections. Moreover, with the limited exceptions mentioned in the detailed analysis above, there were generally few trends across the range of reports to suggest that the lack of consistency in the quality was the result of extraneous factors associated with, for instance, the structure of the methodology36. A minority of assessor groups clearly presented a product of more consistent quality across the board, while most other groups delivered reports of an acceptable quality in most, but not all, instances. Only one grouping tended to show relatively widespread problems with the overall quality of its reports.

Format of reports

77. Annex 5 records some observations on areas in which the format and content of the reports may have varied both within any one group and between groups. In summary, this shows that there were a number of relatively minor variations in the format of the reports, both within and between groups, but the panel does not believe that these variations, in themselves, will have affected the quality of the underlying description and analysis.

78. During the early stages of the pilot program a template was developed for the presentation of the detailed assessment report37. While the basic component parts of this template have been widely used in the sample of reports reviewed by the panel, there have been variations. Some groups adopted the boxed template which allowed for either a narrative or note-type style of presentation, while others retained the overall structure, but dispensed with the boxed format and used a normal narrative style presented in standard paragraphs. However, the same format was not always consistently used for reports prepared by three of the groups, with two using a combination of the main formats, and the third adopting a standard template for one report, but an entirely different criterion-by-criterion template for another38. There was no clear explanation as to why different formats might have been used for reports prepared by the same group.

79. Generally, the overall structure of the reports was similar, but with some variations that affected the ease of use of the reports. A minority contained a table of contents and a list of acronyms, both of which are important aids to the reader of reports of this length. In some cases the introductory section of the report contained purely factual information on the legal, institutional and financial framework of the country, but in a minority of cases, this section extended almost to a form of executive summary. In other cases, the introduction failed to provide an adequate description of the financial sector, instead placing this information at a later stage in the report, where it was less easy to refer to.

80. While most of the reports contained a section entitled “Analysis of Effectiveness”, the use to which this was put varied among the reports. Many attempted some form of assessment of the effectiveness of the measures described, although the quality and nature of the analysis varied substantially. Other reports used this section simply to expand on the factual information contained in the descriptive section of the chapter, or simply to repeat information already contained within the description. In two reports (from separate groups) a different terminology was used for the chapter heading (“analysis” or “results achieved”), and it was unclear whether these were intended to address the issue of effectiveness, since neither did so in practice.

81. All the reports contained a set of summary tables as an attachment. These consistently included a schedule of ratings against the FATF 40 + 8 and a recommended action plan. In all but one of the reports the schedule of ratings had been completed. Most groups also included a summary of the effectiveness of the measures implemented, but one group substituted this with a summary of “problem areas to be resolved”, while another employed both styles in separate reports. The summary headings suggest that the tables had different objectives, but this is unclear from the contents.

82. The length of the reports varied significantly. In some cases this could be explained by the nature of the jurisdiction being assessed (e.g. when the component parts of a federation had to be considered separately), but this was far from always the explanation. Clearly, the quality and depth of the description and analysis will be reflected, in part, in the length. For instance, in at least one report, a high degree of fine detail was compounded by a tendency towards repetition resulting in a report of some 140 pages excluding the attachments. By way of contrast, another report adopted a minimalist approach to the description and analysis, resulting in just 28 pages in total. The panel also noted that the extent of relevant detail provided in several reports varied significantly between chapters, although this may not always have been a reflection of the quality of the product.

Ratings

83. The supplementary request to the panel to identify whether reports have the same ratings for comparable levels of weaknesses in national systems has presented a serious challenge. In principle, given the high rate of deficiencies noted in relation to the ratings sections of the reports, it might be expected that there would be a significant number of cases where inconsistencies in the application of ratings could be identified. However, the situation in each jurisdiction almost invariably has some key differences from otherwise broadly similar circumstances elsewhere; the situation described in each report is usually presented differently; and, as indicated in the detailed analysis above, the quality of those descriptions varied enormously, thereby not providing a consistent basis on which to determine properly whether the underlying situations are equivalent. Therefore, it has been extremely difficult to identify reliable comparators that might form the basis for such a review.

84. However, the panel has been able to identify a limited number of examples where apparent discrepancies have arisen within a single group and between groups. It should be noted that, perhaps not surprisingly, in all the examples identified the panel had also considered that the quality of the rating relative to the description had fallen short of the good quality threshold in at least one of the comparators. The following box summarizes the cases identified by the panel.

Box: Examples of inconsistency of ratings in apparently comparable circumstances

Case 1: Treatment of SRVII in customer identification section within same group of reports.

Report A states: Institutions must keep proper records of fund transfers. However, the source of remittances is not checked. No specific guidelines have been issued for these types of transactions. (Rating given: Largely compliant)

Report B states: Paragraph 102 of the Guidance Notes states that in the case of electronic transfers, regulated businesses should retain records of payments made with sufficient detail to enable them to establish the identity of the remitting customer and as far as possible the identity of the ultimate recipient. (Rating given: Compliant)

Report C states: There are no legislative requirements for financial institutions, including money remitters to include accurate and meaningful originator information on funds transfers and related messages that should remain with the transfer or related message through the payment chain. (Rating given: Non-compliant)

Case 2: Treatment of R14 in Ongoing Monitoring section within same group of reports

Report A states: The existing rules do not specifically require financial institutions to pay special attention to complex, large unusual transactions or unusual patterns of transactions. Neither are there rules in place, which require an intensified monitoring for higher risk accounts. (Rating given: Non-compliant)

Report B states: There are no rules which specifically require banks or credit organizations to pay special attention to complex, large unusual transactions or unusual patterns of transactions. Neither are in place any rules, which require an intensified monitoring for higher risk accounts. (Rating given: Materially non-compliant)

In both reports the recommendations offered by the assessors are identical.

Case 3: Treatment of R15 in STR section between two different groups

Report A states: Currently, there is no one centralized agency performing the function of a financial intelligence unit. Nevertheless, Section 20 of the [law] requires all banks and financial institutions to report all unusual or large transactions, which have no apparently genuine economic or lawful purpose and which in the bona fide professional judgment of the financial institution could constitute or be related to illegal or illicit activities, corruption or corrupt practices ... There is no specific obligation on financial institutions to report terrorism-related suspicious transactions. (Rating given: Largely compliant)

Report B states: The FIU has not been established. According to Section 13(2) of the [law], financial institutions should report all suspicious operations to the “Supervisory Authority” ... Reporting of suspicious transactions is mandatory for all accountable institutions listed under Section 13 (1) and (2) of the [law] ... The [law] does not provide for the reporting of STRs relating to the FT. (Rating given: Materially non-compliant)

Case 4: Treatment of R12 in Record-keeping section between two different groups

Report A states: Nonetheless, [the supervisory authority] needs to further develop onsite inspection procedures to ensure compliance with customer information, record-keeping and STR reporting requirements. (Rating given: Largely compliant)

Report B states: However, implementation of the requirement is open to question until all financial institutions are subject to regular onsite inspections. (Rating given: Compliant)

Commentary on Matters Arising from the Review

85. The preceding technical analysis of the quality and consistency of the assessment reports complies with the original terms of reference given to the panel. However, during the comments stage on this report, the panel was requested to extend its work to include, first, a commentary on matters that it thinks pertinent to an understanding of why the deficiencies might have arisen, and, second, an indication of any measures that might, to its knowledge, have already been taken to address some of the issues. The addition of this section was fully supported by the panel, since it was conscious that the pilot project was, by definition, a learning process, and that significant improvements in procedures have been implemented since the introduction of the 2004 methodology. In offering the following comments, the panel would note that it has not undertaken any additional research into the practices of individual assessor bodies in completing their assessments, nor has it completed a review of any initiatives that might be in train. This commentary, therefore, is drawn entirely from the panel members’ personal experience of the global assessment programs.

86. The following points are not offered in any particular order of priority. Where the panel believes that relevant measures are being taken to address an issue, these are described in the subsequent shaded box.

  • The 2002 methodology was a composite document in which the criteria were drawn from a range of sources (FATF, Basel, IOSCO, IAIS), and also included some original material considered relevant to particular chapters. The criteria were rarely cast in language that directly related to that used in the 1996 FATF Forty Recommendations and the 2001 Special Recommendations on Terrorist Financing. Some of these criteria could be mapped quite simply to the Recommendations, but others could not (and, indeed, some of the criteria might have appeared of questionable relevance to the Recommendations to which they were linked). Since assessors were required ultimately to provide a rating against specific Recommendations, there may well have been a tendency to focus primarily on those criteria that could be mapped to the Recommendations, while giving others far less attention. This might particularly have been the fate of the sector-specific criteria, few, if any, of which were directly mapped to the Recommendations. The sector-specific criteria were also displaced from the core criteria for each chapter, perhaps leading assessors to consider that they were of only minor importance.

    The 2004 methodology draws its criteria only from the revised 2003 FATF Recommendations, and uses language that corresponds directly to that used in the Recommendations, glossary or interpretive notes. All the criteria are grouped with direct reference to the relevant Recommendation, so that the full range of issues that needs to be addressed in considering compliance with any one Recommendation is clear to assessors.

  • Many of the ratings were a composite, drawing on criteria scattered under different chapter headings. In theory, this involved the assessors in providing a rating in each chapter that simply reflected the “implications” of compliance with the criteria for the overall rating for the Recommendation to which they mapped. The final rating appearing in the summary table at the end of the assessment report would then represent something close to a weighted average of the “implications”. In practice, it appears that many assessors sought to ensure that the individual chapter ratings matched the composite in the summary table, thereby giving rise to occasional inconsistencies between the description and analysis in the chapter and the “implications” rating.

    The 2004 methodology requires assessors to provide only a single rating for each Recommendation. In addition, the definitions of the compliance ratings are tied directly to the rate of compliance with the underlying criteria.

  • It was rarely the case that the assessment reports addressed parts of the financial sector beyond banking, insurance and securities, and banking was almost invariably given dominant treatment. While this may have reflected the relative importance attributed to these sectors by the assessor, it may also have been a consequence of the lead given in the methodology by its partial reliance on Basel, IOSCO and IAIS standards. In addition, the introductory section of the methodology is very brief in its reference to other key sectors of the finance industry. It is also possible that assessors were heavily influenced by the concept of what constituted “macro-economically significant” sectors in determining the limit of their coverage. In the case of the Fund/Bank assessments, coverage of the non-macro-relevant sectors fell to the IAE, who, usually being a law enforcement expert, may not typically have had the expertise to complete this part of the work. These factors may have resulted in a very broad-brush approach being taken, and may not always have helped identify areas of greatest threat from money laundering or terrorist financing within a particular jurisdiction.

    The 2004 methodology focuses on the FATF standards, and while cross-references are made to the other relevant standards, the instructions to assessors make clear the range of financial institutions that need to be considered. In addition, the 2003 revision to the FATF Recommendations now focuses on the concept of the risk of money laundering, and the methodology requires assessors to consider all financial institutions (as defined), except to the extent that there is a proven low risk of money laundering or terrorist financing in any particular jurisdiction. Also, the fact that the Fund/Bank are now mandated to assess against the entire standard goes some way towards addressing the issue.

  • Some of the chapters in the methodology are clearly more complex than others and require a broader range of expertise. Some, such as those dealing with record-keeping and STRs, contain criteria that are of a largely factual nature, while others require much greater interpretation and judgment. In addition, some of the issues (e.g. the FIU) require a more specific focus of expertise from the assessor, but others (e.g. several of the preventive measures) require a good understanding of a broad range of practices and procedures across the financial sector. With respect to the legal issues, assessors must have knowledge of financial, criminal and international law. It clearly poses a challenge to obtain assessors who have such a range of expertise, and the reports may well reflect the focus of the individuals’ particular skills, rather than providing balanced coverage of all the matters relevant to the jurisdiction.

    With the introduction of the 2004 methodology, there was a realisation that more attention needed to be placed on assessor training. Several training seminars for assessors have already been undertaken over the last 12 months, involving almost all FSRBs. Work is now in hand to develop a common training package that will be available to all the bodies undertaking assessments. However, while this will assist assessors to understand more fully the assessment process and the underlying objectives of the criteria, it cannot deliver the levels of technical expertise that assessors need to have to undertake the work effectively. In part, the breadth of skills required to address some of the issues may be resolved by the increasing practice of having more than one financial sector expert on the assessment team. However, this has been driven largely by the volume of work involved in using the 2004 methodology, although it does provide an opportunity to help ensure that assessment teams include an appropriate breadth of skills relevant to the jurisdiction being assessed.

  • The panel noted that a significant number of the deficiencies were such that it would have expected them to have been identified through a basic quality control process. From the results of the review (in particular, when considering the consistency of quality) it would appear that some assessor bodies exercise greater central control over the process than others. There is a clear benefit in some form of central quality control procedure, but this has significant resource implications. Where such a control is exercised, however, it is important that it represent more than a simple pro forma review and that it also ensure appropriate technical content.

  • The panel noted that the discussion of the effectiveness of the AML/CFT measures was often of indifferent quality in the reports, and it was frequently unclear what weight had been given to effectiveness in the ratings. Relatively few assessors appeared to have a clear perception as to what they should include in this section, and in many cases it was used simply to provide additional factual information, rather than a qualitative analysis. Most reports failed to use statistics to any great purpose in this area, especially when considering the effectiveness of the STR regime or the performance of the FIU or law enforcement regime. The absence of any guidance for assessors on this issue in the methodology may have been part of the cause of the relative weakness.

    The 2004 methodology contains some guidance to assessors on the measurement of effectiveness, and specific criteria have been included in relation to the maintenance of relevant statistics. However, it has been recognised by the assessor bodies that there remains a problem of consistency in how effectiveness is treated, particularly when it has to be factored into the ratings. It is understood that further work is to be undertaken by the assessor bodies on this issue.

  • Several reports lacked adequate detail in their description of legal and other measures in place. In a minority of cases an explanation was provided to the effect that the authorities had been unable to provide the necessary information, but in others no explanation was offered, leaving the panel to conclude that the assessors had not covered the ground fully, possibly because of a lack of time onsite to address the range of issues. In practice, this may not always have been the case, particularly in the context of developing countries where the structures may be less extensive than in more developed jurisdictions, and where it may have been considered unnecessary to address certain issues. In such cases, however, it would have been important for the assessors to state clearly the limitation of the information available to address the criteria, rather than simply to have remained silent. Related to this is the uncertainty in some reports about the value of information that has been provided to the assessors by the authorities. Some information may justifiably be taken at face value, but other information (especially that of a qualitative nature) would require validation through the assessment process. This distinction was often not clear.

    The procedures developed in conjunction with the 2004 methodology appear to place greater emphasis than before on submission by the authorities of the advance questionnaire that is designed to elicit essential factual information. In principle, this would provide assessors with more time onsite to focus on filling the gaps and testing the validity of what has been provided, rather than having to focus simply on acquiring the basic information. It is understood that, in principle, some assessor bodies now take the position that an assessment may be postponed if the advance information has not been provided in a timely fashion.

  • The panel noted that the coverage of terrorist financing issues was generally of a lower quality than that of AML issues. This is, perhaps, not surprising given, first, the fact that this topic did not form part of the FATF mandate until October 2001; second, the limited amount of guidance available when the 2002 methodology was in use; and, third, the relative lack of practical experience in this complex area for many assessors, particularly those from most of the FSRBs.

    The development of the interpretive notes for the Special Recommendations now means that there is substantially more guidance available to assessors. In addition, the 2004 methodology provides far more explicitly for coverage of the SRs than was the case in its predecessor. However, the question of broader practical exposure to the subject in some of the FSRBs may remain an issue.

  • In several instances the panel was critical of chapters for which the relevant information was available elsewhere in the report, but not repeated in the chapter itself. This criticism was made partly because the terms of reference required the panel effectively to deconstruct the report into discrete chapters and sections, but also, in large part, because the panel believed that most readers would not review the entire report, but would dip into those sections that addressed a particular interest. Therefore, the failure to cover all the relevant information within a chapter (or to provide explicit cross-references to where the data might be found) would seriously detract from the value of the report. Associated with this was the need to be able to identify where relevant background information might be found to put the report into context. In some cases the panel was unable to understand easily (or at all) such basics as the structure of the financial sector or the relative importance of different institutions, either because such background was lacking, or because it was dispersed widely throughout the report.

    The report template accompanying the 2004 methodology has an improved structure for the introductory section of the report, requiring assessors to offer background information on the country, to provide an overview of the financial and other relevant sectors of the economy, and to describe the institutional framework for combating money laundering and terrorist financing.

  • The panel noted that the 2002 methodology provided little by way of guidance to the assessors in the use of the document or the construction of the resulting report. This may explain some of the inconsistencies (both in form and substance) within the reports, differences in interpretation of some of the criteria, and differences of procedures that may have impacted the overall quality of reports. It is undoubtedly the case that some assessor bodies developed their procedures, style and interpretation over the course of the pilot project (particularly in terms of the structure of the reports), but this experience was not necessarily shared with others. While this developmental approach is clearly beneficial, it apparently led to some inconsistencies within and between different groups.

    The 2004 methodology is accompanied by an assessor handbook, the structure of which has been adopted by all the assessor bodies. This handbook provides extensive guidance about the assessment process and methodology, and includes templates for the pre-assessment questionnaire and the assessment report. Training seminars will also involve alerting assessors to the need to follow rigorously the structure of the templates. All the assessor bodies have agreed the common framework, and a structure has been established (through the FATF Working Group on Evaluations and Implementation) to allow for exchanges of experience and the mutual resolution of interpretational, consistency and similar matters.

  • Generally, the incidence of deficiencies identified in the ratings section of the reports was higher than in the other parts. This may, in part, be explained by the fact that this was the first occasion on which this rating system had been used for assessments against the FATF standards, and, as a result, experience in making the decision between different levels of compliance was very limited. Also, in considering the appropriateness of the ratings the panel was able only to take a view on whether there appeared to be a logical progression from the description and analysis, through the relevant recommendations to the rating. Any apparent discrepancies were inevitably laid at the door of the assessment team. However, the panel is aware that for the FATF and FSRBs the final arbiter on the report, including the ratings, was the respective plenary body. There was no such “independent” body to which the IMF and World Bank submitted their reports. In some cases it is possible that the plenary may have taken a different position on the ratings from the assessment team (especially after interventions on the floor from the assessed jurisdiction), without necessarily challenging or requiring amendments to the descriptive section of the report. This may have resulted in apparent discrepancies that the assessors could not directly control.

  • An added difficulty when considering the ratings offered within each chapter (and subsequently in the summary table) was that relatively few of the reports contained summary descriptions of the basis upon which the ratings had been given. This made it very difficult for the panel to determine what factors the assessors might have taken into account. The practice of including relevant information developed within only a small minority of groups over the period of the pilot project.

    The 2004 methodology now requires assessors to provide a summary of the factors that underlie each of the ratings awarded.

  • The panel was critical of a number of chapters where it considered the recommendations to be so general or vague as to bring into question what it was that the authorities might be expected to do in response. The panel took the view that the objective of the recommendations should be to provide practical guidance to the authorities as to what measures would be necessary to remedy a weakness, rather than simply being used as a mechanism to reinforce the message that a weakness exists.

  • The review of the consistency of the format of the report clearly indicated a number of variations both within and between assessor bodies. For the most part, the panel did not consider that these variations, in themselves, affected the relative quality of the reports, but they did make comparisons of the scope and conclusions of the assessments more difficult. Outside the confines of this review, such difficulties might affect the readers’ relative perception of jurisdictions.

    The 2004 methodology is accompanied by a standard report template that has been adopted by all assessor bodies. This development is particularly helpful in view of the decisions made by the FATF and some of the FSRBs to publish their detailed assessments, as this will make it easier for the reader to navigate through, and compare more directly, the reports.

  • Some of the reports provided to the panel were in translation from the language in which they were originally prepared or adopted by the relevant body. As indicated in the main body of the report, the panel sought not to let the quality of the translation influence its review even where the terminology was unclear. However, as a general principle, it considers that, if translations are in future to be made available to a wider audience (e.g. through a policy of publication), it will be important to ensure, at least, that the translations adopt the generally accepted terminology used in the official text of the FATF Recommendations, so that readers may clearly understand the context.

                                                                                        ********

Richard Chalmers

Bill Gilmore

David Meader

Bernard Turner

Boudewijn Verhelst

                                                                                 6 October 2005

Annex 1

Evaluation Criteria and Guidance Notes

I. The Review

1. Quality will be evaluated for each of the three components of an assessment report:

  • a) Description and Analysis including “Analysis of Effectiveness”

  • b) Recommendations and Comments

  • c) Ratings

A. Description and Analysis:

2. Assessors should provide a sufficient description of the measures in place for all sections of the Methodology: criminal justice measures, international co-operation and preventive measures for financial institutions. All substantive points raised by the criteria in the Methodology should be addressed.

  • Issues for the Panel

  • a) Does the description provide sufficient information to support the analysis and the assessment rating?

  • b) Are all substantive points raised by each of the criteria of the Methodology addressed in the description?

  • c) Does the description cover all financial institutions required to be covered by the methodology?

  • d) Did the assessors consider the implementation of the laws and regulations?

  • e) Are the areas of weakness clearly and fully described?

  • f) Is an analysis of effectiveness included?

B. Recommendations and Comments

3. Assessors should provide appropriate recommendations to address each of the areas of weakness identified in the preceding analysis. These recommendations should essentially describe what should be done to address the identified weakness and not how it should be done.

  • Issue for the Panel

  • a) Does the assessment contain recommendations that address the areas where weaknesses have been identified?

C. Ratings

4. The Panel should identify any instances where there is a very noticeable difference between the rating given and the description and analysis of the relevant measure that is required under the FATF Recommendations.

  • Issues for the Panel

  • a) Is there a significant mismatch between the rating given and the written findings?

  • b) Did assessors assess and rate compliance with the Recommendations based on the laws, rules, and measures in place at the time of the assessment mission?

D. Consistency of Assessments

5. In evaluating the overall “consistency of assessments,” the Panel should build on its findings regarding the “quality of assessments” in order to answer the following questions:

  • Issue for the Panel

  • Are the assessment reports reasonably uniform in their format, content and quality, or are there any significant variations (a) between individual assessment reports or (b) between groups of reports?

II. Guidance Notes for the Panel:

  1. Each of the reports subject to the review should be reviewed by a team of experts with legal, financial and law enforcement expertise. While each of the experts should read and be familiar with the entirety of a report, each should review only those parts of a report that are relevant to his/her main area of expertise.

  2. Where an expert was an assessor for one of the reports being reviewed, he/she should not review that report. Panel experts should identify these and any other potential conflicts of interest, advise the other members of the Panel and the Co-ordination Group, and the Panel should take steps to address any such conflict or perceived conflict.

  3. Each expert should apply the “review criteria” to his/her respective section of the report (i.e., legal, financial or law enforcement) and complete a worksheet covering each of the headings of the “Evaluation Criteria” (i.e., Description and Analysis; Recommendations and Comments; and Ratings).

  4. In evaluating the quality of the report under each of the headings, the experts should provide in sufficient detail the reasons for their evaluation and should identify, where applicable, the areas of deficiency.

  5. In evaluating the overall “consistency of assessments,” the Panel should, to the extent possible, seek to identify whether the assessment reports have the same ratings for comparable levels of deficiency. The panel report should be fully transparent and experts should provide sufficient details regarding any identified lack of consistency.

  6. In the event that the Panel needs further guidance on the requirements under the FATF Recommendations, the Methodology or the format and contents of assessment reports, the Panel should consult with the Co-ordination Group.

  7. Deliverables: The technical experts will document their findings in the following forms:

    1. In reviewing individual reports, each expert will complete a “worksheet” incorporating his/her analysis of the relevant sections of the report under the various headings of the “Evaluation Criteria.”

    2. In reviewing each individual report, the team of experts will produce a “summary note” indicating: their evaluation of the quality of the report, and any other comments that are relevant to the Panel’s overall analysis of quality or consistency.

    3. In reviewing the overall “Consistency of Assessments”: the Panel will produce a “summary note” summarizing the Panel’s evaluation of consistency across assessments and/or groups of assessments; explaining the reasons for the Panel’s findings, and providing examples of inconsistencies.

    4. The Panel should build on the three previous deliverables to produce a Final Report answering the two questions put to the Panel: Is the quality of the assessments satisfactory? and Is the approach to the assessments consistent, or are there significant variations between individual assessments and between groups of assessments?

    5. All “Deliverables” will be submitted to the “Co-ordination Group.”

Annex 2

Nature of the material and serious deficiencies in the Description/Analysis section

[Table not reproduced.]

Key: 1 = Incomplete coverage of core criteria; 2 = Incomplete coverage of sector-specific criteria; 3 = Lack of adequate detail; 4 = Inadequate general analysis; 5 = Inadequate discussion of effectiveness; 6 = Inadequate discussion of weaknesses; 7 = Irrelevant issues discussed; 8 = Not all relevant financial institutions covered. Orange = material deficiency; Red = serious deficiency; White = good quality.

Annex 3

Nature of the material and serious deficiencies in the Recommendations section

[Table not reproduced.]

Key: 1 = Recommendation does not adequately address the identified weaknesses; 2 = Recommendation not supported by any description or analysis; 3 = Recommendation too vague or general, or too difficult to identify; 4 = Recommendation conflicts with description or analysis. Orange = material deficiency; Red = serious deficiency; White = good quality.

Annex 4

Nature of the material and serious deficiencies in the Ratings section

[Table not reproduced.]

Key: 1 = Clear mismatch between rating and text; 2 = Lack of clear evidence to support rating; 3 = Rating not supported by any description or analysis; 4 = No (or incomplete) rating given. Orange = material deficiency; Red = serious deficiency; White = good quality.

Annex 5

Notes on the consistency of the format of reports

APG

  • a) The overall presentation of each report is uniform, and follows a normal paragraph format rather than using the standard template. However, the structure of the chapters and sections follows that of the template.

  • b) The reports contain a detailed table of contents and list of acronyms.

  • c) Both reports contain a detailed introductory section providing economic, financial and institutional background.

  • d) There is inconsistency between the reports on whether summary reasons for the ratings are given within the ratings section of the detailed assessment.

  • e) While the treatment of various ratings within the template is generally consistent between the reports, they differ in certain respects from those employed by some of the other groups (e.g. R14, R18, R26, R28 and SRIV).

  • f) Three tables are attached to each report: a summary of the compliance ratings, a summary of effectiveness of the measures under each chapter heading, and a recommended action plan.

CFATF

  • a) One of the reports uses the standard box template for the detailed assessment, while the other two follow the standard sequencing, but in normal paragraph format. One report provides the description, analysis and recommendations on a criterion-by-criterion basis and itemises the sector-specific criteria in a separate template, while the other two use a consolidated approach to dealing with the criteria for each chapter.

  • b) None of the reports contains a table of contents or list of acronyms.

  • c) There is inconsistency within and between the reports on whether summary reasons for the ratings are given within the ratings section of the detailed assessment. One report failed to rate four FATF Recommendations on the grounds that the team was “unable to assess due to limited information”.

  • d) The format for the introductory part of the report varies. In one case there is very little by way of institutional scene-setting, which is deferred until much later in the report.

  • e) All three reports extend (in accordance with the CFATF Ministerial mandate) beyond the 2002 methodology to include an assessment of the CFATF 19 Recommendations and the FATF’s 25 Criteria for the NCCT exercise. Each report contains separate compliance tables for these standards. However, none of the reports makes explicit the extent to which the text has been constructed to take account of these issues specifically, nor is coverage of these issues consistent between reports. This results in some confusion as to why certain issues are discussed and what significance they might have.

  • f) Two of the reports related to jurisdictions that were associated with regional institutions with AML/CFT responsibilities. Neither report specified precisely how the assessment of the role of these bodies was undertaken.

  • g) There are inconsistencies between reports on the treatment of the errors in the template relating to SRVII and SRVIII.

  • h) Five tables are attached to each report: a summary of the compliance ratings, a summary of effectiveness of the measures under each chapter heading, an action plan, a summary of compliance with the CFATF 19 Recommendations, and compliance with the FATF NCCT criteria.

ESAAMLG

  • a) The report adopted a normal paragraph format rather than the special box template, but was structured in line with the template.

  • b) There is no table of contents or list of acronyms.

  • c) An extensive introductory section provides detailed information on the economic, financial sector and institutional background.

  • d) Three tables are attached: a summary of the compliance ratings, a summary of effectiveness of the measures under each chapter heading, and a recommended action plan.

FATF

  • a) With one major exception (see (c) below) and some very minor variations, the reports all have a consistent format presented in normal paragraph form, but following the overall structure of the standard template.

  • b) All three reports contain a detailed table of contents and list of acronyms.

  • c) One of the reports consistently includes a section providing an analysis of effectiveness; the second has a section entitled simply “analysis” in most chapters, but this is not included in some of the chapters on the preventive measures; the third has a section headed “results achieved by current measures”, which in several instances is mostly descriptive rather than an analysis of effectiveness.

  • d) One report is significantly longer than the others. This is not entirely explained by the relative complexity of the jurisdiction, and in large measure appears to arise from a degree of repetition in the presentation of the same material in both the description and recommendation sections.

  • e) Three tables are attached to each report. All three reports contain a summary of the compliance ratings and an action plan. In addition, one contains a summary of effectiveness of the measures under each chapter heading, while the other two include a table identifying problem areas to be addressed. In two of the reports the tables are followed by extensive extracts of the relevant national laws, while the third omits any such information.

GAFISUD

  • a) The reports contain an introductory note indicating that, following a GAFISUD decision of December 2002, the “traditional” reports were converted to adapt to the 2002 methodology. The note goes on to state that the evaluators did not use the methodology during the on-site mission, but integrated it subsequently into the report, even when the evaluation took place after the body had adopted the methodology.

  • b) The reports all use the standard box template for the detailed assessment.

  • c) The format does not include a table of contents or list of acronyms.

  • d) One report provides a summary of the reasons for the rating for some chapters within the rating section of the template; the other two do not provide any summary at all.

  • e) Two of the reports contained extensive introductory sections; the third provided only very brief background information.

  • f) There were a number of inconsistencies between the reports in relation to whether or not specific Recommendations were rated within different sections of the report (i.e. varying treatment of SRI, SRIII, SRV, SRVIII, R18 and R28).

  • g) Three tables are attached to each report: a summary of the compliance ratings, a summary of effectiveness of the measures under each chapter heading, and a recommended action plan.

Moneyval

  • a) All the reports were in a common format, with the exception of one report for which a ROSC-style executive summary had been produced to comply with the IMF’s FSAP requirements. The format adopted throughout for the detailed assessment is a normal paragraph presentation, rather than the box template.

  • b) None of the reports contained a table of contents or list of acronyms.

  • c) Two of the reports contain extensive introductory sections, describing the economic, financial and institutional background to the country. The third provides only a relatively brief introduction, which omits a description of the financial sector.

  • d) Two of the reports provide a separate commentary on the different components of the financial sector under the preventive measures chapters. The third consolidates the description, making it less clear what level of compliance has been achieved by each type of institution.

  • e) There is a consistent practice of providing a brief justification for each of the assessors’ recommendations, but this practice is not extended to the ratings.

  • f) Each report contained three tables appended to the report: a summary rating of compliance with the FATF 40 + 8, a list of key problem areas to be resolved, and a recommended action plan. In all cases these are followed by extensive extracts of relevant national legislation.

  • g) There is inconsistency in the manner in which the typographical error in the original template relating to SRVII/VIII has been addressed. In two reports no rating is provided against the misstated SRVIII, but in the other report the error is corrected and a rating given against SRVII.

IMF

  • a) All the reports use the standard template for the detailed assessment section.

  • b) Two of the reports contain a summarised table of contents; the third has none.

  • c) All three reports used different formats for the introductory section. This had no material impact on the relative quality of the reports.

  • d) None of the reports contained a general description of the country or the financial system. This makes it more difficult to assess the importance of particular issues referred to, or absent, in the text.

  • e) Atypically, SRVII is not rated on the basis that the FATF has set a future date for final implementation of its key provision.

  • f) The reports showed a lack of consistency in the treatment of the IAE. In one case there were no italics to identify where the IAE might have contributed, whereas the others clearly separated out the contribution.

  • g) Three tables are attached to each report: a summary of the compliance ratings, a summary of effectiveness of the measures under each chapter heading, and a recommended action plan. In all cases these are followed by a section providing the authorities’ response to the assessment, but this section has been left blank in one report.

World Bank

  • a) All the reports follow a common format, using the standard box template.

  • b) None of the reports includes a table of contents or list of acronyms.

  • c) One of the reports separately addresses issues on a criterion-by-criterion basis in the descriptive section, while the other two consolidate the text.

  • d) The treatment of the IAE contribution varies between the reports and within individual reports. One indicates that World Bank staff reviewed “the capacity and implementation of criminal law enforcement systems” despite having an IAE on the mission, while the other two state that such work was undertaken by the IAE. Italicised text is not used uniquely to indicate the contribution of the IAE, and there is no explanation of the use of this convention. Different parts within each of the reports were marked with italics, suggesting that there was no consistency in the allocation of responsibilities between the IAE and Bank staff.

  • e) The introductory sections of all three reports go significantly beyond providing background information on the legal, financial and institutional arrangements, as they include analysis of, and recommendations relating to, the AML/CFT regime.

  • f) One report deviates significantly from the normal treatment of the ratings section in each chapter. Instead of stating a specific rating for the relevant FATF Recommendation, it provides a narrative from which the reader may only infer the rating. Specific identifiers are included only in the summary table at the end of the report.

  • g) Three tables are attached to each report: a summary of the compliance ratings, a summary of effectiveness of the measures under each chapter heading, and an action plan. In one case this is followed by a section to record the authorities’ response to the assessment, but this has not been completed. The other two reports have no such section.

Joint IMF/World Bank

  • a) The two reports used quite different formats. One provided description, recommendations and ratings for each criterion individually, while the other followed the more normal template. It is understood that the former was completed before the structure of the standard box template had been agreed.

  • b) Neither report contained a general description of the country, its economy or the financial system. This makes it more difficult to assess the importance of particular issues referred to, or absent, in the text.

  • c) Neither report contained a table of contents or list of acronyms.

  • d) Three tables are attached to one of the reports: a summary of the compliance ratings, a summary of effectiveness of the measures under each chapter heading, and an action plan. The other report contains only the summary of compliance ratings.

1

Twelve-Month Pilot Program of Anti-Money Laundering and Combating the Financing of Terrorism (AML/CFT) Assessments—Joint Report on the Review of the Pilot Program.

2

Only those FSRBs that participated in the pilot program, that is, APG, CFATF, ESAAMLG, GAFISUD, and MONEYVAL. OGBS did not participate in the pilot program, and two FSRBs, EAG and MENAFATF, have been established since the pilot program.

3

This was compounded by the rating scale used, which at the time differed from the one used for other standards.

4

PIN 04/33, April 2, 2004.

5

See PIN 05/47, April 6, 2005, for a discussion of the policy on updates in the context of FSAPs and FSAP updates, and PIN 05/106, August 8, 2005, for a discussion of the policy on updates in the context of the standards and codes initiative. A key issue has been to lower the costs of the initiatives through improved prioritization and streamlining of assessments and updates that are better tailored to country circumstances.

6

The policy was made operational through an exchange of letters with the Presidents of the FATF and participating FSRBs in July 2004.

7

In instances where the findings from FATF/FSRBs were not available, AML/CFT was covered in the FSSAs/OFC assessment reports using other available information on AML/CFT, such as from Basel Core Principles assessments or in one case in a supplementary statement at the Board meeting.

8

In one case where the FSAP called for an AML/CFT assessment out of cycle with the FATF schedule, the country agreed to the Fund conducting the assessment, but declined to have the assessment presented to the FATF as a mutual evaluation, preferring instead to retain the schedule for the FATF assessment even though this would involve a second assessment.

9

PIN 05/106, August 8, 2005.

10

PIN 05/47, April 6, 2005.

11

For full reassessments of other financial sector codes and standards, there is no policy on the timing of reassessments, which are decided on a case-by-case basis. On average, reassessments have taken place less frequently than every five years.

12

A factual update consists of an analysis of key developments regarding observance of a standard. It does not include a reassessment of the underlying ratings.

It results in a ROSC update, which complements a previous ROSC. See The Standards and Codes Initiative—Is It Effective? And How Can It Be Improved?, page 14 (http://www.imf.org/external/np/pp/eng/2005/070105a.pdf).

13

Introduction to the 2003 FATF Recommendations.

14

The IMF and World Bank describe their process as “assessments”, while the FATF and FSRBs use the term “mutual evaluations”. For ease of reference in this report the term “assessment” is used to refer to both.

15

This methodology was agreed in October 2002 by the FATF, IMF and World Bank, but was only progressively endorsed by the FSRBs over the period of the pilot programme. This methodology has now been superseded by the 2004 methodology, introduced following the revision of the FATF Recommendations in June 2003. No assessments using the 2002 methodology have been undertaken since early-2005.

16

Each report contained fourteen chapters (see “General Issues” below).

17

The IAE was also nominally responsible for addressing the “non-macro-relevant” financial intermediaries, which also fell outside the Fund/Bank mandate.

18

The list comprised a total of about sixty reports undertaken by the APG, CFATF, ESAAMLG, FATF, GAFISUD, Moneyval, IMF, World Bank and jointly by the IMF and World Bank.

19

It should also be noted that three of the reports within the sample were produced by two of the FSRBs with technical assistance from the World Bank.

20

As indicated in the discussion on the individual group reports, some groups adopted the stylised box template, but others simply took the section headings for use within a standard narrative format.

21

As far as possible throughout the report the panel has adopted terms, or variations thereof, used within its terms of reference and instructions.

22

6% of these sections contained no recommendations and it was not possible to determine whether or not this was an appropriate response.

23

4% of these chapters contained no rating due to inconsistent treatment of errors in the template.

24

In some cases there may have been a statement in the introductory sections of the report to indicate the absence of any legislation on terrorist financing, but this was not necessarily carried through into the body of the detailed assessment.

25

All the reports were provided to the panel in English, although in some cases the text adopted by the relevant FSRB may have been in another language.

26

One chapter may, of course, be affected by more than one deficiency, resulting in the sum of this column exceeding 100%.

27

See also the discussion later under Review of Consistency – Format of reports.

28

The panel notes that the 2002 methodology was not adopted by GAFISUD until July 2003, and that all the reports were originally completed in accordance with previous procedures, but were subsequently converted in line with the methodology as a desk-based exercise.

29

See further the section above on “limitations to the terms of reference”.

30

It should also be noted that the description and analysis related to the specific criteria in the 2002 methodology, whereas the ratings were tied to the FATF 40 + 8 Recommendations. In some cases, it was not entirely clear whether the ratings being given were more a reflection of the position relative to the criteria or strictly to the Recommendations when the latter were narrower in scope.

31

Although this becomes an extremely difficult judgment call if the implication of a deficiency is that the assessor should have considered a different rating within one chapter that might impact the composite rating for the relevant FATF Recommendation.

32

The panel notes that the ratings in the schedule were a composite achieved by mapping ratings from a number of relevant sections within the report, and that there could be no presumption that the rating given to any one component would necessarily match the final composite, which would depend on the assessors’ perception of the relative importance of the underlying criteria. Therefore, failure to apply ratings in the individual sections was regarded as a deficiency.

33

The chapters on enforcement contained relatively few deficiencies, but the practice of five of the nine assessor groups had been not to provide a rating in this section.

34

It has been pointed out to the panel that, in the case of one of the federal reports, the assessor body was aware of the complexity and implications of its approach, but that a clear decision was taken to provide one compliance rating for a single legal jurisdiction.

35

The definition of “largely compliant” in the methodology includes the possibility of “when corrective actions to achieve full observance with the requirement are readily identified and have been scheduled within a reasonable period of time”. The panel did not consider that this extends to giving value to draft legislation that was targeted for enactment within a specified timeframe, since the final text of a draft law and the actual timeframe for enactment will always be uncertain until passage through parliament has been completed.

36

This does not mean that the 2002 methodology did not give rise to some significant problems of interpretation and application, but simply that there appear to be no factors within it that consistently gave rise to poor quality output by all the assessor bodies, with the possible exception of the treatment of effectiveness mentioned elsewhere in the report.

37

See also the earlier discussion of the typographical and “mapping” errors in the template that gave rise to inconsistent treatments.

38

This last report was undertaken very early during the pilot programme, and it is understood that only this criterion-by-criterion template was available at the time.
