Annex I Technical Review of the Quality and Consistency of AML/CFT Assessments
TECHNICAL REVIEW OF THE QUALITY AND CONSISTENCY OF AML/CFT ASSESSMENTS
REPORT BY THE INDEPENDENT PANEL OF EXPERTS
(On behalf of the IMF and World Bank)
6 October 2005
Twelve-Month Pilot Program of Anti-Money Laundering and Combating the Financing of Terrorism (AML/CFT) Assessments—Joint Report on the Review of the Pilot Program.
Only those FSRBs that participated in the pilot program, that is, APG, CFATF, ESAAMLG, GAFISUD, and MONEYVAL. OGBS did not participate in the pilot program, and two FSRBs, EAG and MENAFATF, have been established since the pilot program.
This was compounded by the rating scale used, which at the time differed from the one used for other standards.
PIN 04/33, April 2, 2004.
See PIN 05/47, April 6, 2005, for a discussion of the policy on updates in the context of FSAPs and FSAP updates, and PIN 05/106, August 8, 2005, for a discussion of the policy on updates in the context of the standards and codes initiative. A key issue has been to lower the costs of the initiatives through improved prioritization and streamlining of assessments and updates that are better tailored to country circumstances.
The policy was made operational through an exchange of letters with the Presidents of the FATF and participating FSRBs in July 2004.
In instances where the findings from FATF/FSRBs were not available, AML/CFT was covered in the FSSAs/OFC assessment reports using other available information on AML/CFT, such as from Basel Core Principles assessments or in one case in a supplementary statement at the Board meeting.
In one case where the FSAP called for an AML/CFT assessment out of cycle with the FATF schedule, the country agreed to the Fund conducting the assessment, but declined to have the assessment presented to the FATF as a mutual evaluation, preferring instead to retain the schedule for the FATF assessment even though this would involve a second assessment.
PIN 05/106, August 8, 2005.
PIN 05/47, April 6, 2005.
For full reassessments of other financial sector codes and standards, there is no policy as to timing; reassessments are decided on a case-by-case basis. On average, they have been undertaken less frequently than every five years.
A factual update consists of an analysis of key developments regarding observance of a standard. It does not include a reassessment of the underlying ratings.
It results in a ROSC update, which complements a previous ROSC. See The Standards and Codes Initiative—Is It Effective? And How Can It Be Improved?, p. 14 (http://www.imf.org/external/np/pp/eng/2005/070105a.pdf).
Introduction to the 2003 FATF Recommendations.
The IMF and World Bank describe their process as “assessments”, while the FATF and FSRBs use the term “mutual evaluations”. For ease of reference, the term “assessment” is used in this report to refer to both.
This methodology was agreed in October 2002 by the FATF, IMF, and World Bank, but was only progressively endorsed by the FSRBs over the period of the pilot program. It has since been superseded by the 2004 methodology, introduced following the revision of the FATF Recommendations in June 2003. No assessments using the 2002 methodology have been undertaken since early 2005.
Each report contained fourteen chapters (see “General Issues” below).
The IAE was also nominally responsible for addressing the “non-macro-relevant” financial intermediaries, which also fell outside the Fund/Bank mandate.
The list comprised a total of about sixty reports undertaken by the APG, CFATF, ESAAMLG, FATF, GAFISUD, MONEYVAL, IMF, World Bank, and jointly by the IMF and World Bank.
It should also be noted that three of the reports within the sample were produced by two of the FSRBs with technical assistance from the World Bank.
As indicated in the discussion of the individual group reports, some groups adopted the stylized box template, while others simply took the section headings for use within a standard narrative format.
As far as possible throughout the report the panel has adopted terms, or variations thereof, used within its terms of reference and instructions.
6% of these sections contained no recommendations, and it was not possible to determine whether or not this was an appropriate response.
4% of these chapters contained no rating due to inconsistent treatment of errors in the template.
In some cases there may have been a statement in the introductory sections of the report to indicate the absence of any legislation on terrorist financing, but this was not necessarily carried through into the body of the detailed assessment.
All the reports were provided to the panel in English, although in some cases the text adopted by the relevant FSRB may have been in another language.
One chapter may, of course, be affected by more than one deficiency, resulting in the sum of this column exceeding 100%.
See also the discussion later under Review of Consistency – Format of reports.
The panel notes that the 2002 methodology was not adopted by GAFISUD until July 2003, and that all the reports were originally completed in accordance with previous procedures, but were subsequently converted in line with the methodology as a desk-based exercise.
See further the section above on “limitations to the terms of reference”.
It should also be noted that the description and analysis related to the specific criteria in the 2002 methodology, whereas the ratings were tied to the FATF 40 + 8 Recommendations. In some cases, it was not entirely clear whether the ratings being given were more a reflection of the position relative to the criteria or strictly to the Recommendations when the latter were narrower in scope.
Although this becomes an extremely difficult judgment call if the implication of a deficiency is that the assessor should have considered a different rating within one chapter that might impact the composite rating for the relevant FATF Recommendation.
The panel notes that the ratings in the schedule were a composite achieved by mapping ratings from a number of relevant sections within the report, and that there could be no presumption that the rating given to any one component would necessarily match the final composite, which would depend on the assessors’ perception of the relative importance of the underlying criteria. Therefore, failure to apply ratings in the individual sections was regarded as a deficiency.
The chapters on enforcement contained relatively few deficiencies, but the practice of five of the nine assessor groups was not to provide a rating in this section.
It has been pointed out to the panel that, in the case of one of the federal reports, the assessor body was aware of the complexity and implications of its approach, but that a clear decision was taken to provide one compliance rating for a single legal jurisdiction.
The definition of “largely compliant” in the methodology includes the possibility of “when corrective actions to achieve full observance with the requirement are readily identified and have been scheduled within a reasonable period of time”. The panel did not consider that this extends to giving value to draft legislation that was targeted for enactment within a specified timeframe, since the final text of a draft law and the actual timeframe for enactment will always be uncertain until passage through parliament has been completed.
This does not mean that the 2002 methodology did not give rise to some significant problems of interpretation and application, but simply that there appear to be no factors within it that consistently gave rise to poor quality output by all the assessor bodies, with the possible exception of the treatment of effectiveness mentioned elsewhere in the report.
See also the earlier discussion of the typographical and “mapping” errors in the template that gave rise to inconsistent treatments.
This last report was undertaken very early in the pilot program, and it is understood that only this criterion-by-criterion template was available at the time.