CHAPTER 3 Cross-Cutting Issues and Lessons from the Crisis

Author(s):
International Monetary Fund. Independent Evaluation Office
Published Date:
July 2009

Previous IEO Annual Reports identified common themes emerging from earlier evaluations. The FY2007 Annual Report emphasized the need for:

  • (1) better management of institutional change at the IMF;

  • (2) greater clarity about the goals of various IMF initiatives and a properly aligned external communications policy;

  • (3) strengthened partnerships between the IMF and other international financial institutions (IFIs) and donors; and

  • (4) clearer metrics for assessing the impact of the IMF’s policy advice and whether the IMF is meeting its commitments to countries.

The FY2008 Annual Report added a new theme, based largely on the findings of the evaluation on the Governance of the IMF, namely:

  • (5) the need for the IMF to be more explicit about who is accountable for what, and to whom.

The rapid unfolding of the global financial crisis has dramatically increased the importance of this theme and of the IMF carrying out its activities in a professional and fully accountable manner.

Helping to prevent systemic crises, and responding to them when they occur, is a central part of the IMF’s mandate. When economies are growing and the financial system seems strong, the demand for accountability is weak. But today the world can no longer afford a lack of accountability at the Fund. Crisis prevention requires continued vigilance by all, including by the IEO. Evaluation clearly has an important role in identifying opportunities and threats to the IMF’s ability to carry out its mission—helping to prevent catastrophic crises and to manage them when prevention fails.

It is worth considering what the IEO has already learned from the crisis:

  • (1) There is a need to be even more pointed in challenging the evenhandedness of Management and staff in dealing with members. A lack of evenhandedness may have turned out to be the Fund’s Achilles’ heel in pursuing its mission of global stability. In 2005, the IEO recommended that the IMF make it more difficult for large countries to opt out of the Financial Sector Assessment Program (FSAP).

  • (2) On governance, perhaps the IEO should examine more critically the Fund’s ability to “speak truth to power,” and highlight the risks of not doing so when the members that pose the greatest systemic risk are also the largest shareholders.

  • (3) Both the IMF and the IEO must be bolder in identifying and highlighting downside risks—the Fund in the context of surveillance-related assessments such as the FSAP, and the IEO in highlighting Management and staff failures to follow up on evaluation recommendations.

  • (4) There is a need to follow up on evaluation findings. In the case of the financial crisis, evaluations had clearly identified the need to strengthen the Fund’s analysis of macro-financial linkages, the FSAP opt-out loophole as a major issue, and the need for much greater connectivity between bilateral and multilateral surveillance. But little happened after these problems were identified.

The IEO’s evaluations have increasingly sought to identify the real drivers of decision making within the Fund—that is, the Executive Board, Management, and staff—and to spell out the incentives and other factors behind the problems identified. However, effective evaluation needs to go further: it must develop a constituency for change that can use evaluation findings for advocacy. This is why transparency and outreach are so important for the IEO. To be effective, this constituency also needs to know the roles and responsibilities within the institution, and it needs clear metrics for tracking whether undertakings and goals are being met and to what effect. Evaluation needs to provide these metrics as well.

Once all of these ingredients are lined up—evaluation evidence about a problem, an understanding of why it exists, a constituency for change, an understanding of who is responsible for what, and clear indicators for monitoring progress—evaluators have a good chance of being effective. But if we merely develop the evaluation evidence, our efforts will be measured only in reports, not in effective learning or change.
