
A Guide to Monitoring and Evaluation of Capacity-Building Interventions in the Health Sector in Developing Countries

March 2003

MEASURE Evaluation Manual Series, No. 7

MEASURE Evaluation Project

Anne LaFond, MS, JSI Research & Training Institute, Inc.

Lisanne Brown, PhD, Tulane University

The manual series is made possible by support from USAID under the terms of Cooperative
Agreement HRN-A-00-97-00018-00. The opinions expressed are those of the authors, and do
not necessarily reflect the views of USAID.

March 2003. Printed on recycled paper.


Other Titles in the Manual Series

NO. 1 Evaluando Proyectos de Prevención de VIH/SIDA: Un Manual con Enfoque en las Organizaciones No Gubernamentales. July 2000.

NO. 2 Quick Investigation of Quality (QIQ): A User's Guide for Monitoring Quality of Care. February 2001.

NO. 3 Sampling Manual for Facility Surveys for Population, Maternal Health, Child Health and STD Programs in Developing Countries. July 2001.

NO. 4 Measuring Maternal Mortality from a Census: Guidelines for Potential Users. July 2001.

NO. 5 A Trainer's Guide to the Fundamentals of Monitoring and Evaluation for Population, Health, and Nutrition Programs. 2002.

NO. 6 Compendium of Indicators for Evaluating Reproductive Health Programs. August 2002.

Recommended Citation
LaFond, Anne and Brown, Lisanne. A Guide to Monitoring and Evaluation of Capacity-Building Inter-
ventions in the Health Sector in Developing Countries. MEASURE Evaluation Manual Series, No. 7.
Carolina Population Center, University of North Carolina at Chapel Hill. 2003.
Acknowledgements

We wish to acknowledge the contributions and support of a number of individuals and institutions that enabled the successful completion of this document. Ray Kirkland and Krista Stewart of USAID were instrumental in the conception of the Guide. Sara Pacque-Margolis of USAID provided the support to see it through to completion. Our sincere gratitude also goes to several technical reviewers for their constructive and instructive comments on earlier versions of the Guide. They are: Alfredo Fort (PRIME II), Diane Catotti (IPAS), Alison Ellis (MSH), Leo Ryan (CSTS/ORC Macro), Eric Sarriot (CSTS/ORC Macro), Fred Carden (IDRC), and Doug Horton (ISNAR). Kate Macintyre contributed her ideas and encouragement, as well as providing the SAIDIA case material. Catherine Elkins and Kate Macintyre contributed to the MEASURE working paper on measuring capacity in the health sector, which provided a basis for this guide. Thom Eisele and Cira Endley reviewed and analyzed capacity-measurement tools and practices. Case examples of capacity measurement were developed with the cooperation of PRIME/INTRAH; SAIDIA; NGO Networks for Health; and PATH (in a workshop setting). Finally, we are grateful to the many adventurous organizations and individuals working to build capacity in the health sector in developing countries. Their experimentation in capacity-building monitoring and evaluation is commendable and deserves further study. This guide would not have been possible without the support of the Offices of Health and Population at the United States Agency for International Development (Contract Number: HRN-A-00-97-00018-00).

Prologue

Capacity development¹ has moved to center stage of the agendas of development organizations. Substantial sums are being invested in capacity-building programs. Yet, their design and management leave much to be desired. Marred by untested, unrealistic assumptions, many programs fall short of their goals and expectations.

“Evaluations are needed to test the theories and assumptions on which capacity development
programs are based, to document their results, and to draw lessons for improving future pro-
grams. However, few capacity development programs have been systematically and thoroughly
evaluated” (Horton et al., 2000).

¹ Capacity building and capacity development are used interchangeably throughout this document.

List of Acronyms and Abbreviations

API AIDS Program Effort Index


BASICS Basic Support for Institutionalizing Child Survival
CHW Community Health Worker
DHS Demographic and Health Survey
DOSA Discussion-Oriented Organization Self-Assessment
FHI Family Health International
FP Family Planning
FPE Family Planning Effort
FPEI Family Planning Effort Index
FPMD Family Planning Management Development Project
FPPE Family Planning Program Effort
HR Human Resources
IAI Institutional Assessment Instrument
IDA Institutional Development Assessment
IDRC International Development Research Centre
IEC information, education, and communication
IHFA Integrated Health Facility Assessment
IISD International Institute for Sustainable Development
ISNAR International Service for National Agricultural Research
M&E Monitoring and Evaluation
MDA Management Development Assessment
MEASURE Monitoring and Evaluation to ASsess and Use REsults
MES Materials, Equipment, and Supplies
MFSS Management/Financial Sustainability Scale
MIS Management Information System
MOH Ministry of Health
MOST Management and Organizational Sustainability Tool
MSH Management Sciences for Health
NGO Nongovernmental Organization
OCAT Organizational Capacity Assessment Tool
OSI Outcome Sustainability Index
PASCA Program for NGOs that provide HIV/AIDS services in Central America
PHR Partnership for Health Reform
PI performance improvement
PROSE Participatory, Results-Oriented Self-Evaluation
PSI Program Sustainability Index
RH Reproductive Health
SAIDIA Local Kenyan NGO
SFPS Santé Familiale et Prévention du SIDA
STD Sexually Transmitted Disease
TOT Training of Trainers
WHO World Health Organization



Table of Contents

Acknowledgements.......................................................................................................................... i

Prologue ......................................................................................................................................... iii

List of Acronyms and Abbreviations.............................................................................................. v

About This Guide............................................................................................................................ 1


Structure of the Guide................................................................................................................. 2

Introduction..................................................................................................................................... 3
Defining Capacity-Building Monitoring and Evaluation ........................................................... 4
Capacity-Building M&E Has Many Roles ................................................................................. 5

Part 1. Concepts, Definitions, and Attributes of Capacity and Capacity Building ........................ 7
Why Build Capacity?.................................................................................................................. 7
What is Capacity Building? ........................................................................................................ 7
Useful Definitions ...................................................................................................................... 7
Attributes of Capacity and Capacity Building............................................................................ 7
Capacity Building Is Behavior Change ...................................................................................... 9
Why Monitor and Evaluate Capacity Building?....................................................................... 11
What Is Different about M&E of Capacity Building?.............................................................. 11
Implications for Capacity-Building M&E ................................................................................ 12
Summary for Managers and Evaluators.................................................................................... 12

Part 2. Understanding the Role of Capacity in the Health Sector: Introducing a Conceptual
Framework ........................................................................................................................ 15
Overview Framework: The Role of Capacity in the Health Sector.......................................... 15
Capacity at a Single Level ........................................................................................................ 17
Defining Variables Related to Capacity and Performance ....................................................... 18
Using These Conceptual Frameworks ...................................................................................... 25
Summary for Managers and Evaluators.................................................................................... 26

Part 3. Monitoring and Evaluating Capacity-Building Interventions........................................... 27


STEP 1 Define the Purpose of the Evaluation......................................................................... 28
STEP 2 Define Performance Objectives.................................................................................. 30
Defining Performance........................................................................................................... 30
STEP 3 Mapping Capacity: Build a Conceptual Framework for a Specific Capacity-Building
Intervention ................................................................................................................ 32
When to Map Capacity ......................................................................................................... 32
How to Map Capacity ........................................................................................................... 33
Single-Level Capacity Mapping ........................................................................................... 34
Multi-Level Capacity Mapping............................................................................................. 36
Dealing with Context ............................................................................................................ 36
Interpreting and Using Capacity Maps ................................................................................. 41



STEP 4 Identify Capacity Indicators ....................................................................................... 44
What Are Capacity Indicators?............................................................................................. 44
Working with Capacity Indicators ........................................................................................ 45
Lessons for Indicator Development ...................................................................................... 45
STEP 5 Identify Appropriate Methodological Approach and Sources of Data....................... 55
Methodological Approaches and Challenges........................................................................ 55
Tackling Methodological Challenges ................................................................................... 56
Sources of Data ..................................................................................................................... 57
Tools for Measuring Capacity at Different Levels ............................................................... 58
STEP 6 Develop an Implementation and Dissemination Plan ................................................ 65

Part 4. Summary Checklist: Steps for Designing a Capacity-Building M&E Plan ..................... 67
Checklist: Steps in Designing a Capacity-Building M&E Plan ............................................... 67

Annex A. Example of Scoring Used for Measuring Capacity Building in Training, PRIME I . 71

Annex B. Example of Results of PRIME Training Capacity Index .......................................... 75

Annex C. Key Internet Resources for Monitoring and Evaluating Capacity-Building Interventions ............................................. 77

Annex D. Capacity Mapping and Performance Improvement Compared .................................. 91

Glossary ........................................................................................................................................ 93

Bibliography ................................................................................................................................. 95

Tables

Table 1 The Use of Assessment vs. M&E in Capacity-Building Intervention ........................ 4

Table 2 Capacity and Performance Variables Defined .......................................................... 20

Table 3 Questions Posed by Different Types of Capacity-Building M&E ............................ 28

Table 4 Examples of Capacity Indicators in Current Use in Health Programs ...................... 52

Table 5 Examples of Performance Indicators in Current Use in Health Programs................ 53

Table 6 Example of a Table of Data Sources for an Organizational Assessment .................. 60

Table 7 Capacity Measurement Tools .................................................................................... 61

Figures

Figure 1 Overview of Capacity in the Health Sector............................................................... 16

Figure 2 Health System Capacity ............................................................................................ 19

Figure 3 Health Service and Civil Society Organization Capacity ......................................... 21

Figure 4 Health Program Personnel Capacity.......................................................................... 23

Figure 5 Individual/Community Capacity .............................................................................. 24

Boxes

Box 1 Capacity Measurement Case Examples ...................................................................... 1

Box 2 Measuring the Effectiveness of Capacity Building in Training: PRIME I ................. 8

Box 3 Examples of Organizational Capacities .................................................................... 10

Box 4 Six Steps for Developing a Capacity-Building M&E Plan ....................................... 27

Box 5 DO'S AND DON’TS of Developing an M&E Plan for a Capacity-Building Intervention ............................................. 29

Box 6 Characteristics of a Good Performance Objective .................................................... 31

Box 7 The Process of Capacity Mapping............................................................................. 32



Box 8 Questions to Guide Discussion for Capacity Mapping ............................................. 34

Box 9 Guidance on Capacity Mapping ................................................................................ 35

Box 10 Questions to Guide Discussion on the External Environment and Its Influence on
Organizational Capacity ............................................................................................ 42

Box 11 Examples of Capacity Indicators from Non-health Sector Capacity-Building Interventions ............................................. 46

Box 12 PASCA: From Self-Assessment to External Assessment ......................................... 56

Box 13 Advantages and Disadvantages of Self-Assessment and External Assessment Techniques ............................................. 59

Maps

Map 1 Organizational Capacity Map - Single Level ............................................................ 37

Map 2 Organizational Capacity Map - Single Level ............................................................ 38

Map 3 Organizational Capacity Map - Single Level ............................................................ 39

Map 4 Community Capacity Map on Multiple Levels......................................................... 40

Map 5a Mapping Capacity First Iteration .............................................................................. 48

Map 5b Mapping Capacity Second Iteration .......................................................................... 49

Map 6 Community Capacity Map on Multiple Levels with Indicators................................ 50

About This Guide

This guide has grown out of the collective experience of health and development organizations working to build health sector capacity in developing countries. The focus of the Guide is the measurement of capacity for the purpose of monitoring and evaluating capacity-building interventions. It responds to a demand among public health planners, evaluators, and practitioners for advice on assessing the many aspects of health programming that fall under the rubric of capacity building.

The purpose of this guide is to assist health planners and evaluators to

• gain a clear understanding of the concepts of capacity and capacity building
• critically evaluate the strengths and limitations of current approaches to capacity measurement
• design a capacity-building M&E plan that outlines a systematic approach to measuring capacity and assessing the results of capacity-building interventions in the health sector

The Guide was developed based on a request from the United States Agency for International Development under the MEASURE Evaluation Project.

Many readers of this guide may not be aware that there is a lack of widespread experience in the field of capacity-building M&E in the health sector. Capacity-building programs proliferate. Yet, methods for testing and tracking their results are rare. We have therefore based the advice in this guide mainly on lessons learned from current practices in capacity assessment (see Table 1 for discussion of the differences between assessment and M&E). Sources include: a review of the state of the art of capacity measurement (Brown, LaFond, and Macintyre, 2001); a review of capacity-building measurement tools and indicators; formal and informal consultations with practitioners; and an in-depth exploration of four different capacity measurement experiences (Box 1). The Guide also draws on lessons learned about capacity-building monitoring and evaluation in other sectors, such as agriculture and housing, and on new evaluation approaches designed to support learning in development programming (Horton et al., 2000; Morgan, 1997; Earl, Carden, and Smutylo, 2001).

Box 1: Capacity Measurement Case Examples

• SAIDIA, a health and community development nongovernmental organization (NGO) in Kenya;
• the PRIME I and PRIME II index of capacity of training institutions;
• A Workshop on Sustainability and Capacity Building hosted by PLAN International in May 2001, in Dakar, Senegal; and
• MEASURE Program Technical Assistance to NGO Networks for Health.

From the discussion that follows on the concept of capacity building and capacity measurement techniques readers will come to understand why this guide is neither prescriptive nor exhaustive. Standardized approaches to monitoring and evaluating capacity-building interventions are not found because of the wide variety of circumstances in which capacity building takes place. Capacity building has been applied to actions as distinct as policy formulation, supplying basic health commodities, and identifying danger signs of malnutrition. In short, capacity building demands adaptation to its context and capacity-building evaluation techniques must reflect this potential variation. The Guide acknowledges this and other challenges by providing a link between the theoretical and practical aspects of capacity measurement in the health sector and offering an approach to monitoring and evaluation that is relevant in a variety of settings.

It is also important to keep in mind that the monitoring and evaluation of capacity building, while singled out for discussion in this document, is normally part of an overall plan or system for monitoring and evaluating a health program or health sector intervention. This guide should therefore be used as a tool for orienting planners to capacity measurement in the context of developing a project-level or overall program-level performance-monitoring plan (particularly programs where sustainability and scaling-up are a central concern). As such, it will aid the process of thinking through the role capacity and capacity measurement play in improving performance.

Structure of the Guide

Part 1 of the Guide briefly discusses attributes of capacity and capacity building, and how these attributes influence M&E approaches.

Part 2 introduces a series of conceptual frameworks for understanding the role of capacity in the health sector and illustrates possible capacity variables (&) at each level of the health system.

The heart of the Guide is found in Part 3, which suggests a 6-step approach to developing an M&E Plan for Capacity-building that centers on the process of capacity mapping (&). Mapping involves the construction of a visual framework that helps the evaluator understand relationships (or assumed relationships) among the many factors that contribute to or detract from capacity and, ultimately, performance. Mapping can be used to identify untapped, constrained, or missing elements of capacity. It also can be used to guide intervention choices and to build a monitoring and evaluation framework. Part 3 also comments on indicator selection for M&E and practical lessons from field experience, as well as methods and data sources, and dissemination of results. The indicators and tools referenced in this section are provided as examples to stimulate thinking and discussion about capacity-building and M&E strategies rather than as prescribed approaches.

Part 4 concludes the Guide with a summary and checklist for developing a capacity-building M&E plan. Annexes contain details of M&E approaches and a summary of Web-based resources on capacity-building M&E. The Glossary at the end of the Guide explains many of the technical words and jargon used in the field of monitoring and evaluation. In the text they are marked with the following symbol: &

Introduction

Over the last decade, capacity building has become as central to the business of developing health systems in lesser-developed countries as providing financial resources and applying the latest science. Capacity is believed to contribute directly to improving performance in the health sector, and is thought to play an important role in sustaining adequate performance over time. Despite increased attention to capacity, experience in gauging the effectiveness of capacity-building interventions in the health sector is still limited. Unlike other aspects of health-related monitoring and evaluation (M&E), capacity measurement is not supported by a comprehensive history of theory and practice. While methods for monitoring and evaluating health service coverage, access, and quality are well advanced, there are few tried and true approaches for capturing “the interim state or process that reflects the ability to achieve and sustain coverage, access, and quality over time” (Brown, LaFond, and Macintyre, 2001). Thus, capacity measurement in the health sector is both new and experimental.

There are intrinsic challenges to measuring capacity that are reflected in the concept and role of capacity itself. For example, capacity derives its relevance from the contribution it makes to performance. There are endless areas where performance is required in the health sector, and an equally wide range of possible capacity variables that influence performance. In addition, contextual factors (or factors outside the control of most health sector actors &) can have a strong influence on capacity or the desired outcome of a capacity-building intervention. These and other characteristics of capacity and capacity building explain why there are no gold standards for capacity-building M&E. There is no short list of valid indicators of capacity in the health sector, nor are there standardized measurement tools applicable to every capacity-building experience.

Many of these challenges have also discouraged widespread testing of methods of capacity-building monitoring and evaluation. The extent of experience is so limited that, at this stage, capacity measurement is considered to be an art rather than a science. Evaluators must therefore approach M&E of capacity-building interventions with a willingness to test strategies and share what they have learned in order to build a body of theory and practice.

Despite the conceptual and practical challenges of tackling capacity measurement, there are a number of reasons to put energy and time into developing a sound approach to monitoring and evaluation of capacity-building interventions. The most significant reason is that measurement is an important part of achieving capacity-building and performance goals. Monitoring and evaluation can help health program professionals understand the relationship between capacity-building interventions, capacity and performance, and focus the strategies used for improving performance. Specifically, monitoring and evaluation can help answer a range of questions about

• the process of capacity change (how capacity building takes place),
• capacity as an intermediate step toward performance (what elements of capacity are needed to ensure adequate performance), and
• capacity as an outcome (whether capacity building has improved capacity).

Table 1: The Use of Assessment vs. M&E in Capacity-Building Intervention

Capacity Assessment
- Purpose: diagnostic or descriptive; defines constraints
- Measures gap between actual and desired performance
- Findings are used for internal purposes (design and planning)
- One-time measurement
- Action oriented
- Looks broadly at existing situation

Capacity Monitoring and Evaluation
- Purpose: predictive; for accountability or comparisons; gauges results
- Measures results or progress toward desired results
- Findings are used for internal and external purposes (management; accountability)
- Often uses repeat measurement
- Action, analysis and accountability oriented
- Uses conceptual frameworks to discern relationships between variables

In this guide, when we talk about monitoring and evaluation of capacity building or capacity development, we are mainly interested in the last question, that is, measuring changes in capacity and linking them (directly or indirectly) to capacity-building interventions.

Defining Capacity-Building Monitoring and Evaluation

Most capacity measurement experience to date has emphasized capacity assessment rather than M&E (Brown, LaFond, and Macintyre, 2001). Assessment normally takes place at the beginning of an intervention as part of an organizational diagnosis or formative design process. Evaluators can learn a great deal from capacity assessment tools (as we have in developing this guide). However, it is worth noting that while capacity assessment is an important first step in planning a capacity-building intervention, capacity-building M&E differs from assessment by virtue of its explicit focus on measuring change. Capacity-building monitoring and evaluation tracks or identifies changes in capacity that take place in the course of a capacity-building intervention. It uses stated objectives for capacity building and performance improvement as a reference for gauging progress. As such, it guides program management as well as informs funding agencies about the results of capacity-building investments. A final aspect of M&E (as opposed to diagnosis or assessment) is the use of conceptual frameworks that make assumptions about the relationship between different variables that influence capacity and performance. Table 1 describes many of the differences between capacity assessment and M&E.

“In the evaluation of capacity development, the impact metaphor should be avoided. The militaristic impact metaphor fails to capture the essential features of capacity development, which is a process of change and growth.” (Horton, 2002).

Capacity monitoring normally would be used to understand the effectiveness and efficiency of a capacity-building intervention during implementation (i.e., is capacity improving and at what cost?), to contribute to strategic or operational decisions related to capacity building, or to enable a periodic look at a program or system. Capacity evaluation is normally more complex than monitoring, and is conducted to gain understanding of the relationship between capacity-building interventions and capacity outcomes, or the links between capacity and performance variables. The term “impact evaluation” & is not appropriate or useful in the context of capacity-building M&E because of the difficulty of quantifying many elements of capacity and attributing capacity change to any single intervention or even a range of interventions.

Capacity-Building M&E Has Many Roles

A final introductory observation relates to the role that measurement plays in a capacity-building intervention. Many experienced capacity-building practitioners feel that capacity measurement cannot be separated from the process of building capacity itself. Program managers often use capacity assessment tools to raise awareness about capacity problems, stimulate commitment to improving capacity among stakeholders, and set self-determined benchmarks. The focus is internal. In practice, capacity-building M&E is often encouraged (or required) by external stakeholders to be used mainly for accountability. Defining the purpose of M&E is therefore not always easy for managers and evaluators. The discussion that follows considers the pros and cons of these various approaches and informs critical measurement choices. It begins with a discussion of the rationale for capacity-building M&E and explores the concept of capacity and its role in improving performance.

Part 1 Concepts, Definitions, and Attributes of Capacity and
Capacity Building

Why Build Capacity?

In the context of results-based programming, resources are invested in different aspects of the health sector with the ultimate aim of enhancing health system performance and improving the health of populations. Translating these resources into sustained performance often requires new or improved capabilities in individuals and organizations (including communities) operating in the health sector. Capacity represents the potential for using resources effectively and maintaining gains in performance with gradually reduced levels of external support.

What is Capacity Building?

Used alone, the term capacity building is intangible and vague. What constitutes capacity building in practice can vary enormously, and the concept continues to develop as field experience grows. In the early days of capacity-building intervention, many practitioners equated capacity building with training. If there was a gap in performance, the solution was often to hold a workshop to “retrain” or “refocus” the individuals whose performance was faltering. Organizational development experts and field-level capacity-building efforts inform us today that individual skills are only part of the complex mixture of elements that constitute capacity to perform a certain function or groups of functions effectively and consistently over time. Individual health workers, no matter how skilled, are unlikely to deliver essential health and family planning services effectively without adequate supplies and equipment, proper motivation and support, a good relationship with the community served by the health center, and so on. Capacity building may be required in all of these and other areas to ensure performance goals are met.

Useful Definitions &

It is useful to start with definitions of capacity, capacity building and performance to guide measurement efforts and M&E planning. This guide returns frequently to such issues because meaningful capacity measurement depends on clear understanding of capacity and its role in the health sector.

Capacity is “the ability to carry out stated objectives” (Goodman et al, 1998). It has also been described as the “stock of resources” available to an organization or system as well as the actions that transform those resources into performance (Moore, Brown, and Honan, 2001).

Capacity building (or capacity development) is a process that improves the ability of a person, group, organization, or system to meet objectives or to perform better.²

² For other definitions of capacity building, see Brown, LaFond and Macintyre, 2001.

Performance is a result or set of results that represent productivity and competence related to an established objective, goal or standard.

Attributes of Capacity and Capacity Building

The definitions of capacity and capacity building above reflect certain attributes of each concept that inform this guide’s approach to monitoring and evaluation. These attributes are as follows:



Box 2: Measuring the Effectiveness of Capacity Building in Training: PRIME I

The PRIME I project provided technical assistance to strengthen the capacity of local institutions in developing countries to train health personnel for reproductive health (RH) service delivery. A key M&E strategy for this project was development of a capacity index specific to the features of RH training institutions. The index was tested in 14 countries and later revised and applied to monitor the results of PRIME’s capacity building in training activities. Detailed reports of these evaluations in El Salvador, Dominican Republic, Ghana, and other countries are available from the PRIME II project (Catotti, 1999; Ampomah, 2000; Luoma, 2000; www.prime2.org).
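As a rough illustration of how a composite index like the one described in Box 2 can be used to track capacity across repeated measurements, the minimal sketch below scores a handful of elements and combines them into a single percentage. The element names, the 0-4 scoring scale, and the simple averaging rule are hypothetical assumptions for illustration, not the actual PRIME instrument.

    # Minimal sketch (illustrative only): scoring a training-institution
    # capacity index and comparing two measurement rounds. Element names,
    # the 0-4 scale, and the averaging rule are assumptions, not PRIME's.
    from statistics import mean

    def capacity_index(element_scores, max_score=4):
        """Average element score expressed as a percentage of the maximum."""
        return 100 * mean(element_scores.values()) / max_score

    baseline = {
        "political support for RH training": 1,
        "trainer skills": 2,
        "training materials and equipment": 1,
        "community involvement in training": 0,
    }
    follow_up = {
        "political support for RH training": 2,
        "trainer skills": 3,
        "training materials and equipment": 3,
        "community involvement in training": 1,
    }

    print(f"Baseline index:  {capacity_index(baseline):.0f}%")   # 25%
    print(f"Follow-up index: {capacity_index(follow_up):.0f}%")  # 56%
    # Repeat measurement against stated objectives is what distinguishes
    # capacity-building M&E from a one-time capacity assessment (Table 1).

In practice, a composite score like this would be interpreted alongside qualitative information, since a single number cannot show which elements changed or why.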

• Capacity building can be defined only in terms of a specific objective or goal.
In the health sector, capacity does not exist for its own sake. Health planners and managers are concerned with capacity because it enables performance. For example, a health facility that experiences regular stock-outs of pharmaceuticals might require additional capacity in financial planning or supplies management (i.e., interventions that are specific to the particular performance goal of commodity supply). It follows that a capacity development strategy for improving pharmaceutical supply would call for a different approach than one aimed at strengthening community involvement in health. The link between capacity and performance, therefore, serves as the guide for both programming and evaluation of capacity-building interventions. Improved performance, in turn, is a good indicator of success in capacity development.

• Capacity (and capacity building) are dynamic and volatile.
Capacity can be perceived as a moving target. At any given time, capacity can improve or decline. It often develops in stages that indicate improved readiness to influence performance (Goodman et al., 1998). Capacity building, therefore, is an ongoing process (the development of abilities), whose stages can be measured as “development outcomes” & through monitoring and evaluation. The dynamic nature of capacity is often a reflection of the many different forces that influence its development or decline.

• Capacity building is multidimensional.
Capacity building can be described in terms of levels. In the health sector, capacity is required at four different levels: health system,³ organization, health personnel, and community. Yet, to date, most capacity-building experience and measurement have focused on organizational and health personnel capacity. In practice, capacity at one level is often influenced by actions at other levels. A single missing aspect of capacity rarely explains performance failures. The PRIME project (Box 2), for example, constructed an index of the capacity of training institutions that included 13 critical elements, ranging from political support for training in reproductive health to community involvement in training (Fort, 1999).

³ Some have labeled this level institutional development (Kotellos, 1998; INTRAC, 1998), while others use the terms organization and institution interchangeably. To avoid confusion, we have adopted the term system.

Analysis of capacity levels through measurement encourages evaluators to think in terms of complex, multifaceted systems. Connections and forces within a system are critical to understanding constraints to capacity and how to overcome them. Paying too much attention to one part of the organization or system may limit results at the overall organization or system level and fail to improve overall performance (Morgan, 1997). Take the example of delivering immunization services at the organization level. The effectiveness of this service depends on elements that go beyond the capacity of the facility alone. The Cold Chain & must function from the central level to the facility to ensure vaccine viability. Civil service norms, regulations, and salary levels can influence health worker motivation, and acceptance of the value of immunization among caregivers and encouragement from community leaders can affect service utilization. If performance falters (i.e., coverage declines), it may be the result of limited capacity at the facility or other levels. An evaluation framework should consider all these variables, although it may focus measurement efforts on a smaller number of them.

• Capacity depends on the context.
Contextual factors or elements of the external environment influence capacity directly and indirectly. Contextual influences include cultural, social, economic, political, legal, and environmental variables. The influence of these factors may be crucial to the success of capacity building, yet they are often difficult to control or measure. For example, Sierra Leone’s Ministry of Health (MOH) may have the capacity to deliver childhood immunization services. However, frequent political instability in the country can challenge that capacity and reduce performance (e.g., immunization coverage) dramatically. Taking a more general example, the stagnation and decline of economic growth that occurred in Africa in the 1980s severely undermined public sector capacity to meet recurrent costs for salaries and supply of basic health commodities. Even well-established health systems, such as Ghana’s, were unable to withstand the decline in health sector financing, and capacity gradually eroded to a very low level (LaFond, 1995).

Capacity Building Is Behavior Change

In addition to these attributes, current thinking about capacity building reflects two ways of capturing the changes expected as a result of intervention. Traditional approaches to capacity building concentrate on the internal functioning of organizations and systems (structures, strategies, staff, and skills). Morgan (1997), however, notes the necessity of considering the “macro” aspect of capacity building that relates to the behavior and operations of groups of organizations or individuals and their role in wider systems (such as the role of public sector health systems, ministries of health, or district-level health units in rural health improvement). In general, there is more experience working on and measuring capacity at the micro level than at the macro level.

Taking both a micro and macro look at capacity building suggests that capacity development goes beyond a simple technical intervention. It is to a great extent focused on inducing behavior change, a process that involves learning, moderating attitudes, and possibly adopting new values at individual, organization, and system levels. Therefore, the focus of capacity-building interventions and M&E must capture related conditions and concepts such as motivation, culture, and commitment, as well as changes in resource availability, skill levels, and management structure (Morgan, 1997; James, 2001). Examples of different types of organizational capacities are found in Box 3.



Box 3: Examples of Organizational Capacities

Six General Areas of Capacity in the CSTS Institutional Sustainability Assessment


1. Strategic management practices
2. Organizational learning
3. Use and management of technical knowledge and skills
4. Financial resource management
5. Human resource management
6. Administrative infrastructure and procedures

Source: Sarriot, 2002a

Structural or technical
· The organization has effective programs for recruitment, development, and retention of staff so that it can perform its critical functions adequately. It must have a basic set of competencies that can enable it to cope with its workload and environment.
· The organization has a structure, technology, and set of procedures that enable staff to
carry out the critical functions.
· The organization has the ability, resources, and autonomy to focus on a manageable set of
objectives over a reasonable period of time. Its goals are reasonably clear, accepted, and
achievable.
· The organization can alter its structure and functioning by including new actors, new partnerships, decentralization, delegation, creation of new organizations, downsizing, privatization, participation, devolution, and changing responsibilities for government.

Behavioral
· The organization understands the implications of its experiences and can change its collec-
tive behavior in line with this understanding. It can learn and adapt. It has a self-renewing
capacity.
· The organization can form productive relationships with outside groups and organizations as
part of a broader effort to achieve its objectives. It can manage these relationships for
both its own gain and that of its partners.
· The organization has the ability to legitimize its existence. It must be able to persuade key
external stakeholders of the value of supporting its continued functioning. It has an identity
that is accepted internally and externally, and the loyalty of its clients, customers, and
stakeholders gives it protection and resources.
· The organization has a culture, a set of values, and organizational motivation that values and
rewards performance.
· The work community has a population of groups and organizations that is sufficient to carry
out the tasks and services needed to implement such critical functions as analysis, produc-
tion, mediation, communication, networking, fund-raising, and so on.

Source: Morgan, 1997

Why Monitor and Evaluate Capacity Building?

Given the nature of capacity development—the volatility of capacity, its many levels, and links to performance—some authors describe capacity building as a high-risk investment (UNICEF, 1999). Yet, most development organizations agree that facilitating growth in capacity among local partners’ systems, organizations, and communities is key to the success of social development overall. As such, all stakeholders need dependable methods for answering such questions as

• What capacity exists now, and how does it affect performance?
• What improvements in capacity or new kinds of capacity are required?
• Is capacity being built? Is the capacity-building intervention focused on the right elements?
• What has been learned about capacity-building strategies?
• How does capacity contribute to sustainability?

In addition, there is value in not restricting monitoring and evaluation of health and development interventions to a few important outcomes or results (i.e., quality, coverage, and health status). Organizations and systems produce many different and critical effects. For strategic purposes, and to manage change in programs, organizations and systems effectively, regular information on a number of operational indicators is required (Moore, Brown and Honan, 2001). A well-defined monitoring and evaluation strategy will help make sense of these many facets of capacity and performance. Monitoring and evaluation should help local practitioners and their external partners to think strategically about capacity development and to learn, through practice, what works under different circumstances. At the same time, systematic measurement of capacity contributes to results-based management of programs where capacity building is part of the overall strategy for improving performance.

What Is Different about M&E of Capacity Building?

Traditionally, monitoring and evaluation focuses more on measuring performance and less on the way performance is achieved or sustained. In contrast, capacity-building M&E focuses fundamentally on processes (e.g., building alliances, mobilizing communities, decentralized planning, learning) and other qualitative aspects of individual or organizational change (e.g., motivation to perform) that contribute to better performance. Consequently, M&E of capacity building often seeks to capture actions or results that are not easily measured.

That said, results of capacity building are as important as processes. In capacity-building intervention, the process and result of capacity building becomes the “intermediate outcome” that is expected to lead eventually to improved and sustained performance. Exploring the links between changes in capacity and changes in performance is therefore key. However, it often involves considerable speculation about the capacity needed to achieve those goals. One of the main gaps in the knowledge base that informs capacity measurement is the lack of common understanding of the relationship between capacity and performance. Little is known about what elements or combinations of elements of capacity are critical to performance. Moreover, there is considerable variation in what constitutes “adequate” performance.



Implications for Capacity-Building M&E

Clearly, the attributes of capacity and capacity building noted above have implications for monitoring and evaluation. Broadening the concept of capacity building beyond technical skills and resources and thinking about capacity building in terms of multiple levels and influences helps planners and evaluators to hypothesize about what aspects of capacity are critical to performance and to define entry points for targeting capacity-building interventions. A measurement approach should also reflect a clear understanding of the interaction among different aspects of capacity and how they work (or fail to work) together, particularly with respect to individual and organizational behavior. These types of variables may be represented by indicators in an evaluation plan, but may require additional interpretation to ensure a complete grasp of capacity and its role in improving performance.

As noted in the Introduction, it is also important to keep in mind the conventional wisdom about how to monitor and evaluate capacity. Conventional wisdom notes that it is not productive to separate measurement practices from capacity building itself (Morgan, 1997; Horton, 2001; Earl et al, 2001). Because capacity-building M&E focuses on behavior change, the success of capacity development is often directly related to the extent of ownership and commitment to the process on the part of the participants. This commitment includes, in some cases, ownership of the design, procedures, and reporting of monitoring and evaluation activities. Applied in this way, monitoring and evaluation of capacity can become a key strategy for improving performance. However, many of the M&E methods that promote ownership (i.e., those involving self-evaluation and relying on respondents’ perceptions) may also affect the validity of findings. Specifically, they may compromise the use of capacity-building M&E for accountability, predicting performance, or making comparisons between different interventions or sites (common reasons for conducting evaluation). This theme surfaces often in the discussion of capacity-building M&E, and will be addressed in Part 3 of this guide.

Summary for Managers and Evaluators

• Capacity is a pre-condition for performance. Capacity building is used to improve performance in a variety of ways and situations.

• Capacity-building M&E is normally part of an overall plan or system for monitoring and evaluating a health program or health sector intervention.

• There are no standardized approaches for capacity-building M&E because of the wide variety of circumstances in which capacity building takes place. There is no short list of valid indicators of capacity in the health sector, nor are there standardized measurement tools applicable to every capacity-building experience.

• Monitoring and evaluation should help local practitioners and their external partners to think strategically about capacity development and to learn, through practice, what works under different circumstances. At the same time, systematic measurement of capacity contributes to results-based management of programs where capacity building is part of the overall strategy for improving performance.

• Capacity building in the health sector can be described and measured in terms of four levels: health system, organization, health personnel, and community. Capacity at one level can be influenced by actions at other levels.

• Contextual factors or elements of the external environment influence capacity directly and indirectly.

• Capacity development goes beyond a simple technical intervention, focusing on behavior change in individuals and organizations. Thus, capacity-building M&E must capture conditions and concepts such as motivation, culture, and commitment, as well as changes in resource availability, skill levels, and management structure.

• Any strategy for monitoring capacity should reflect a clear understanding of the interaction among different aspects of capacity and how they work (or fail to work) together.



Part 2 Understanding the Role of Capacity in the Health Sector:
Introducing a Conceptual Framework

The first step in developing a vision of capacity development, and a plan to measure it, is to understand the role capacity plays in the health sector in developing countries. What are the expectations and assumptions surrounding capacity and its relationship to performance and health outcomes? Clear thinking about these variables helps planners define realistic objectives for capacity-building interventions and express desired capacity outcomes explicitly and precisely. Evaluators must rely on these parameters of capacity building in order to develop a capacity-building M&E plan.

The following series of conceptual frameworks are provided as a reference to help planners and evaluators develop their own vision of the role capacity (and capacity building) plays in the health sector. We have found that directed discussion using these types of frameworks prior to M&E planning can stimulate strategic thinking within project or work teams, clarify individual and collective expectations and thereby improve capacity-building M&E. Figure 1 – The Overview – illustrates the critical role capacity plays in influencing and sustaining performance in the health sector. It takes a system-wide view of capacity, including all possible levels where capacity building might take place. The four other frameworks (Figures 2-5) take capacity at each level and break it down into defined components: inputs, processes, outputs, and outcomes (See Table 2). In breaking down capacity at each level, the frameworks provide a starting point for identifying the key variables that influence capacity and performance at that level.

Overview Framework: The Role of Capacity in the Health Sector

Health system performance depends on capacity. Figure 1 provides an overview of that relationship and specifies four levels where capacity is needed to ensure performance: system, organization, health personnel, and individual/community. The diagram suggests that capacity contributes to performance at all levels, and capacity at each level collectively enables overall health system performance.

“Understanding capacity and performance of individuals and organizations demands careful consideration of their role in larger systems, and their relationships within those systems” (Morgan, 1997).

Figure 1 also implies that capacity plays a role in sustaining health system performance. If health system performance remains adequate over time (supported by consistent capacity), performance is said to be sustained. Although few health systems in developing countries can boast this accomplishment, the underlying aim of capacity development should be a sustained change in resources or behavior that leads to improved and sustained performance. The goal is not short-term gain but a lasting or robust change in ways of doing business that becomes imbedded in the system or organization itself.

Figure 1. Overview of Capacity in the Health Sector

[Figure: a diagram bounded above and below by the external environment, with columns for capacity levels, performance, and sustainability over time. Health system, organization, health program personnel, and individual/community capacity each feed the corresponding performance (health system performance, organizational performance, personnel performance, and individual/community behavior change), which together lead to a sustainable health system, sustained individual/community behavior change, and improved health status.]
At the center of the framework is the ultimate goal of capacity building in the health sector: improved health status. Capacity does not directly influence health status but contributes to it through its link to performance at system, organization and health personnel levels. In this illustration, the health system interacts with individuals or groups of individuals (e.g., the community) to influence health status. Individuals and communities contribute to health system capacity by interacting with providers and organizations (receiving care, determining priorities, or providing resources) and to health system performance by using health services. In addition, individuals and communities can improve their health status independent of the health system by promoting and adopting preventive measures, such as regular hand washing, not smoking, or eating well. Improvements in individual and community capacity should result in sustained behavior change over time, representing this level’s contribution to sustained health system performance and improved health status.

At the perimeter of Figure 1 we mark the influence of environmental or contextual factors, including cultural, social, economic, political, legal, and environmental variables that influence capacity and performance at all four levels (Africa Bureau, 1999; Horton, 2001; James 2001). The obvious importance of these factors for improving and sustaining both capacity and performance suggests that special efforts are needed for tracking their status over time. In this guide, we focus mainly on variables that donors, governments, private agencies, and individuals can influence through health sector interventions. However, we also encourage evaluators to identify and monitor key contextual variables and examine their relationship to program outcomes.

Capacity at a Single Level

The four levels of capacity are detailed further in the following related frameworks (Figures 2-5).

These conceptual frameworks take a broad look at capacity at one level to illustrate many of the potential factors that might come together to influence capacity and performance. The purpose of these frameworks is to show how capacity can be broken down at each level into inputs, processes, outputs, and outcomes in order to

• identify the different factors that contribute to capacity and performance
• hypothesize about the potential relationships among these factors within a single level

Conceptual frameworks like these differ from logical or strategic frameworks in that they do not reflect the linear logic of a particular capacity-building intervention, and its presumed effect on capacity outcomes. Rather, they show the range of all possible variables that might influence capacity and performance. In this way they help planners at the early design stages to determine the scope and focus of a capacity-building intervention, and evaluators to design valid measures for determining the success of those interventions. Conceptual frameworks can become gradually more specific as decisions are made about capacity-building interventions and the capacity and performance changes expected from them.

Defining Variables Related to Capacity and Performance

Capacity inputs represent the resources (human, financial, material, etc.) that contribute to capacity and performance. Processes represent the activities or behaviors at each capacity level that transform resources (inputs) into capacity outputs and outcomes. Capacity outputs and outcomes are the results of inputs and processes, and indicate products (outputs) and "an ability to carry out stated objectives" (outcomes). In many cases, capacity outcomes are expressed as knowledge, skills, and behavior. Performance is the expected result of capacity (a "stock of resources") and the environment, the final link in the hypothesized chain of causality. Performance is defined as results that represent productivity or competence related to an established objective, goal, or standard.

System Level

Figure 2 refers to the health system. It includes the resources, actors, and institutions related to the financing, regulation, and provision of health actions (Murray and Frenk, 1999; WHO, 2000).4 The system is seen as a collection of institutions or organizations, plus the personnel in those organizations, working together to deliver health care and/or promote better health. The health system performs certain functions independent of those performed by the organizations and personnel within it, and therefore possesses its own capacity that can be assessed over time and targeted for intervention.

Performance at the health system level is often defined in terms of access to services, quality of care, equity, and efficiency, although there are many other possible indicators of performance at this level.5 The framework includes a range of possible capacity inputs, processes, outputs, and outcomes that contribute to performance at this level.

The system level is a complex area in which to define or address capacity development or to assess changes in capacity resulting from external or internal intervention. Despite the use of an inputs-processes-outputs-outcomes framework, in practice, relationships among elements of capacity are not perfectly linear. Change (or the lack of it) in capacity results from multiple influences, some of which can be unexpected (Sarriot, 2002a). Contextual factors such as political and economic stability can also play a dominant yet poorly understood role in ensuring system capacity. Good examples come from health sector reform activities that seek to improve national health sector performance by changing sector priorities, laws, organizational structures, and financing arrangements. For instance, legal reforms in Zambia were achieved but not well communicated to health workers, which led to internal resistance to "delinking," or separating health workers from the civil service (Lake et al., 2000). Despite addressing key constraints such as laws or regulations, capacity to manage human resources more effectively did not emerge as planned.

4 A health action is defined as "any set of activities whose primary intent is to improve or maintain health" (Murray and Frenk, 1999).
5 The World Health Organization proposed new indicators for monitoring health system performance in the World Health Report 2000, including measures of stewardship, financing, resource generation, and service provision.

Figure 2: Health System Capacity

Inputs: Infrastructure; public/private composition of services; organizational structure (public sector); existing health-related laws, regulations, and policies; information/communication systems; human resources; leadership; financial resources (public/private, internal/external); history and culture of the system.

Processes: Health policy making; enforcement of health-related laws and regulations; health sector strategic planning; resource allocation; resource generation; financial management; human resource development and management; donor coordination; multi-sectoral collaboration; information coordination and dissemination.

Outputs: Published health policies and regulations; formal and informal coalitions; sector-wide strategy; increased local financing of recurrent costs; improved human resource availability in rural areas; coordinated donor interventions; timely analysis and dissemination of national health information.

Capacity outcomes: Effective health policies; accountability (financial and program transparency); capacity to assess and cope with internal and external change; financial self-reliance; effective monitoring of quality of care; responsiveness to client needs and demands; efficient/appropriate resource allocation; use of information for strategy and learning.

Together these elements contribute to performance, and the whole framework sits within the external environment.

Table 2: Capacity and Performance Variables Defined

Input: Set of resources, including health personnel, financial resources, space, policy orientation, and program service recipients, that are the raw materials that contribute to capacity at each level (system, organization, health personnel, and individual/community).

Process: Set of activities, practices, or functions by which the resources are used in pursuit of the expected results.

Output: Set of products anticipated through the execution of practices, activities, or functions.

Outcome: Set of results that represent capacity (an ability to carry out stated objectives), often expected to change as a direct result of a capacity-building intervention.

Performance: Set of results that represent productivity and competence related to an established objective, goal, or standard. The four capacity levels together contribute to overall system-level performance.

Impact: Long-term results achieved through improved performance of the health system: a sustainable health system and improved health status. Impact measures are not addressed in capacity-building M&E.
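For teams that keep their indicator inventory in a spreadsheet or database, the definitions in Table 2 can double as a simple classification scheme for tagging variables. The short Python sketch below is illustrative only: the category names come from Table 2 and the example entries from Figure 2, while the class and field names are our own convenience labels, not part of the Guide's method.

from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    # Categories follow the definitions in Table 2.
    INPUT = "input"              # resources that contribute to capacity
    PROCESS = "process"          # activities/practices that use those resources
    OUTPUT = "output"            # products of those activities
    OUTCOME = "outcome"          # an ability to carry out stated objectives
    PERFORMANCE = "performance"  # productivity/competence against a standard

@dataclass
class CapacityVariable:
    name: str
    level: str          # system, organization, health personnel, or individual/community
    category: Category

# Example entries drawn from the system-level framework (Figure 2).
variables = [
    CapacityVariable("Human resources", "system", Category.INPUT),
    CapacityVariable("Health sector strategic planning", "system", Category.PROCESS),
    CapacityVariable("Sector-wide strategy", "system", Category.OUTPUT),
    CapacityVariable("Financial self-reliance", "system", Category.OUTCOME),
]

for v in variables:
    print(f"{v.level}: {v.name} [{v.category.value}]")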

Organization Level

Figure 3 depicts a similar categorization of capacity variables at the organization level that contribute to organizational performance. Performance at the organization level might be described in terms of the ability of the organization to produce goods and services to an acceptable standard (e.g., the quality of care; coverage of the catchment population). This framework relates to organizations whose main function might be health service delivery (in the public or private sector) and those considered to be civil society organizations (nongovernmental or nonhealth service agencies). Civil society organizations generally are not involved in the direct delivery of health services, but they do influence health service delivery, policies, and behaviors in many societies throughout the world. Civil society organizations of particular importance could be cooperatives, community development organizations, advocacy groups, informal pressure groups, and others. The MOH is a unique organization for conceptualizing capacity building since it can be a significant actor at both the system and organization levels. The contextual factors influencing organizational capacity are represented at the perimeter of the diagram and include system level factors as well as typical political, economic, cultural, and other variables.

Figure 3: Health Service and Civil Society Organization Capacity

Inputs: Infrastructure; organizational structure; mission; leadership; financial resources; equipment and supplies; human resources (technical and managerial); history and culture of the organization.

Processes: Strategic and operational planning; human resource management and development; financial management; logistics/supplies management; research and evaluation; coordination with other units; resource mobilization; IEC; advocacy; community relations and mobilization.

Outputs: Strategic and operational plans; staff trained and supported; functional management systems (i.e., supplies available, supervision done); functional financial management system (i.e., resources available, costs contained); functional health information and communication system (information collected, analyzed, and used); functional service delivery systems (i.e., services available); regular IEC and community mobilization activities.

Capacity outcomes: Able to assess and cope with internal and external change; responsiveness to client needs and demands; financial self-reliance; stakeholder involvement; regular supply of essential commodities/no stock-outs; acting and learning with information; ability to monitor service quality and correct gaps as needed; able to develop and maintain working relationships with other organizations and groups.

Together these elements contribute to performance, and the whole framework sits within the external environment.

Health Program Personnel Level

Figure 4 presents the health program personnel level. The term health personnel refers to all those who perform clinical, managerial, advocacy, or other work within the health system. In contrast to the system and organization levels, comprehensive interventions to build and maintain capacity are more common at the health personnel level. Ideally, in each health system there is a plan for producing and maintaining a cadre of qualified personnel (personnel with capacity) and providing them with an adequately supportive environment in which to perform effectively. It is less common to find comprehensive organization- and system-level capacity-building plans, although one could argue they are equally important.

The vast majority of capacity-building interventions in the health sector focus on changing the skills and behavior of health personnel because managers and providers play a critical role in ensuring organization- and system-level capacity and performance. This framework attempts to tease out some of the key variables at this level that relate directly to individual health personnel capacity, but we must acknowledge that organizational context is equally important. Organizations and systems are often responsible for the inputs and processes that enable health personnel to perform effectively. Thus, there is a significant overlap between the inputs and processes that contribute to capacity at the organization and the health personnel levels. Many of the variables listed in the system- and organization-level frameworks also contribute to health personnel capacity.

Inputs such as sufficient funds, space, and materials for professional development are transformed into capacity outcomes through processes such as educational and training events or other opportunities for improving or maintaining health personnel capacity. Capacity outcomes relate to the knowledge, skills, experience, and motivation resulting from inputs and processes. Performance at this level includes the application of knowledge and skills in management, health services delivery, training, and other related activities.

Individual/Community Level

The final figure, Figure 5, represents the "demand side" of the equation for capacity building as well as the role individuals and communities play in shaping health systems and improving health status. In addition to the system, organization, and health personnel levels, capacity is required within individual clients and communities to ensure demand for appropriate services, to promote their role in contributing to or influencing service delivery, and to encourage the practice of certain behaviors conducive to good health. For example, clients' capacity to demand improved or new services or to engage with health care personnel and organizations is vital to health system performance and to achieving adequate health status of the population.

Figure 4: Health Program Personnel Capacity

Inputs: Financial resources (i.e., salaries, benefits, incentives); physical resources (venues, materials, supplies, equipment); national/organizational training policies, plans, and guidelines; up-to-date information on appropriate clinical and managerial practices; curricula; human resources.

Processes: Pre-service and in-service training events (training of trainers and trainees); training events for managers (including supervisors); staff performance evaluations; experiential learning opportunities; professional networking; access to information.

Outputs: Staff trained/retrained as required; trainers trained/retrained as required; managers trained/retrained as required; supervision received.

Capacity outcomes: Knowledge and skills of trainees; trainers and trainees continue to gain experience; motivated health personnel; professional or peer support networks.

Together these elements contribute to performance, and the whole framework sits within the external environment.

Figure 5: Individual/Community Capacity

Inputs (individual/family): Education; income; family history; sex; perceptions of need/risk; willingness to seek care; ability to pay; exposure to programs/services; past experiences with health services and prevention practices; utilization-enhancing activities (e.g., IEC, accessible services).

Inputs (community dimensions): Community history; citizen participation; cohesiveness; leadership; material and financial resources (internal and external to the community); social and interorganizational networks; communication channels; values; skills.

Processes: Needs identification and problem solving; collaboration; achieving consensus; critical reflection; securing resources; negotiation; communication.

Outputs: Recognition of symptoms and danger signs and actions needed; recognition of need for services; intention to use services; participation in community health committees; community plans; community support for community-based health care.

Capacity outcomes: Ability to articulate needs and demands; knowledge of prevention behavior; community support for prevention behaviors; community-based mobilization and empowerment for interacting with the health system.

Together these elements contribute to performance, and the whole framework sits within the external environment.

Here the individual/community level represents all those who could benefit from and participate in the health care system; thus it includes all current and potential clients of the services offered and the communities in which they live. The inclusion of individual and community capacity in this framework represents a departure from conventional thinking on capacity in the health sector. References to community capacity are found mostly in the literature on community empowerment and strategies for improving community mobilization and participation (Goodman et al., 1998; Israel et al., 1994; Israel et al., 1998; Eng and Parker, 1994). The inputs in this framework represent the resources available to individuals and communities. They include individual/family factors, community factors, and factors outside the immediate influence of the community, such as exposure to health and education programs. Processes explain how individuals and communities use their resources to act in support of their own capacity development. Capacity outcomes relate to knowledge, motivation, skills, and behavior that support individual and community health and well-being. Performance is the actual behavior on the part of individuals or communities, which might include interaction with the health system (participation or advocacy) as well as behavior that directly influences health outcomes: utilization of health services, self-treatment, compliance, and prevention behavior.

Using These Conceptual Frameworks

While it is useful to separate levels of capacity for facilitating M&E planning, these levels are clearly interdependent, as shown in the nesting of the health personnel and organization levels in the system level, and the arrows connecting individuals/communities to the health system and its parts. A health system is made up of organizations and health personnel, and organizations cannot function without health personnel. Without individual users of health services, the other levels cannot begin to perform effectively. Going beyond one-dimensional diagrams to understand the dynamics of capacity building at each level and between levels will guide the development of M&E strategies and techniques.

For example, the processes listed at the system level are in practice often activities carried out by the MOH with support from donors and in collaboration with other actors in the health sector (e.g., NGOs, private companies). There is a clear overlap between system and organizational capacity, since the capacity of the system to carry out certain functions may depend directly on the capacity of the MOH to play its organizational role effectively. An M&E plan should attempt to monitor changes at both levels to explain capacity development (or the lack of it) well.

The overview diagram that describes the relationship between capacity, performance, and sustainability also suggests a logical progression from capacity to performance to sustained performance, when in fact both capacity and performance can improve or decline in uncoordinated or illogical ways. Because capacity is a fluid notion that responds to many influences, linear frameworks, often used in research and evaluation, are sometimes considered too mechanical for monitoring and evaluating capacity. Cause-and-effect chains related to capacity are seldom linear, suggesting the need to break out of a rigid, inflexible way of thinking.

Figures 2-5 suggest one way to look beyond the linear representation of capacity variables by depicting the process of capacity development as a cycle. Once one stage of capacity development is achieved, capacity outcomes become the new inputs and processes for the next stage of improvement. Indicators in this sense become relative, in that an indicator of

capacity expressed as an outcome might be described as another type of variable as capacity improves or declines.

This guide recommends the development of conceptual frameworks as a useful process for thinking through a capacity-building intervention strategy, clarifying the expectations of stakeholders, and hypothesizing the variables that are considered important to program results in a specific context. However, these tools should be used along with strategies such as creative thinking, revisiting assumptions, and reflecting on results with stakeholders when conducting capacity-building M&E. Part Three of the Guide will elaborate on the use of frameworks or maps in M&E and discuss these and other strategies for understanding changes in capacity and their relationship to performance.

Summary for Managers and Evaluators

· The first step in developing a vision of capacity development, and a plan to measure it, is to understand the role capacity plays in the health sector in developing countries.

· We have found that directed discussion using conceptual frameworks or maps prior to M&E planning can stimulate strategic thinking within project or work teams and clarify individual and collective expectations, and thereby improve capacity-building M&E.

· The conceptual frameworks (Figures 1-5) illustrate the critical role capacity plays in influencing and sustaining performance in the health sector, including the four levels where capacity is needed: system, organization, health personnel, and individual/community.

· Figures 2-5 depict capacity at each level. The purpose of these frameworks is to show how capacity can be broken down into inputs, processes, outputs, and outcomes in order to identify the different factors that contribute to capacity and performance, and to hypothesize about the potential relationships among these factors within a single level.

· The frameworks provide a starting point for identifying the key variables that influence capacity and performance at each level, and will help evaluators define capacity variables to track in the M&E plan.

Part 3 Monitoring and Evaluating Capacity-Building Interventions

Part 2 described a generic conceptual framework for understanding the role of capacity in the health sector and suggested possible capacity variables for each level. This part presents the six steps for developing a monitoring and evaluation plan for a specific capacity-building intervention. At the heart of this process is the development of a "capacity map," or conceptual framework, that applies to the particular capacity-building intervention under study. The six steps are listed in Box 4.

Ideally, an M&E plan should be formulated during the design and planning of a capacity-building or performance improvement intervention. Evaluators and program planners should work together with key stakeholders to conduct a needs assessment, define the intervention strategy, and construct an M&E plan. Since capacity building is often one strategy in a broader approach to improving performance, capacity-building M&E should fit into the overall performance-monitoring plan.

An M&E plan for capacity building states what is to be evaluated, what evidence is needed to answer key evaluation questions, how the data will be used, who will use the data, and for what purpose. The intended result of the planning steps is a clearly defined guideline for data collection, analysis, and use for assessing the effectiveness of a capacity-building intervention. In general, capacity-building M&E plans contain the following:

· a conceptual framework
· a definition of essential variables of capacity and performance
· hypotheses on important links between these capacity and performance variables
· identification of the stages of capacity
· indicators and methods
· a timeframe, and
· a dissemination strategy
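Where a team keeps its M&E plan in electronic form, the components listed above can be captured as a simple structured record so that nothing is left undefined. The Python sketch below is one possible layout rather than a prescribed format: the field names mirror the list above, and the example values are hypothetical placeholders.

# A minimal, illustrative skeleton of a capacity-building M&E plan.
# Field names mirror the components listed above; values are placeholders.
me_plan = {
    "conceptual_framework": "capacity map for the intervention under study",
    "capacity_variables": ["leadership", "financial management"],
    "performance_variables": ["coverage", "quality", "consistency"],
    "hypothesized_links": [
        ("financial management", "financial self-reliance"),  # capacity -> outcome
    ],
    "stages_of_capacity": ["baseline", "intermediate", "established"],
    "indicators_and_methods": [
        {"indicator": "functioning financial management system",
         "method": "document review and key informant interviews"},
    ],
    "timeframe": {"baseline": "year 1", "follow_up": "year 3"},
    "dissemination_strategy": "share results with internal and external stakeholders",
}

# Quick check that no component of the plan has been left empty.
missing = [k for k, v in me_plan.items() if not v]
print("Missing components:", missing or "none")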

Box 4: Six Steps for Developing a Capacity-Building M&E Plan

1. Define the purpose of the evaluation


2. Define performance objectives
3. Map capacity: Build a conceptual framework for the specific capacity-building
intervention
4. Identify capacity indicators
5. Identify appropriate methodological approach and sources of data
6. Develop an implementation and dissemination plan



STEP 1 Define the Purpose of the Evaluation

There are different types of evaluation, each with a different purpose. In designing an evaluation strategy, the evaluator first needs to identify the key question(s) that he/she wishes to answer and thus the type of monitoring or evaluation to conduct. Table 3 illustrates some of the research questions addressed by different types of capacity-building M&E.

A second question to address at the outset of planning is: who are the intended users of the evaluation results? M&E of capacity-building interventions can be used for different purposes and to meet the needs of many different stakeholders. It is advisable to specify the primary and secondary users at the outset of planning to avoid confusion and aggravation. In the NGO Networks for Health Project, the project partners and the donor expected to use capacity-building monitoring data in different ways. The NGOs sought information to monitor the results of detailed internal organizational capacity-building plans. The funding agency desired information on more general capacity changes related to the quantity and focus of programming in order to demonstrate the overall results of the project. Until the main purpose of collecting the data was specified, it was impossible to define the methods or indicators in the M&E plan.

Table 3: Questions Posed by Different Types of Capacity-Building M&E

Needs assessment: What is the current level of capacity? Where are the gaps in performance and capacity? What capacity is needed? How can the intervention best address the gaps in capacity and performance?

Monitoring:
· Inputs: Are inputs available to the program in appropriate quantities and at appropriate frequency? Did the type or quantity of inputs change?
· Processes: Are key processes carried out to an acceptable standard or at an acceptable frequency? Did the processes change?
· Outputs: Are products related to capacity available? Did the products expected emerge or change?
· Outcomes: Is capacity appropriate and adequate? Did capacity improve?
· Performance: Is performance appropriate and adequate? Did performance improve?

Evaluation: Did the capacity-building intervention lead to changes in capacity and/or performance?

In practice, one finds an inherent tension in defining the purpose of capacity-building M&E. Managers generally use capacity-building M&E results for two main reasons. The first is primarily an internal function, that is, improving capacity and capacity-building strategies. The second is primarily an external function, that is, reporting on the progress of a capacity-building intervention to various funders and other external stakeholders. While the two purposes are not mutually exclusive, managers must guide the M&E process carefully to ensure the best possible outcome. Too much attention to serving external (often donor) needs has been found to dilute the use of M&E for improving capacity-building strategies and organizational learning (Horton, 2001; Morgan, 1997). Lack of attention to valid measures of change (or relying too much on self-reported perceptions of capacity) can undermine the credibility of evaluation results. Box 5 summarizes key advice on constructing a capacity-building M&E plan.

Box 5: DO'S AND DON'TS of Developing an M&E Plan for a Capacity-Building Intervention

DO
· Develop capacity-building M&E plan during the intervention design phase

· Develop capacity-building M&E plan with respect to broader performance objectives

· Involve all stakeholders, both internal and external, in developing the M&E plan, par-
ticularly the purpose of the evaluation

· Be prepared to negotiate with stakeholders on the purpose of the evaluation and make
all expectations transparent

DON’T
· Base M&E plans only on the needs of external stakeholders (mostly donors) at the ex-
pense of meeting internal information needs

· Miss opportunities to reflect and learn about capacity development through M&E



STEP 2 Define Performance Objectives

Before launching into monitoring and evaluation of any capacity-building program or intervention, it is critical to step back and fully understand its focus and strategy. It is particularly crucial to understand how the stated capacity-building strategy is expected to improve performance and what signs of improved effectiveness are expected from capacity building. Although it is not possible to prove causality, it is important to clearly define the expected pathways between capacity building and performance.

To begin, evaluators should address the following questions:

· What is the purpose of the capacity-building intervention?
· What type of performance is expected in a given period, and at what level: health system, organization, health personnel, or community?
· What processes or activities are being used to build capacity?
· What external influences should be taken into consideration?
· Who has a stake in capacity building and capacity measurement?

Defining Performance

Performance objectives should relate to the mandate or specific purpose of a system, organization, or community, or to health personnel functions. The more specific one can be about performance expectations, the easier it will be to construct a capacity map. If the M&E plan is being developed after a capacity-building intervention has been designed, then articulating the performance focus and expectations should not be difficult (assuming the design document is sufficiently explicit about performance objectives). Moreover, some organizations may already adhere to a set of performance indicators for internal monitoring or reporting to external stakeholders. Thus, there may already be clearly stated performance standards. If, however, M&E planning takes place as part of the design process (starting with needs assessment and intervention design), then focused discussion among program planners, managers, and evaluators about what would constitute adequate performance in this context will be needed.

In practice, perceptions of performance can vary widely among stakeholders. For example, a manager of a clinic may define performance in terms of benefits to the clients, whereas the clinic's financial managers might define performance as the acquisition of new clients (and a corresponding increase in income). There is a growing body of literature about Performance Improvement in the health sector, particularly organizational performance,6 that can be useful for defining performance expectations and identifying gaps in performance and possible reasons for those gaps. Performance objectives should be expressed as variables or indicators that can be measured against international or national standards, or locally determined expectations. Normally, the definition of performance objectives reflects both external and internal criteria. See Box 6 for the characteristics of a good performance objective and two examples of performance objectives that will be used to illustrate capacity mapping in Step 3.

6 See Lusthaus, C., M. Adrien, G. Anderson, and F. Carden. 1999. Enhancing Organizational Performance: A Toolbox for Self-Assessment. Ottawa: IDRC; http://www.pihealthcare.org; McCaffrey, J., M. Luoma, C. Newman et al. 2000. Performance Improvement: Stages, Steps and Tools. Chapel Hill, NC: INTRAH.

Box 6: Characteristics of a Good Performance Objective

· Measurable
· Reflects a needed change
· Relates to a clear product or action
· Relates to a defined target population
· Performed by specific delivery agent (e.g., organization, community group, etc.)
· Relevant to a particular context/situation

Examples

· Consistent delivery of a package of family planning services by X organization to a defined population (defined in terms of coverage, quality, and consistency)
· Improved demand for immunization services in communities served by community health workers (CHW) (defined in terms of utilization and coverage)
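The characteristics in Box 6 can also serve as a quick completeness check when performance objectives are stored alongside indicators. The Python sketch below is purely illustrative: it encodes the first example objective from Box 6 as a structured record, using field names that are our own shorthand for the Box 6 characteristics, and flags any element left unspecified.

from dataclasses import dataclass, fields

@dataclass
class PerformanceObjective:
    # Fields correspond to the characteristics listed in Box 6.
    change: str              # the needed change / product or action
    target_population: str   # defined target population
    delivery_agent: str      # specific delivery agent
    context: str             # particular context/situation
    indicators: tuple        # how the objective will be measured

# First example from Box 6, expressed as a structured record.
fp_objective = PerformanceObjective(
    change="consistent delivery of a package of family planning services",
    target_population="a defined population",
    delivery_agent="X organization",
    context="a specific district (hypothetical)",
    indicators=("coverage", "quality", "consistency"),
)

# Flag any characteristic left unspecified.
gaps = [f.name for f in fields(fp_objective) if not getattr(fp_objective, f.name)]
print("Unspecified characteristics:", gaps or "none")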



STEP 3 Mapping Capacity: Build a Conceptual Framework for a Specific Capacity-Building Intervention

Once performance objectives and expectations are defined, planners and evaluators must make assumptions about the capacity required to meet these objectives. Capacity mapping is a structured process of thinking through the role capacity plays in ensuring performance by developing a conceptual framework that is specific to a particular capacity-building intervention. During capacity mapping, all the possible factors of capacity that influence performance, and the relationships between them, must be identified. Once the factors are all laid out, the program staff or evaluator can focus on those that are most essential for the evaluation.

Mapping capacity can be a critical step in developing an M&E plan. The map is a tool that guides the design of the plan, from the selection of indicators and methods to the presentation of evaluation results. As stated by Morgan (1997), evaluation designers and their program partners need "a sense of what capacities they need to develop and for what reason. Most groups and organizations can articulate such a vision of the future given sufficient time and productive discussion." Mapping capacity makes plain to all stakeholders the assumptions about key variables that affect the desired outcome of a capacity-building intervention. A mapping exercise is an excellent way to bring all stakeholders to a common understanding of the scope and focus of a capacity-building intervention, the performance outcomes expected from capacity development, and the role of M&E in tracking and influencing change.

For the evaluator, the objective of this stage of M&E planning is to create a conceptual framework that links capacity-related inputs, processes, outputs, and outcomes to the performance of a system, organization, health personnel, or community. The advantage to the evaluator of developing a capacity map is twofold. First, through mapping, the evaluator gains a better understanding of how key decision-makers and stakeholders believe the system, organization, health personnel, or community should be working. Second, mapping enables evaluators to define exactly which capacity variables are to be evaluated over time.

Box 7: The Process of Capacity Mapping
1. Identify the primary level of capacity building
2. Define outcomes for that level
3. Develop a one-dimensional level capacity map
4. Develop a multi-dimensional level capacity map

When to Map Capacity

As noted above, an M&E plan should be formulated during the design and planning of an intervention. If program planning and M&E design are conducted simultaneously, capacity mapping can contribute to the choice of intervention strategies and to the M&E strategy. However, sometimes circumstances do not permit this ideal type of coordination on program and M&E design. Frequently, evaluation designers are brought in well after program planners have defined the intervention strategy and specific activities. In this case, evaluators must still work with program planners to understand the intervention strategy and the role of evaluating it. Capacity maps should reflect and/or inform this overall strategy. If a conceptual framework already

exists for the intervention, designers should review the assumptions and the relationships among variables depicted in this diagram to understand the expected role of capacity building. If an overall conceptual framework for the intervention does not already exist, it is essential to construct one to support capacity mapping.

How to Map Capacity

The process of developing a capacity map is outlined in Box 7. During this process, planners, evaluators, and key stakeholders might like to use the series of questions in Box 8 to guide discussion. At a minimum, they should consider the following two questions:

1. At which level is capacity required to ensure the stated performance objectives?
In other words, what level is likely to be the main focus of capacity-building efforts? The generic capacity map (Part 2, Figure 1) defines four different levels where capacity is needed in the health sector: the system, organization, health personnel, and individual/community levels. Careful definition of the performance objectives in Step 2, and a clear understanding of the capacity-building strategy, should help evaluators answer these questions. For example, if performance gaps are found in a specific health facility, then it is likely that capacity-building interventions will seek to improve capacity outcomes at the organization or individual level. The first map would focus on one of those levels.

2. What capacity outcomes are expected at that level to improve performance?
Once the level has been specified, designers should identify aspects of capacity that might influence the specific performance objective at that level and express them as capacity outcomes. Morgan (1997) defines capacity outcomes as the "product of new learning and abilities that eventually become part of the organization or system, and support new levels of performance." Designers can refer to guides on organizational capacity development, for example, to help guide the choice of capacity outcomes. However, capacity outcomes should always be tailored to the performance objectives or standards of the particular intervention or organization under study.

At the intervention design phase, it is worth casting a wide net to consider all possible aspects of capacity that might relate to desired performance. Brainstorming on capacity can then lead stakeholders or participants in the mapping process to begin to prioritize areas for capacity-building intervention. Where the parameters of an intervention are already set, or where a structure for brainstorming is needed, designers might choose two or three different areas of capacity development, express them as capacity outcomes, and then map them. Although capacity building often tries to address multiple capacity gaps simultaneously, for measurement purposes it is advisable to choose a limited number of key capacity outcomes for capacity mapping.

For example, in Maps 1-3 below, the performance objective for the (fictitious) Family Health Organization is defined as "consistent delivery of a package of essential, good-quality family planning services to a defined population." Performance variables might include coverage, quality, and consistency, which would be expressed as indicators. The three key capacity outcomes for this specific performance objective are defined as financial self-reliance, quality assurance practices institutionalized, and health services able to respond to client needs. Although many other aspects of capacity might influence coverage, quality, and consistency in the delivery of family planning services, this organization has chosen to concentrate on these three areas.



Box 8: Questions to Guide Discussion for Capacity Mapping

Describing the link between capacity and performance


· What elements of capacity are needed to ensure performance?
· Where are the capacity gaps?
· What might be the cause of poor capacity?
· What are two or three key aspects of capacity required for performance?
· At what level is capacity required?

Identifying capacity variables


· What essential inputs and processes contribute to capacity at that level?

Describing the process of capacity development


· Could capacity develop in stages?
· How would one define possible stages of capacity?
· What benchmarks might be used to mark these stages?
· How would stages of capacity development manifest themselves in terms of improved
performance?

Single-Level Capacity Mapping

Once the two questions about levels and outcomes have been answered, it is necessary to draw up a table or matrix that maps each capacity outcome at a single level. The process involves identifying the variables that influence the specific capacity outcome at that level. Capacity variables include inputs, such as physical and human capital (defined by Morgan (1997) as "knowledge, infrastructure and skills"), and processes representing changes in human behavior (such as the growth of new skills, attitudes, values, and relationships) that are reflected in the functions performed by individuals or groups. These inputs and processes come together to produce improved capacity outputs and outcomes. It is often expected, in the course of capacity development, that individuals or groups add to or build on their existing assets to make positive changes with respect to managing those assets. A capacity map tries to capture these critical assets and behaviors and link them to capacity outcomes and new levels of performance.

Once completed, the map illustrates conceptually the pathway to achieving desired performance results. It includes specific variables that may be targeted for intervention and then monitored over the course of the intervention to understand changes in inputs and processes and any resulting improvements in capacity outcomes. Evaluators are reminded that the variables depicted in the capacity map are those that relate to the inherent or desired capacity of the system, organization, health personnel, or individual/community targeted for intervention. They do not represent elements of the capacity-building intervention itself.
Box 9: Guidance on Capacity Mapping

· Capacity mapping should refer to the logic of the overall program, project, or intervention. Horton et al. describe this approach as "referring to a theory of action" that binds interested parties into a single vision (Horton, 2001). Whether mapping capacity during intervention design or in the context of an already defined intervention strategy, it is advisable to refer to existing data on the intervention area, including needs assessments, capacity assessments, etc.

· When mapping capacity it may be helpful to refer to the conceptual framework in Part 2 for a
general review of the role capacity plays in improving performance in the health sector and exam-
ples of capacity variables.

· Be realistic about your expectations of the role of capacity. There is a tendency to consider
every aspect of resources and behavior in an individual, organization, or system as a capacity vari-
able, and to risk measuring too much.

· Look beyond individual capacity and training solutions to identify capacity variables. For exam-
ple, during discussions on the capacity framework with SAIDIA, a Kenyan NGO (nongovernmental
organization) that provides health services and community development opportunities, staff at first
claimed that training health workers and community members was their only work in capacity build-
ing. Yet, with further discussion, participants illustrated a wide range of capacity-building activities
at all levels, including their work in coordination and collaboration with the public sector, and court-
ing relations with donors that fund the NGO.

· Map capacity with a wide range of stakeholders to inspire a sense of ownership of capacity
building and appreciation of the use of evaluation in programming. Since capacity-building M&E
delves into many internal characteristics and processes found within systems, organizations, and
communities, it requires considerable investment on the part of the members of these groups to
achieve success. The quality of information obtained from evaluation, therefore, is directly af-
fected by the extent to which participants develop a feeling of ownership of the M&E activity and
value the data being collected.

To build such a capacity map, planners and The following three diagrams (Maps 1, 2, and
evaluators can use a facilitated discussion 3) provide examples of capacity maps that
among stakeholders as well as tap existing define in a very general sense some possible
data from needs assessments, capacity diag- inputs, processes, and outputs related to the
noses and prior monitoring. Evaluators might three particular organizational capacity out-
also draw on the experience of system and comes for the hypothetical Family Health
organizational theory, theories of adult learn- Organization: financial self-reliance, quality
ing, and community development to hypothe- assurance practices institutionalized, and
size the most likely causes of poor perform- health services able to respond to client needs
ance. Box 9 provides some general guidance and demands.
for capacity mapping.



Multi-Level Capacity Mapping

The three single-dimension capacity maps provide a list of possible variables that influence capacity outcomes at one level. However, it is equally important to consider the connections among levels where capacity building might take place and their role with respect to realizing capacity outcomes and performance objectives. Although performance may be faltering at the facility level, the strategy used to improve performance may require additional capacity improvements at both the health personnel and system levels. In this case, designers may choose to construct a capacity map that includes several levels and that will provide even greater detail on possible variables that contribute to capacity outcomes. Thus, once the single-level map is completed, a second map is developed that includes more than one dimension to illustrate the interdependence among different levels of capacity and to determine which factors at other levels might influence capacity outcomes at the focus level. The two types of maps (single-level and multiple-level) will be used to identify the variables to be assessed as part of the M&E plan.

In Map 4, we have taken the same basic matrix but added a second axis to account for the four possible levels of capacity. This example focuses on the community level, but the map depicts variables at the four different levels that might influence the specific community-level outcome.7 As noted in Map 4, the overall performance goal is to "improve demand for immunization services at the community level," expressed as immunization service utilization and coverage. The capacity of Community Health Workers (CHW) to deliver IEC services was chosen as the capacity outcome for mapping. In this case, the designers began by listing a large number of possible capacity variables and then narrowed them down to the key variables to be monitored over the course of the intervention. Shaded areas represent an explicit decision not to monitor an indicator in that category.

Dealing with Context

When assessing the effectiveness of capacity-building interventions, it is also critical to understand the environmental or contextual factors that influence capacity and performance. Horton and colleagues (2000) describe context as "formal and informal rules of the game and how they are used." As noted in Part 2, context can relate to the administrative, legal, political, socio-cultural, economic, and technical forces that shape capacity and performance. Clearly, many of these forces are well beyond the reach of a typical capacity-building intervention. Nevertheless, it is advisable for program managers to track environmental changes periodically. Organizational theory describes a successful (and sustainable) organization as one that understands its environment and is able to adapt to environmental changes to ensure its survival. Thus, tracking changes in the operational context informs strategy for capacity development, even if planners or managers feel there is little they can do to change it. The publication Enhancing Organizational Performance, published by the International Development Research Centre (IDRC), provides a useful list of questions related to environmental influences on organizational capacity. These questions are reproduced below in Box 10. In each map found in this guide there is an additional box at the bottom where key environmental variables are recorded.

7 This matrix is adapted from an exercise completed by participants at a Workshop on Sustainability and Capacity Building hosted by PLAN International in May 2001 in Dakar, Senegal.

Map 1: Organizational Capacity Map - Single Level
Capacity outcome: Financial self-sufficiency

Intervention
Performance objective: Consistent delivery of a package of family planning services to a defined population (coverage, quality, and consistency).
Capacity-building objective: Improve financial self-reliance of health facilities in District One.
Strategies and activities: Improve leadership and financial planning skills of district managers; introduce new procedures for strategic planning; develop links between health facilities and communities leading to joint planning and management; develop skills in grant application writing and reporting to funders.

Inputs: Leadership; finances; infrastructure; human resources; finance policy; organizational culture.
Processes: Strategic and operational planning; financial management; resource mobilization; human resource management and development; research, monitoring and evaluation; coordination with other internal units; creation and maintenance of linkages with external groups (specifically, funders); advocacy; managing quality of care; community mobilization.
Outputs: Strategic and operational plans developed and implemented; staff trained; functioning financial management system; external linkages established (to donors, partners, individuals, community).
Capacity outcome: Financial self-reliance (ability to generate resources and maintain a healthy funding base).
Performance objective: Consistent delivery of an essential package of good-quality family planning services to a defined population (coverage, quality, and consistency).

Context or operational environment: National policy on fee-for-service; national financial management procedures.



Map 2: Organizational Capacity Map - Single Level
Capacity outcome: Quality assurance practices institutionalized

Intervention
Performance objective: Consistent delivery of a package of family planning services to a defined population (coverage, quality, and consistency).
Capacity-building objective: Improve quality assurance practices in health facilities in District One.
Strategies and activities: Improve leadership of facility managers and supervisors; introduce norms and procedures; clarify job descriptions and expectations; improve links to the supplies and logistics unit.

Inputs: Leadership; financial resources; infrastructure; human resources; technology; organizational culture.
Processes: Operational planning; human resource management and development; incentive practices; training and supervision; research, monitoring and evaluation; logistics/supplies management; creation and maintenance of linkages with other organizations (specifically, managers and suppliers).
Outputs: Operational plans developed and implemented; staff, managers, and supervisors trained; quality assurance standards clearly stated and reference material available; staff expectations clear to them; monitoring reports on quality, utilization, and client satisfaction; functional relationships between facilities and suppliers.
Capacity outcome: Quality assurance practices institutionalized.
Performance objective: Consistent delivery of an essential package of good-quality family planning services to a defined population (coverage, quality, and consistency).

Context or operational environment: Published norms and standards for care; national health information system use of data to assess quality; central stores policies and procedures.
Map 3: Organizational Capacity Map - Single Level
Capacity outcome: Health services able to respond to client needs and demands

Intervention
Performance objective: Consistent delivery of a package of family planning services to a defined population (coverage, quality, and consistency).
Capacity-building objective: Improve the ability of the health services to respond to client needs in District One.
Strategies and activities: Introduce incentives for quality-of-care practices; improve client-provider communication skills; research and design optimal mechanisms for communication and interaction between communities and health facilities.

Inputs: Leadership; finances; infrastructure; human resources; history of the health service organization; organizational culture.
Processes: Human resource management and orientation; organizational incentive practices; M&E and research; coordination and communication with referral units; creation and maintenance of linkages with community groups; IEC; community mobilization.
Outputs: Staff trained in technical and communication skills; functional community outreach and communication mechanisms; feedback from routine client satisfaction and community monitoring; quality of referral service monitored.
Capacity outcome: Health services able to respond to client needs and demands.
Performance objective: Consistent delivery of an essential package of good-quality family planning services to a defined population (coverage, quality, and consistency).

Context or operational environment: National policy on consumer roles and rights; published norms and standards of care.



Map 4: Community Capacity Map on Multiple Levels
Capacity outcome: Effective delivery of IEC services

Intervention
Performance objective: Increase demand for childhood immunization in Sierra Leone.
Capacity-building objective: Improve capacity of CHWs working with a local NGO to provide IEC on childhood immunization.
Strategies and activities: Develop curricula for training of trainers and training of CHWs; conduct training of trainers and supervision; health personnel support CHWs from health centers; the NGO supervises and supports health center personnel working in service delivery.

System level
Capacity inputs: National policy on immunization and community-based workers. (Other categories at this level were not selected for monitoring.)

Organizational level (local NGO)
Capacity inputs: Health center personnel (quantity/basic training); community health workers (quantity).
Processes: Designing and planning a training program; supervision and mentoring of CHWs.
Outputs: Training plan developed; training materials developed.
Capacity outcomes: Successful organization and execution of training of trainers; ability to recognize training needs and meet them.

Personnel level
Capacity inputs: Curricula for training of trainers and for community health workers.
Processes: Participation in training of trainers on IEC; participation in CHW training on IEC.
Outputs: Trainers meet standards following the course; CHWs meet standards following the course; IEC sessions provided.
Capacity outcomes: Capacity of CHWs to deliver IEC on immunization; CHWs skilled and motivated to provide services.
Performance: Effective delivery of IEC services (quality of IEC sessions).

Community level
Capacity inputs: Exposure to the immunization program.
Processes: Community meetings with CHWs.
Outputs: Level of participation in health care learning activities; recognition of the need for immunization.
Capacity outcomes: Community knowledge of immunization benefits and side effects; caregivers value immunization.
Performance: Improved demand for immunization in communities served by CHWs (coverage).

Context or operational environment: National economic growth; national health expenditures on immunization; donor support for immunization.
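A multiple-level map such as Map 4 adds a second axis for level, which can be represented as a nested structure keyed first by level and then by variable type. The Python sketch below carries over a few cells from Map 4 for illustration; empty lists stand in for the shaded cells that the designers explicitly chose not to monitor. It is one possible representation, not part of the Guide's method.

# Illustrative multiple-level capacity map, keyed by level then by variable type,
# with a few cells carried over from Map 4. Empty lists correspond to the
# shaded cells, i.e., categories the designers chose not to monitor.
multi_level_map = {
    "system": {
        "inputs": ["national policy on immunization and community-based workers"],
        "processes": [], "outputs": [], "outcomes": [],
    },
    "organization (local NGO)": {
        "inputs": ["health center personnel", "community health workers"],
        "processes": ["designing and planning a training program",
                      "supervision and mentoring of CHWs"],
        "outputs": ["training plan developed", "training materials developed"],
        "outcomes": ["ability to recognize training needs and meet them"],
    },
    "personnel": {
        "inputs": ["curricula for training of trainers and CHWs"],
        "processes": ["participation in training of trainers on IEC"],
        "outputs": ["trainers and CHWs meet standards following the course"],
        "outcomes": ["CHWs skilled and motivated to provide IEC services"],
    },
    "community": {
        "inputs": ["exposure to the immunization program"],
        "processes": ["community meetings with CHWs"],
        "outputs": ["level of participation in health care learning activities"],
        "outcomes": ["caregivers value immunization"],
    },
}

# List the variables that will actually be tracked (non-empty cells only).
for level, cells in multi_level_map.items():
    tracked = {cat: items for cat, items in cells.items() if items}
    print(level, "->", list(tracked))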

Interpreting and Using Capacity Maps

The examples of capacity maps above illustrate how the different factors of capacity work together to drive or influence performance. They enable designers to view these elements in a more systematic way that promotes common understanding and evaluation. When capacity mapping is conducted after an intervention has been planned, it can be used to help evaluators understand the intentions of managers in terms of their strategy for capacity development. During mapping, managers are encouraged to pinpoint and define clearly the areas of potential change that will serve as indicators of progress in capacity development. Used after the design phase, the mapping exercise can reinforce existing capacity-development strategies, thereby increasing their specificity. Sometimes mapping can also prompt planners to reexamine strategic choices and change their tactics. Indeed, this use of capacity mapping for strategic planning, and the linking of M&E with program strategy, should be encouraged throughout the course of the capacity development/performance improvement intervention.

Each type of mapping (single-level or multiple-level) can be done in two or three iterations. The first iteration of a map should attempt to provide a full list of capacity variables that may influence capacity outcomes and performance. It should present capacity variables in a general way. Planners and evaluators can then discuss these variables and narrow them down to priority areas of intervention or measurement, and describe them more specifically. The second or third iteration of a map should be more precise in depicting the variables to be monitored over the course of the intervention. Map 5a provides an example of the first iteration of multiple-level capacity mapping. It contains a wide range of general categories. Map 5b illustrates the second iteration, in which variables are specified in greater detail.

Through mapping, evaluators can identify and organize the key questions to be addressed regarding expected changes in the quantity, quality, cost, and other key aspects of capacity that require monitoring over time. As planners and evaluators interpret the map, they will narrow down the focus of monitoring and evaluation activities. In Step 4, below, evaluators define indicators that measure these variables and build them into a monitoring and evaluation plan.
Monitoring and Evaluating Capacity-Building Interventions 41
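Where a mapping team keeps its map in electronic form, moving from the first to the second iteration amounts to replacing broad category labels with more specific, measurable variables. The fragment below is a minimal, hypothetical sketch of that refinement for the organization level; the variable names are illustrative only, loosely echoing Maps 5a and 5b.

    # First iteration: broad capacity categories for the organization level.
    map_iteration_1 = {
        ("organization", "inputs"): ["leadership", "human resources", "supplies"],
        ("organization", "processes"): ["supervision", "incentives", "referral"],
    }

    # Second iteration: the same cells narrowed to variables that can actually
    # be monitored over the course of the intervention.
    map_iteration_2 = {
        ("organization", "inputs"): [
            "leadership within management teams trained in family planning",
            "quantity and skills of existing staff",
            "quantity and reliability of family planning supplies",
        ],
        ("organization", "processes"): [
            "supervisor behavior (content, communication, modeling)",
            "incentives for supervisors and providers to perform adequately",
            "referral system (designating and following up referrals)",
        ],
    }

    # Reviewing the two iterations side by side makes the narrowing explicit.
    for cell, variables in map_iteration_2.items():
        print(cell, "->", "; ".join(variables))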


Box 10: Questions to Guide Discussion on the External Environment and Its
Influence on Organizational Capacity

Administrative
· Is your organization influenced by the rules of other organizations, institutions, and groups to
which it is related or might be expected to be related?
· Is your organization influenced by expectations of consumers, policymakers, suppliers, com-
petitors, and other organizations in its external environment?
· Are your organization’s objectives and activities influenced by governments, donors, and
other organizations?
· Is your organization influenced by important sector rules and regulations?
· Do administrative norms/values in your country support or hinder the work your organization
intends to carry out?
Legal
· Do the laws of the country support the role played by your organization?
· Does the legal framework support the organization’s autonomy?
· Is the legal framework clear?
· Is the legal framework consistent with current practice?
· Is the legal regulatory context conducive to your organization’s work?
· Does your organization monitor changes in the legal context that could affect the position of
the organization?
Political environment issues
· Do the political and ideological trends of the government support the kind of work the or-
ganization does?
· Does the government system facilitate collaborative arrangements?
· Does the organization play a role in national or sector development?
· Does the organization have access to government funding?
· Does the organization have access to international funding?
· Does the organization have access to the government’s knowledge and publications?
· Do government policies and programs support the organization?
Sociocultural environment
· Is equity in the workplace a social value?
· Does the organization account for the effect of culture on program complexity?
· Do values found in the sociocultural environment support the work of the organization?
· Does the organization have access to a pool of capable human resources to recruit staff?
· Does the organization analyze and link demographic trends to its work?
Economic environment
· Does the government’s economic policy support the organization’s ability to acquire technolo-
gies and financial resources?
· Is money available to do the organization’s work?
· Do donors support the organization?
Technological environment
· Is adequate physical infrastructure (telecommunication, transport) in place to support the
organization’s work?
· Is the technology needed for your work supported by the overall level of national technology
development?

· Does the government system facilitate the organization’s process for acquiring needed tech-
nology?
· Is the level of human resource development in your organization adequate to support new
technology?
Stakeholder environment
· Is the community involved in the organization?
· Are partners involved in the organization?
· Do governments value the organization’s products and services?
· Do governments request or use the organization’s products and services?
· Do similar organizations compete or cooperate with your organization?
· Do donors influence the organization?
· Do funders support the organization?

The questions above are adapted from Enhancing Organizational Performance (Lusthaus et al.,
1999). While they are focused on the organization level, many of them can be adapted for any
level of the health system.



STEP 4 Identify Capacity Indicators

The next step in developing an M&E plan for capacity building is to define indicators for the elements of capacity identified during capacity mapping. Indicators are specific variables that describe a given situation and can be used to measure inputs, processes, outputs, and outcomes at any level (system, organization, health personnel, or individual/community). They can be constructed from qualitative or quantitative data according to the type of variable one is interested in tracking. For example, the indicator "number of personnel per health facility trained in control of sexually transmitted infections (STI)" tracks the inputs that influence capacity of a public health system. Alternatively, measures of provider knowledge of appropriate treatment for different sexually transmitted infections and the availability of key STI pharmaceuticals at each facility are outcome indicators signaling capacity in service delivery. All three of these indicators could be tracked to determine whether capacity exists to meet system-level performance objectives, such as "quality of STI care."

What Are Capacity Indicators?
Capacity indicators generally project an aspiration or a sought-after state or ability. They capture the current "stock of resources available" for various uses or an individual or organizational behavior that puts those resources into action (Moore et al., 2001). Defining or choosing indicators for M&E encourages planners and evaluators to be precise about the inputs and processes that influence capacity and performance and what types of changes might result from capacity-building interventions. Well-defined indicators provide a reference framework for guiding all stakeholders toward the same goals. Indicators also allow for standardized measurement of change during implementation, which enables evaluators to understand the process of capacity development over time and its relationship to the capacity-building intervention.

There is no agreed-upon menu of "standard" indicators of capacity development. As Morgan (1997) states, "It is difficult to find useful examples of indicators that have been used effectively to measure or assess capacity building." Examples of common health sector-related indicators are found in the MEASURE Evaluation Compendium of Indicators for Evaluating Reproductive Health Programs (Bertrand and Escudero, 2002) and other indicator handbooks. However, no single indicator manual focuses exclusively on capacity building or differentiates between capacity and performance measures. The obvious consequence is the need to work carefully and systematically during M&E planning to develop indicators that accurately reflect capacity development in each particular context. Some capacity indicators can be drawn from experience in human performance improvement, organizational assessment and theory, and other disciplines. Others will require testing through practice. When the PRIME project developed an index of capacity in training organizations, it built on years of experience working in this area and the collective understanding of what it takes to provide good-quality training on a sustainable basis (Pyle and LaFond, 2001).

Even with the benefit of generic indicator reference material, most indicators used in capacity-building M&E require some molding or adaptation to a particular situation. For example, if evaluators would like to study the progressive stages of capacity development in a specific organization, they might choose indicators based on defined scales of organizational development, as in the Management and Organizational Sustainability Tool (MOST) developed by Management Sciences for Health (MSH, 1996). However, they should also adapt these indicators to a particular organization's baseline assessment of capacity and its particular product or service. Expectations for improved performance and the timeframe of a specific capacity-building intervention also matter. An organization pursuing capacity improvement in reproductive health service delivery would choose different measures of change from one seeking capacity improvement in networking and partnering. Thus, at the outset of M&E planning, one should begin defining indicators based on the capacity variables identified in mapping rather than selecting indicators from a generic list. Map 6 illustrates how indicators can be added for each capacity variable, using the format from Map 3. The discussion on indicators below begins with general guidance on indicator design, provides examples of capacity indicators, and concludes with lessons learned from a variety of capacity-development experiences (in health and other sectors).

Working with Capacity Indicators
By now most program managers and evaluators have at least heard about what makes a good indicator. In general, all indicators should share the following traits:
· Validity: Validity refers to whether the indicator is measuring what it is supposed to measure. Indicators should have a close connection with the intervention.
· Reliability: Reliability refers to the degree of random measurement error in an indicator. Error may result from sampling or non-sampling sources, and from whether the response is inherently objective or subjective.
· Well-defined: Indicator definitions should use clear and precise terms so everyone involved can understand what is being measured.
· Sensitivity: A sound indicator is sensitive to the changes in program elements being assessed.

Evaluators also need to take into account the availability of data for "operationalizing" indicators and the potential costs of gathering data, in terms of financial resources and time.
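One practical way to keep indicator definitions precise, and to record the traits listed above, is to hold each indicator in a small structured record alongside its level, data source, and collection frequency. The sketch below is a hypothetical illustration, not a format prescribed by this guide; the field names and the notes are assumptions made for the example.

    from dataclasses import dataclass

    @dataclass
    class IndicatorDefinition:
        """One entry in a hypothetical indicator reference sheet."""
        name: str            # short label used in reports
        definition: str      # precise wording of what is measured
        level: str           # system, organization, personnel, or community
        variable_type: str   # input, process, output, or outcome
        data_source: str     # where the data will come from
        frequency: str       # how often the indicator is measured
        notes: str = ""      # e.g., known validity or reliability concerns

    # Example entry based on the STI training indicator discussed in Step 4.
    sti_training = IndicatorDefinition(
        name="Trained STI staff per facility",
        definition="Number of personnel per health facility trained in "
                   "control of sexually transmitted infections (STI)",
        level="system",
        variable_type="input",
        data_source="Training records and facility staffing lists",
        frequency="annual",
        notes="Sensitive to staff turnover; verify against personnel records.",
    )

    print(f"{sti_training.name}: {sti_training.definition} "
          f"[{sti_training.level}, {sti_training.variable_type}]")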

Table 4 provides examples of health-sector capacity indicators by level (system, organization, health personnel, and individual/community) and measurement variable (input, process, output, and outcome), taken from various sources (Morgan, 1997; Horton et al., 2000; Bertrand and Escudero, 2002; Brown, LaFond, and Macintyre, 2001). It suggests wide variation in the indicators currently used to measure capacity and the need for both quantitative and qualitative data sources. The table is not intended to represent relationships among these specific indicators. Box 11 provides examples of capacity indicators used in non-health sector programs. Table 5 gives examples of performance indicators at each level for reference.

Lessons for Indicator Development
The following lessons on indicator development are drawn from field experience in capacity measurement in health and other sectors (Morgan, 1997; Horton et al., 2000; Fort, 1999; Luoma, 2000; Ampomah, 2000; Catotti, 1999; Pyle and LaFond, 2001).

Lesson 1: Indicators should reflect an understanding of the change strategy for capacity development.

The process of choosing capacity indicators should feed into the overall change strategy designed for building capacity and improving performance. Indicators should be developed alongside capacity mapping while designing a capacity-building intervention. Evaluators also might seek to understand how information is currently used in the organization or system to ensure that indicators become incentives for change and not barriers.


Box 11: Examples of Capacity Indicators from Non-health Sector Capacity-
Building Interventions

Example 1
1. Capacity indicator related to decentralized payment functions administered by local officials,
district assembly members, and financial and political employees:
Ability of the system to transfer funds between authority levels (for example, within 45 days
of the end of the quarter) and/or produce audited statements within six months of the end of
the fiscal year.

2. Capacity indicator related to community water management committee’s role in water pump
maintenance:
A functioning Pump Management Committee that meets at least once a month and keeps the
pump functioning 90 percent of the time in normal circumstances.

3. Capacity indicator related to coordination of information among six ministries working on soil
erosion:
Twenty-five percent increase in the number of projects that require contributions from two or
more departments.

4. Capacity indicator related to government department to carry out joint surveys of client
farmers in delta area of cotton region:
Acceptance of survey methods as an effective tool by senior research officers and their incor-
poration into the work program of the agencies.

Source: Morgan, 1997

Example 2
Indicators related to motivation
Motivation to implement the strategic approach
Motivation to undertake strategic planning
Interest in improving the management information system
Interest in designing and managing competitive projects
Indicators related to capacity
Knowledge of the strategic approach
Skills to undertake strategic planning
Knowledge about designing and managing competitive projects
Knowledge about the foundations of an information management system
Indicators related to context or environment
Degree to which tasks demand conceptual and methodological creativity and innovation
Positive appreciation of performance in institutional evaluations
Degree of autonomy to undertake work
Contribution to improvement of the management information system

Source: Horton et al, 2000

Lesson 2: Capacity indicators should capture organizational and behavioral change as well as material and technical change.

The most challenging demand of capacity measurement is constructing meaningful measures of human and organizational behavior change. There is a tendency, particularly in the health sector, to advance technical explanations for what are just as likely to be organizational or human behavioral problems. For instance, it is often presumed that training health providers alone will address performance gaps in service delivery, when the root causes of poor performance can range from unreliable sources of supplies to low health worker motivation. Capacity developers and evaluators need to have a sense of how people and organizations change, what brings about lasting change, and why change in certain values and practices makes a difference. Capacity indicators should capture the essence of these changes in human and organizational behavior.

Lesson 3: In planning capacity-building M&E, it is important to monitor not only capacity but also key aspects of performance and the environment.

Improved performance serves as the main reference for mapping capacity and is the goal of capacity building. Evaluators should review changes in performance alongside capacity to examine the relationships among different capacity and performance variables. In addition, evaluators should track environmental changes. Environmental factors typically help to explain changes (or lack of change) in capacity and performance. Indicators that monitor external conditions serve as a warning to organizations that capacity and performance may be in jeopardy.

Lesson 4: Indicators should encourage ownership and appreciation of the capacity-building and M&E process.

Indicators should be designed to promote ownership of the capacity-building process. Evaluators should work with capacity-building stakeholders to define indicators that reflect locally determined and accepted notions of change. Keeping indicator definitions simple and relevant to local needs will encourage widespread use of M&E for capacity development. Designing indicators to serve external (often donor) needs rather than local decision making can adversely influence ownership of capacity development (Morgan, 1997); this type of approach can "diminish the contribution that capacity indicators can make to project effectiveness." Evaluators are advised to balance the desire for more information for accountability purposes with the value of using information to motivate positive behavior changes in individuals and organizations.

Evaluators should also keep in mind that measuring capacity can be a sensitive issue. Organizations, and people, do not relish having their "weaknesses" documented. They feel even less enthusiastic about having their weaknesses broadcast to their superiors, partners, and funders. The quality of data gathered for constructing capacity indicators could be distorted and/or obstructed unless the purpose of monitoring and evaluation, including the usefulness of certain indicators, is clear to all stakeholders. Indicators should be as non-threatening as possible.


Map 5a: Mapping Capacity (First Iteration)

Intervention
Performance objective: Consistent delivery of a package of family planning services to a defined population (coverage, quality, and consistency).
Capacity-building objective: Improve ability of health services to respond to client needs and demands in health facilities in District One.
Strategies and activities: Introduce incentives for quality-of-care practices; improve client-provider communication skills; research and design optimal mechanisms for communication and interaction between communities and health facilities.

System
  Inputs: Civil service administration practices; supplies & delivery of essential goods

Organization
  Inputs: Leadership; human resources; supplies
  Processes: Supervisors; incentives; feedback; referral; supplies management
  Outputs: Quality of referral system
  Outcomes: Health services able to respond to client needs and demands
  Performance: Consistent delivery of essential package of good-quality family planning services to a defined population (coverage, quality, and consistency)

Personnel
  Inputs: Number of staff
  Processes: Outreach; learning; provider-client interaction

Community
  Inputs: Experience with family planning; local health organizations; leadership
  Processes: Links to community
  Outputs: Number of contacts
  Outcomes: Outcome of contacts

Context or operational environment: National policy on consumer roles and rights; published norms and standards of care
Map 5b: Mapping Capacity (Second Iteration)
Capacity outcome: Health services able to respond to client needs and demands

Intervention
Performance objective: Consistent delivery of a package of family planning services to a defined population (coverage, quality, and consistency).
Capacity-building objective: Improve ability of health services to respond to client needs and demands in health facilities in District One.
Strategies and activities: Introduce incentives for quality-of-care practices; improve client-provider communication skills; research and design optimal mechanisms for communication and interaction between communities and health facilities.

System
  Inputs: Civil service administration practices that support counseling and provision of family planning; supplies & delivery of essential goods (family planning supplies)

Organization
  Inputs: Leadership within management teams with knowledge and training in family planning; human resources (quantity & quality of existing training/skills); supplies of family planning and IEC materials (quantity & reliability)
  Processes: Behavior of supervisors (content, communication & modeling of desired behavior among health workers); incentives for supervisors & providers to perform adequately; referral system (designating, enabling & following up referrals)
  Outputs: Number of commodity reports; worker feedback on supervision; client feedback on services; supplies management checklist used; frequency of needed referrals
  Outcomes: Health services able to respond to client needs and demands (expressed as utilization, client satisfaction, and supplies availability/stockouts)
  Performance: Consistent delivery of essential package of good-quality family planning services to a defined population (coverage, quality, and consistency)

Personnel
  Inputs: Number of staff in each professional category related to family planning
  Processes: Community outreach activity (frequency and quality); availability & use of learning opportunities for improving communication on family planning; provider-client interaction (quality index)
  Outputs: Number of outreach visits
  Outcomes: Health workers motivated to address client needs; health workers able to conduct client interviews

Community
  Inputs: Experience with family planning; local organizations/units focused on health; leadership
  Processes: Mechanisms for linking health services & community groups (frequency & quality)
  Outputs: Number of contacts with health facilities
  Outcomes: Outcome of contacts in terms of client satisfaction

Context or operational environment: National policy on consumer roles and rights; published norms and standards of care


Map 6: Community Capacity Map on Multiple Levels, with Indicators (in parentheses)

Intervention
Performance objective: To increase demand for childhood immunization in Sierra Leone.
Capacity-building objective: Work with a local NGO to improve Community Health Workers' (CHW) capacity to provide Information, Education, & Communication (IEC) on childhood immunization.
Strategies and activities: Develop curricula for training of trainers and training of CHWs; conduct training of trainers and supervision; health personnel support CHWs from health centers; NGO supervises and supports health center personnel working in service delivery.

System
  Capacity inputs: National policy on immunization & CHWs (policy exists & is favorable)

Organization (Local NGO)
  Capacity inputs: Health center personnel (quantity/basic training); community health workers (quantity/basic training)
  Capacity processes: Designing & planning a training program (planning mechanisms exist & planning skills demonstrated)
  Capacity outputs: Training plan developed (plan exists); training materials developed (quantity/quality of materials)
  Capacity outcomes: Successful organization & execution of training of trainers (TOT completed; trainees' knowledge improves; trainees satisfied); ability to recognize training needs and meet them (assessment process leads to training)

Personnel
  Capacity inputs: Curricula for training of trainers and for community health workers (curriculum exists)
  Capacity processes: Participation in training of trainers; participation in CHW training on IEC (% of personnel or CHWs completing training)
  Capacity outputs: Trainers meet standards following course (post-test scores); CHWs meet standards following course (post-test scores); IEC sessions provided (number/frequency of IEC sessions)
  Capacity outcomes: Capacity of CHWs to deliver IEC on immunization; CHWs motivated to provide services (attitudes of CHWs toward IEC)
  Performance: Effective delivery of IEC services (quality of IEC sessions)

Community
  Capacity inputs: Exposure to immunization program (past experience with childhood immunization)
  Capacity processes: Perceptions of CHWs (community relationship with CHWs and acceptability of their role)
  Capacity outcomes: Community knowledge of immunization benefits and side effects (index of immunization program message recall)
  Performance: Improved demand for immunization services in communities served by CHWs (immunization service utilization & coverage)

Context or operational environment: National economic growth (GDP); national health expenditures on immunization (% of health budget spent on immunization; total expenditure on immunization); donor support for immunization (% of immunization expenditure from external sources)
Lesson 5: The results of indicator-based capacity-building M&E should be interpreted wisely.

There are documented challenges to using indicators to monitor and evaluate capacity building. Evaluators can manage each challenge with careful planning of M&E. Some of these challenges are detailed below.

Capacity development is context specific. It reflects qualitative (as well as quantitative) changes in resource availability and behavior. Given the wide range of possible scenarios and capacity/performance objectives, it is often not possible to establish objective standards that would allow local or regional comparisons in capacity across similar types of entities. Internal benchmarks can be set, but they may not be valid for other entities or contexts. It follows that aggregation of indicators on a district, regional, or national scale is not likely to result in useful information for M&E.

Selection of capacity indicators is often highly subjective. To encourage ownership and relevance, evaluators often rely on perceptions of capacity and capacity change among participants in the capacity development process as the basis for measuring progress. Thus, there is a need to balance these subjective measures with a range of objective indicators and data-gathering strategies.

Capacity is influenced by many different variables. Hence, there is a tendency to try to monitor a number of indicators at the same time. We encourage the use of multiple indicators for each level within the capacity map because they provide greater insights into the state of capacity and can serve to validate findings. Use of multiple indicators is often recommended to explain what can be an imprecise situation or occurrence. At the same time, however, evaluators should prioritize indicators based on program objectives and develop a manageable set to monitor over time.

Evaluators are experimenting with indices or complex indicators that combine a short list of essential indicators (sometimes weighted by strength of influence) into a single measure of capacity. Of the few examples in the health sector, the PRIME project used a single index to assess capacity dimensions of organizations that conduct training in reproductive health (Fort, 1999; Ampomah, 2000; Catotti, 1999). This index also takes into account different possible stages of capacity by using a scale from 0 to 4 to assess progress of an organization for each indicator under study. An example of the indicators and scales used in the Training Organizations Index and a presentation of the results of a capacity assessment in El Salvador are found in Annexes A and B. The PRIME Project did not use this index to conduct routine monitoring and evaluation of training organizations; however, it has adapted many of these indicators and the scaling approach for use in its performance monitoring plan (PRIME II, 2001). Other examples that use scales or scoring as part of a capacity index can be found in the Management and Organizational Sustainability Tool (MOST) (MSH, 1996), and tools developed to evaluate the capacity of agricultural research organizations (Horton et al., 2000).8

8 It is important to note that indices can be difficult to interpret if they are presented out of context or to an audience that does not understand how the index is constructed.
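To make the scaling and weighting idea concrete, the sketch below combines a few indicators, each scored on a 0-to-4 scale, into a single weighted index. It is a minimal, hypothetical illustration of the general approach, not the PRIME Training Organizations Index itself; the indicator names, weights, and scores are invented for the example.

    # Hypothetical indicator scores on a 0-4 scale (4 = fully developed),
    # weighted by an assumed strength of influence on capacity.
    scores = {
        "training needs assessment conducted": 3,
        "curricula meet national standards": 2,
        "trainers certified": 4,
        "follow-up of trainees in the field": 1,
    }
    weights = {
        "training needs assessment conducted": 0.3,
        "curricula meet national standards": 0.3,
        "trainers certified": 0.2,
        "follow-up of trainees in the field": 0.2,
    }

    def capacity_index(scores, weights, max_score=4):
        """Weighted index expressed as a percentage of the maximum possible score."""
        total_weight = sum(weights[name] for name in scores)
        weighted_sum = sum(scores[name] * weights[name] for name in scores)
        return 100 * weighted_sum / (max_score * total_weight)

    print(f"Capacity index: {capacity_index(scores, weights):.1f} out of 100")
    # Reporting the individual scores alongside the composite helps audiences
    # see how the index is constructed (see footnote 8).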


Table 4: Examples of Capacity Indicators in Current Use in Health Programs

Health system
  Inputs: · Doctors per population · Ratio of health care spending on primary health care vs. tertiary care · Percent of health budget funded by external sources · Percent of national budget allocated to health
  Process: · Donor coordination committee meets every 6 months · Collaborative "arrangements" exist between social sectors (e.g., meetings between health & agriculture or health & education) · Percent of districts with decentralized budgeting
  Outputs: · Number of multisectoral meetings held · Number of collaborative projects initiated in sectors outside health · Existence of national standards for professional qualifications · Existence of sector-wide strategy
  Outcomes: · Widely distributed sector-wide strategy · Regular auditing of system-wide accounts by independent company · Percent of recurrent costs covered through local resource generation

Organization
  Inputs: · Existence of clear mission statement · Number of trained managers per unit · Percent of district medical officers with public health degree/training · Clearly defined organizational structure · Organizational culture that values and rewards performance
  Process: · Coordination with other organizations evident through internal reporting mechanisms · Number & quality of jointly administered activities with partner organizations · Job descriptions updated regularly to reflect real work requirements & responsibilities · Team planning (frequency and quality) · Supervisors playing mentoring role
  Outputs: · Presence of financial management system that regularly provides income/revenue data & cash flow analysis · Number of commodity tracking reports · Individual work plans are prepared for all staff · Sufficient number of sites functioning as clinical training sites to meet clinic practice needs · Percent of MIS reports complete and on time
  Outcomes: · Supervisors able to guide on-site learning · Ability to adjust services in response to evaluation results or emergencies · Cost-sharing revenue as a proportion of the annual MOH non-wage recurrent budget · Percent of facilities with stock-out of essential commodities in the last 6 months · Regular review of MIS data for routine planning

Health Personnel
  Inputs: · Adequacy of training materials/supplies has been assessed in one or more institutions · Adequate training supplies available in sufficient quantities to support ongoing RH/FP training in one or more institutions · Up-to-date curricula · Percent of training budget from external assistance
  Process: · Number of training sessions to improve human resource management, addressing needs expressed by providers · Managers trained in and using performance evaluation · Percent of courses where training methodology is appropriate for transfer of skills/knowledge · Professional networking (frequency and quality)
  Outputs: · Number of providers trained, by type of training & cadre of provider · Number of staff trained in finance, MIS, strategic planning, financial planning · Number of managers trained, by type of training · Number of monthly staff newsletters produced
  Outcomes: · Percent of trainees (providers) with knowledge in skill area (meet national standard) · Level of staff motivation · Percentage of senior staff with continuing education opportunities

Individual/Community
  Inputs: · Average level of education attained in the district · Mean income level · Proportion of adults whose partner recently died in central hospital · Community leadership (type and quality)
  Process: · Number of health committees who meet regularly and take action · Percent of dispensary budget supported with community-based funding · Level of community cohesiveness · Community experience negotiating with district health office
  Outputs: · Proportion of non-users who desire to use contraception in the future · Level of participation in community health committees · Number of health action plans
  Outcomes: · Community needs presented to district health office on a regular basis · Proportion who know anemia prevention practices · Level of community mobilization and empowerment · Community support for maintaining new well
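As a worked example of one of the outcome indicators in the table, "percent of facilities with stock-out of essential commodities in the last 6 months" can be computed directly from facility records. The sketch below uses invented facility data purely for illustration.

    # Whether each facility reported any stock-out of an essential commodity
    # during the last six months (hypothetical records).
    facility_stockouts = {
        "Facility A": True,
        "Facility B": False,
        "Facility C": True,
        "Facility D": False,
        "Facility E": False,
    }

    def percent_with_stockout(records):
        """Percent of facilities reporting at least one stock-out."""
        if not records:
            return 0.0
        return 100 * sum(records.values()) / len(records)

    print(f"{percent_with_stockout(facility_stockouts):.0f}% of facilities "
          "reported a stock-out in the last 6 months")  # 40% in this example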
Table 5: Examples of Performance Indicators in Current Use in Health Programs
Level
Health system · Average time/distance to the nearest reproductive health facility offering a specific service
· Percent of facilities where percent of clients receive the service that meets the expected standards
· Number/percent of trainees deployed to an appropriate service delivery point and job assignment
· Percent of facilities that experience a stockout at any point during a given time period
· Percent of health facilities providing STI services with adequate drug supply
· Contraceptive prevalence rate (CPR)
· Disability adjusted life years (DALY)
· Disability adjusted life expectancy (DALE)
· System responsiveness to clients
· Index of equality of child survival
· Total health expenditure as a percent of GDP
· Public expenditure on health as a percent of total public expenditure
· Out of pocket expenditure as a percent of total health expenditure
Organization · Percent of mothers examined every 30 minutes during the first two hours after delivery
· Percent of data elements reported accurately in MIS reports
· Family planning continuation rates in catchment population
· Percent of annual revenue generated from diverse sources
· Percent of target population that received DPT 3 immunization
· Cost of one month’s supply of contraceptives as a percent of monthly wages
Health Personnel · Percent of deliveries in which a partograph is correctly used
· Percent of newborns receiving immediate care according to MOH guidelines
· Percent of pregnant women counseled and tested for HIV
· Percent of STI patients appropriately diagnosed and treated
Individual/ · Percent of communities with active health center management committee
Community · Percent of target population that received DPT 3 immunization
· Percent of non-users who intend to adopt a certain practice in the future
· Percent of infants 0 - < 6 months of age who are exclusively breastfed
· Percent using condoms at last higher-risk sex



Determining cause and effect is not easily done with capacity-building M&E, even though a capacity map might clearly state assumptions about relationships among variables. The multiplicity of capacity variables, and the frequent improvement and decline in capacity, make it difficult to draw definite conclusions from a complex situation. It is not surprising, therefore, that some evaluators have found linear evaluation frameworks and the strict use of indicators too inflexible and mechanical to be used effectively in monitoring and evaluating capacity (Morgan, 1997; Earl et al., 2001). For these and other reasons, Morgan cautions evaluators not to rely too heavily on indicators to provide complete insights into capacity development. In spite of the growing list of capacity measures, "indicators used in monitoring and evaluation of capacity do not explain why complex systems work the way they do" (Morgan, 1997).

In light of these challenges, the way in which indicators are developed, measured, and used becomes a critical determinant of the credibility and usefulness of monitoring and evaluation of capacity building. Many of these constraints can be addressed with careful indicator development and the use of a range of data-collection instruments that are sensitive to the intangible nature of what is being measured in capacity-building evaluation. At the same time, the use of linear evaluation frameworks also requires careful management. Evaluators need to focus on critical process aspects of capacity building, and use maps to guide but not restrict M&E.
STEP 5 Identify Appropriate Methodological Approach and Sources of Data

The fifth step in developing a capacity-building M&E plan involves defining the methodological approach, identifying sources of data, and choosing (or developing) data collection tools. Evaluators should ask the following questions:
· Which methodological approach is appropriate?
· What sources of data are necessary for measuring the indicators defined in Step 4?
· Are there any existing tools for measuring capacity that are appropriate for my purposes?

Methodological Approaches and Challenges
As discussed throughout this guide, monitoring and evaluation require different methodological approaches and have different data needs. The choice of methods and data sources relates mainly to the purpose of the evaluation (see Step 1).
· Is the purpose to monitor the implementation of a capacity-building intervention, assess its effectiveness, or both?
· Will the results be used mainly for internal improvements or external reporting?

Clearly, all capacity-building programs need to be monitored to ensure they are working well (i.e., to track changes in inputs, processes, outputs, and outcomes). However, the evaluation of program effectiveness happens less frequently and only for selected interventions, due to cost and complexity. In the case of capacity-building evaluation, it can be particularly difficult to conduct evaluations that look for an association between a capacity-building intervention and changes in capacity or performance. These changes can occur for a number of reasons in addition to the capacity-building intervention itself (e.g., contextual influence). Since capacity measures are not easily quantified, and identifying similar organizations or systems to facilitate comparison (as in a case-control study) is difficult, experimental designs are not feasible or practical for capacity measurement. As James (2001) notes about capacity-building evaluation, "precise measurement and attribution of cause and effect is rarely possible and never cost effective. The best we can hope for is plausible association."

Evaluators are therefore advised to recognize the challenges to capacity-building M&E and set realistic aims for evaluation. Many of these challenges have been discussed previously in this guide. Some of them relate to the inherent nature of capacity (capacity and capacity building are dynamic, multidimensional, and contextual), while others are a function of the early stage of development of capacity measurement. Four of the main challenges are detailed below.

Capacity develops in stages
Capacity measurement tools should be able to capture different stages of development of communities, health personnel, organizations, or health systems. The "MSH organizational profile" used in the Management and Organizational Sustainability Tool (MOST), for example, has identified different benchmarks according to an organization's stage of development (nascent, emerging, mature). Capacity measurement must be able to capture individual elements of capacity and combinations of elements, and relate them to the stage of development of the entity being assessed.

Changes in capacity need to be measured over time
Repeated measures are needed to capture the interim steps in capacity-building processes as well as trends in outcomes. While there are examples of repeated application of capacity measurement tools (INTRAH, SFPS, and PASCA), to date, only limited reports of findings from longitudinal evaluations are available (PASCA). Better techniques are needed to capture the effects of capacity building over time and elaborate the link between capacity development and performance improvement.

Internal versus external validity
Capacity building should be a self-motivated and self-led process of change. Evaluation strategies that use self-assessment techniques and locally determined benchmarks of progress inspire ownership of capacity development and increase the likelihood that evaluation results will be used. Nevertheless, there can be a cost to this approach in terms of the perceived validity of findings. External stakeholders often prefer to measure progress against performance standards (of either national or local origin) using standardized indicators to allow comparisons or a reference to other similar types of capacity-building programs. Self-reported measures of capacity may not meet the reporting expectations of external stakeholders even if they support better capacity-development strategies. Box 12 describes the experience of one project in using the two different approaches.

Lag time between changes in capacity and changes in performance
It is very common to experience considerable lag time between a capacity-building intervention and changes in capacity, as well as between changes in capacity and changes in performance. The timing of capacity or performance measurement should take these delays into consideration and allow for interim measures of change or longer timeframes for M&E.

Tackling Methodological Challenges
Many of the tools and methods reviewed for this guide were able to tackle challenges to capacity-building measurement. Others provided useful lessons on how to move capacity-building M&E forward. Advice to evaluators follows:
· use multiple data-collection instruments, reflecting the multidimensional nature of capacity. Multiple data-collection instruments are useful to get a comprehensive picture of capacity or to assess capacity from different perspectives (e.g., assessing the views of managers and health workers or assessing internal perspectives and those of external examiners).

Box 12: PASCA: From Self-Assessment to External Assessment

PASCA is a USAID-funded project focusing on capacity building of nongovernmental organizations (NGOs) that provide HIV/AIDS services in Central America. During the first year of the
project (1996), PASCA conducted a self-administered needs assessment study among the NGOs
receiving support. Although the needs assessment provided useful information for planning, the
researchers felt that the self-administered methodology exaggerated the programmatic, ad-
ministrative and managerial capacity of the NGOs. Thus, managers decided to conduct an exter-
nally administered Validation Study in 1997 using mixed methods to determine the validity of
the self-reported data, and provide an in-depth assessment of the management and program-
matic needs of each NGO. When compared to the Needs Assessment survey, capacity scores
from the Validation Study were markedly different. The Validation Study, in which self-
reported answers were validated with document observation, provided data that more accu-
rately reflected the capacity of the NGOs (MEASURE Evaluation, 1998).

· combine qualitative and quantitative methods, such as focus groups, individual interviews (with both closed- and open-ended questions), surveys, and document reviews.

· address more than one level. Capacity often occurs at several levels simultaneously. New measurement tools are needed to capture capacity building at a single level and address the relationship between levels.

· include self-assessment techniques in combination with external or standardized methods. (See Box 13 for a discussion of self-assessment and external assessment.) Evaluators are urged to strike a balance between meeting the need for evaluation data that different stakeholders will deem "objective" or credible, and promoting performance improvement through monitoring and evaluation.

· triangulate methods and data sources. Triangulation examines results from a variety of data-collection instruments and sources, strengthening the findings of capacity-building monitoring and evaluation. If all data lead to the same conclusion, then there is some confidence the result actually will reflect changes. Where there is discordance in the results, it is necessary to examine possible sources of the differences. Looking at other sources of data on similar topics can help understand findings as well.

· use data interpretation workshops to obtain input from a range of stakeholders involved in the program (both internal and external).

Sources of Data
A number of data sources are available for monitoring and evaluating capacity building. Since capacity measurement often includes the use of multiple indicators, monitoring and evaluation usually requires multiple data sources. Indicator design should take into account the potential availability of data, particularly from existing sources. Organizations and systems often have records and reports that provide insights into different aspects of capacity. Some examples of existing data sources are presented below.

In many cases, however, it will be necessary to collect new data to operationalize the indicators selected. As noted above, issues such as data sensitivity (with respect to its effect on validity), the purpose of monitoring and evaluation, and the cost in terms of time and resources required should guide evaluators in determining what data will be collected and how they will be collected.

Sources of data by level of capacity include:

System: national health policy records, national data-collection efforts (census, vital statistics, national/regional surveys), international surveys (e.g., FPPE, API, DHS),9 MOH policies, financial reports, legal or regulatory statements (bills, acts, recommendations, white papers, etc.).

Organization: routine health service records and reports, budget and expenditure records, financial statements, personnel records, program and donor reports, constitutional documentation, strategic and annual plans, meeting minutes, evaluations and audits, organizational networking analysis, organizational assessments.

9 FPPE (Family Planning Program Effort Score); API (AIDS Program Effort Index); DHS (Demographic and Health Survey).


Health personnel: personnel records (job descriptions, performance evaluations, background checks, training summaries), supervision reports, self-evaluations.

Individual/Community: community-based and social marketing surveys, community health worker reports, meeting minutes, maps, focus groups, and participatory appraisals.

In planning for data collection, it is often helpful to develop a data chart that spells out the key questions to be addressed, the indicator that links to each question, and the data sources needed to answer the question. An example of a data chart is found in Table 6.

Tools for Measuring Capacity at Different Levels
A number of data-collection instruments and tools have been developed and used to measure capacity at the four levels. (See Table 7 for a list of tools and their key characteristics.) In most cases, these tools have been used for capacity assessment rather than for monitoring and evaluation. In addition, most of the tools identified are designed to assess organizational capacities, although many of them assess the capacity of health program personnel because of their central role in organizational functions and performance. We identified a more limited number of tools to measure the health system and individual/community level capacity. However, the field of capacity measurement is changing quickly, and several agencies are currently developing approaches to understanding changes in performance at the system level (Partnerships for Health Reform, 1997; Murray and Frenk, 1999).

The tools listed in Table 7 are provided for reference only. To determine if a tool might be useful for a particular capacity-development intervention, evaluators should address the following questions:
· At what level(s) do I want to assess capacity?
· Do any of the existing instruments measure the dimensions or indicators I have identified through mapping?
· How could I adapt one of these instruments for my needs?
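One way to work through these questions systematically is to record the key characteristics from Table 7 in a small structure and filter it against the level and assessment approach required. The sketch below is hypothetical: it covers only a few tools, and the attribute values are simplified summaries of the table, so the point is the screening logic rather than the data.

    # Simplified, partial extract of the kinds of attributes listed in Table 7.
    tools = [
        {"name": "MOST", "levels": {"organization"},
         "assessment": "self", "scope": "single"},
        {"name": "OCAT", "levels": {"organization"},
         "assessment": "self", "scope": "multiple"},
        {"name": "Integrated Health Facility Assessment", "levels": {"organization"},
         "assessment": "external", "scope": "multiple"},
        {"name": "Transformational Development Indicators", "levels": {"community"},
         "assessment": "mixed", "scope": "multiple"},
    ]

    def shortlist(tools, level, assessment=None):
        """Tools that cover the requested level (and assessment approach, if given)."""
        hits = [t for t in tools if level in t["levels"]]
        if assessment:
            hits = [t for t in hits if t["assessment"] in (assessment, "mixed")]
        return [t["name"] for t in hits]

    print(shortlist(tools, level="organization", assessment="self"))
    # A shortlist produced this way still needs the adaptation step discussed above.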
Box 13: Advantages and Disadvantages of Self-Assessment and
External Assessment Techniques

While practitioners value the role of self-assessment tools in stimulating interest in capacity
building and launching a change process, for monitoring and evaluation purposes it is important
to consider the potential advantages and disadvantages of both internal and external ap-
proaches.

Advantages of self-assessment tools:


· Greater involvement of those whose capacities are being assessed (e.g., staff of an or-
ganization), which can lead to greater ownership of the results and, ultimately, greater
likelihood that capacity improvements (based on results of the assessment) will take
place
· Non-threatening way to raise awareness of the importance of capacity improvement
among those involved in the assessment process

But self-assessment tools


· Require an external facilitator
· Rely on perceptions and may be less reliable when used repeatedly and are prone to vari-
ous biases (e.g., optimistic bias)
· Become less useful with high staff turnover (which results in changing the 'self' in 'self-assessment')
· In many cases are interventions in and of themselves

Advantages of external assessments tools:


· Often considered more objective

But external assessment tools


· May be more costly due to the cost of external consultants; self-assessments, particu-
larly those that require intensive facilitation, can also be demanding in terms of time
· May not reflect internal views accurately

Recommendation:
· Use a mixture of methods that combine subjective and objective measurement.

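Boxes 12 and 13 both point toward combining self-assessment with some form of external validation. A simple operational check, sketched below, is to place self-reported and externally validated scores side by side and flag large gaps for follow-up, in the spirit of the PASCA Validation Study. The capacity areas, scores, and threshold are invented and are not taken from any of the instruments cited.

    # Hypothetical scores (0-100) for one organization, from a self-administered
    # questionnaire and from an external review of documents and records.
    self_scores = {"financial management": 85, "strategic planning": 70, "MIS": 90}
    external_scores = {"financial management": 60, "strategic planning": 65, "MIS": 55}

    GAP_THRESHOLD = 15  # flag areas where self-report exceeds validation by this much

    for area in self_scores:
        gap = self_scores[area] - external_scores[area]
        flag = "review" if gap > GAP_THRESHOLD else "ok"
        print(f"{area:20s} self={self_scores[area]:3d} "
              f"external={external_scores[area]:3d} gap={gap:+3d} {flag}")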


Table 6: Example of a Table of Data Sources for an Organizational Assessment

M&E Question(s): 1. Did financial and human resource inputs change over time? 2. Did the source of financial resources change over time?
Objective(s): 1. Determine whether capacity-building interventions increased budgetary resources of the organization and the number of trained personnel. 2. Determine whether reliance on donor/NGO funding has decreased.
Indicators: 1. Amount of budgetary resources by source over time. 2. Number of management and staff positions filled over time.
Method(s): 1. Records review (organization and donor). 2. Records review of personnel resources. 3. Interviews with senior management in organization and donors/NGOs.
Data Sources: 1. Accounts, budgets, annual reports. 2. Personnel records, annual reports. 3. Finance manager, accountant, donor/NGO representative.

M&E Question(s): 1. Did the organization establish new relationships or improve links with other organizations that contributed to achieving performance objectives?
Objective(s): 1. Determine the extent of networking and its effect on organizational behavior.
Indicators: 1. Number of joint activities with other organizations. 2. Frequency of contact with higher- and lower-level organizations within the public sector. 3. Types and frequency of outcomes from links with other organizations, analyzed by organization type (public or private).
Method(s): 1. Prospective recording of links to other organizations. 2. Interviews with management and staff. 3. Facility survey (observation, exit interviews, provider interview, inventory).
Data Sources: 1. Record forms. 2. Questionnaire and focus groups. 3. Survey data. 4. Organizational networking analysis.

M&E Question(s): 1. Did staff capacity to assess client needs improve?
Objective(s): 1. Determine the effectiveness of training and mentoring.
Indicators: 1. Client satisfaction index. 2. Provider satisfaction index.
Method(s): 1. Facility survey (exit interviews, provider interview). 2. Client focus groups. 3. Provider focus groups.
Data Sources: 1. Survey data. 2. Focus group data.

M&E Question(s): 1. Did staff capacity to meet client needs improve?
Objective(s): 1. Determine the effectiveness of training and mentoring.
Indicators: 1. Client satisfaction index. 2. Provider satisfaction index.
Method(s): 1. Facility survey (exit interviews, provider interview). 2. Client focus groups. 3. Provider focus groups.
Data Sources: 1. Survey data. 2. Focus group data.
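Because several of the questions in a data chart such as Table 6 ask whether something changed over time, each indicator in the chart needs at least two measurement rounds. The sketch below is a hypothetical illustration of tracking one indicator from the chart across baseline and follow-up rounds; the figures are invented.

    # Hypothetical repeated measures for one indicator from the data chart:
    # "number of management and staff positions filled over time".
    measurements = [
        {"round": "baseline", "positions_filled": 14},
        {"round": "follow-up", "positions_filled": 19},
    ]

    baseline = measurements[0]["positions_filled"]
    latest = measurements[-1]["positions_filled"]
    change = latest - baseline
    percent_change = 100 * change / baseline

    print(f"Change since baseline: {change:+d} positions ({percent_change:+.0f}%)")
    # Interpreting the trend still requires the performance and contextual data
    # discussed in Lessons 3 and 5.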
Table 7: Capacity Measurement Tools
Tool | Developed By | Level | Methods | Self/External | Single/Multiple | Short description

Assessment tools
Enhancing Organizational IDRC Organization Qualitative and External and Multiple Measures the results of an organization’s programs, prod-
Performance: A Toolbox for quantitative self-assessment ucts and services and then integrates these results with the
Self Assessment techniques of formative assessment in which the assessment
http://www.idrc.ca team becomes involved in helping the organization meet its
goals.
Outcome Mapping: A Method IDRC System Qualitative and Self-assessment Multiple
Outcome Mapping characterizes and assesses the contribu-
for Reporting on Results Organization quantitative
tions a project or organization makes to significant and
http://www.idrc.ca/telecentre/
lasting changes (outcomes). In Outcome Mapping a pro-
evaluation/html/29_Out.html
gram is assessed against its activities that contribute to a
desired outcome, not against the outcome itself.
Integrated Health Facility BASICS Organization Quantitative External as- Multiple This manual outlines the key steps for planning and con-
Assessment (IHFA) assessment sessment ducting an integrated health facility assessment at outpatient
http://www.basics.org/publica health facilities in developing countries. This assessment is
tions/pubs/hfa/hfa_toc.htm designed for use by primary health care programs that are
planning to integrate child health care services.
Management and Organiza- Family Planning Organization Qualitative Self-assessment Single The Management and Organizational Sustainability Tool
tional Sustainability Tool Management (MOST) is a package (instrument and user's guide) designed
(MOST) Development to facilitate management self-assessment and to support
http://erc.msh.org/mainpage.c (FPMD)/ management improvement. MOST uses an instrument to
fm?file=95.40.htm&module=t MSH help focus an organization on the actual characteristics of
oolkit&language=English their management, identify directions and strategies for
improvement, and set priorities for the management devel-
opment effort.
Management Development FPMD/MSH Organization Quantitative Self-assessment Single This tool includes four steps: 1) develop a preliminary
Assessment (MDA) management map to guide assessment; 2) develop and
http://erc.msh.org/mainpage.c administer MDA questionnaire to collect information on the
fm?file=95.50.htm&module=t management capabilities of organization; 3) analyze survey
oolkit&language=English results and develop a post-survey management map; and 4)
develop action plan for making improvements.
The Child Survival Child Survival System Qualitative and Self and internal Multiple Evaluation framework to systematically measure progress
Sustainability Assessment Technical (local) quantitative client assess- toward sustainable health goals. Process that projects can
(CSSA) Support (CSTS) Organization ment use to lead a participatory assessment with communities and
http://www.childsurvival.com Project/ORC Community local partners.
MACRO
The Institutional Strengths CSTS System Qualitative and Self and internal Multiple This self-assessment tool is currently being pilot tested by
Assessment (ISA) Tool Project/ORC (local) quantitative client CSTS.
http://www.childsurvival.com/ MACRO Organization assessment
tools/project_planning.cfm




INTRAH/PRIME INTRAH/ Organization Qualitative and Self and Multiple The framework and tool developed at the end of PRIME I
Capacity Building In Training PRIMEII quantitative internal client has been used to aid program evaluation in different
Questionnaire assessment countries (e.g., Mexico, Ghana, India, and Bangladesh),
http://www.prime2.org/prime when interventions have focused on the strengthening of
2/techreport/home/50.html training and service delivery institutions. The tool
encourages organizations to discover root causes of
obstacles with a sustainable effort to build capacity in the
organization to recognize, address, analyze and prioritize
problems.
Client-Oriented Provider Engender Health Organization Qualitative and Self-assessment Multiple COPE encourages and enables service providers and other
Efficient (COPE®) quantitative staff at a facility to assess the services they provide jointly
http://www.engenderhealth.or with their supervisors. Using various tools, they identify
g/ia/sfq/qcope.html problems, find the root causes, and develop effective
solutions.
Note: COPE has now been
adapted for use with maternal
health services and commu-
nity partnership
http://www.engenderhealth.or
g/news/newsreleases/020516.
html
Transformational Develop- World Vision Community Qualitative and External and Multiple Provides technical guidance for measuring the
ment Indicators Field Guide quantitative self-assessment Transformational Development Indicators. It includes 8
http://www.worldvision.org volumes that cover indicator definitions and methods for
NOTE: Tool not yet available collecting, analyzing, and reporting on the indicators.
online
Communication for Social Center for Community Qualitative and External and Multiple Presents model, process and outcome indicators, and some
Change: An Integrated Model Communications quantitative self-assessment data collection and analytical tools for use by communities.
for Measuring the Process and Programs
Its Outcomes (CCP)/Johns
http://164.109.175.24/Docum Hopkins
ents/540/socialchange.pdf University

Assessing Institutional Ca- CCP/Johns Organization Quantitative External and Multiple Scores organizational competence, commitment, clout,
pacity in Health Communica- Hopkins self-assessment instruments coverage and continuity.
tion: A 5Cs Approach University
Work in Progress.
http://www.jhuccp.org
Management/Financial PASCA Organization Quantitative External and Single Tools are in Spanish only.
Sustainability Scale (MFSS) self-assessment instrument
http://www.pasca.org
Systematic Approach Scale PASCA Organization Quantitative External and Single Tools are in Spanish only.
(SAS) self-assessment instrument
http://www.pasca.org
Institutional Assessment World Learning Organization Qualitative and External Multiple Provides a framework for assessing the institutional needs
Instrument (IAI) Project Inc. quantitative assessment instruments of a single organization or a community of organizations.
http://www.worldlearning.org Pinpoints six key areas generally agreed to be the compo-
or nents of effective institutions.
http://www.worldlearning.org/
pidt/docs/wl_instcape.pdf

Institutional Development SFPS Organization Qualitative and External Multiple Documents existing capacity and identifies potential areas
Assessment (IDA) quantitative assessment instruments of collaboration and capacity building in overall dimensions
http://www.fha- of management, financial management and technical capac-
sfps.org/documentsdownload/ ity.
Institu-
tional%20Development%20A
ssessments.PDF
Organizational Capacity Pact/Ethiopia Organization Quantitative Self-assessment Multiple A methodology for organizational capacity assessment and
Assessment Tool (OCAT) instruments strengthening that helps organizations anticipate and over-
http://www.pactworld.org come the greatest barriers to organizational change and
growth. Through a guided self-assessment and planning
process, organizations reflect upon their performance and
select the tools and strategies they need to build capacity
and broaden impact. A four-staged process that includes:
Participatory tool design; guided self-assessment; data-
guided action planning; reassessment for continual learning
that allows organizations to monitor change, track the effec-
tiveness of their capacity-building efforts, and integrate new
learning as their needs change and capabilities increase.



Tool: Participatory, Results-Oriented, Self-Evaluation (PROSE)
  See POET at http://www.undp.org/csopp/poet.htm
Developed by: Education Development Center and PACT
Level: Organization
Methods: Qualitative and quantitative
Self/External: Self-assessment
Single/multiple instruments: Single
Description: The Participatory Organizational Evaluation Tool (POET) is an organizational capacity
assessment tool used to measure and profile organizational capacities and consensus levels in
seven critical areas and to assess, over time, the impact of these activities on organizational
capacity (benchmarking). POET is based on a methodology called PROSE. PROSE stands for
Participatory, Results-Oriented, Self-Evaluation, a new methodology for assessing and enhancing
organizational capacities. PROSE is designed for use by service organizations, schools, and
government units. It is suitable for assessing capacity and catalyzing organizational change in
relation to such concerns as practices related to exceeding customer expectations, organizational
effectiveness in achieving mission, community participation, equity, decentralization, and
managerial effectiveness.

Tool: National program effort indices: the Family Planning Effort Index (FPEI) and the AIDS
Program Effort Index (API)
  FPEI: http://www.agi-usa.org/pubs/journals/2711901.pdf
  API: http://www.policyproject.com/pubs/countryreports/api.pdf
Developed by: The Futures Group/Population Council
Level: System (national); Organization
Methods: Quantitative and qualitative
Self/External: External assessment
Single/multiple instruments: Single
Description: Each index measures national-level effort and identifies strengths and weaknesses of
those efforts.

STEP 6 Develop an Implementation and Dissemination Plan

The final step in planning for capacity-building M&E is to develop an implementation plan to
monitor and evaluate capacity. At a minimum, the implementation plan should include a timetable
for data gathering and review of data, individual responsibilities, a dissemination strategy, and
a budget. In practice, capacity measurement, as a reflection of capacity development, is likely to
be an iterative process rather than a perfunctory "before and after" look at capacity. Experienced
evaluators (Horton et al., 2000; Lusthaus, 1999; Earl et al., 2001; Morgan, 1997) recommend
regular review and discussion of monitoring results with stakeholders to guide the process of
capacity development and encourage ownership of the monitoring process. Setting aside enough time
to present the results periodically and to allow for discussion and feedback from the stakeholders
will greatly enhance data interpretation and the impact of the evaluation itself. As Morgan (1997)
notes, "Indicators by themselves provide few answers. The information they produce must be
screened through the mental models of the participants to acquire any diagnostic value."

When developed before the evaluation begins, a dissemination strategy guides data collection and
analysis. Developing a format for presenting the results to the appropriate audience identifies
weaknesses and gaps in the evaluation plan. It also helps to guide the direction of the evaluation
by emphasizing what is needed to address the needs of the data users and by raising awareness of
possible sensitivities. Gaps or excess data collection become obvious, and further refinement of
the number or type of indicators being measured is often necessary. In the process, evaluators
identify all key stakeholders who should be alerted to the results, if they are not directly
involved in the evaluation itself. The recommended forum for disseminating results is one that
promotes discussion and interaction among the key stakeholders and those in a position to
influence the future direction of the capacity-building efforts. Sufficient funds must be set
aside so that all those who make a credible contribution to the evaluation receive at least
summary results in a timely and relevant fashion.

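To make these components concrete, the sketch below shows one way an implementation plan could be
held in a simple structured form so that the timetable and total budget are easy to review with
stakeholders. It is only an illustration, not part of the Guide's method, and every activity,
date, role, and amount in it is hypothetical.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class PlannedActivity:
        """One line of a capacity-building M&E implementation plan."""
        activity: str      # e.g., data gathering, stakeholder review, dissemination
        due: date          # target date for the activity
        responsible: str   # person or unit accountable for the activity
        budget_usd: float  # estimated cost in US dollars

    # Hypothetical plan covering one monitoring cycle (all values are placeholders).
    plan = [
        PlannedActivity("Baseline self-assessment workshop", date(2003, 4, 15),
                        "District M&E officer", 1200.0),
        PlannedActivity("Review of monitoring data with stakeholders", date(2003, 9, 30),
                        "Program manager", 400.0),
        PlannedActivity("Follow-up capacity assessment", date(2004, 3, 31),
                        "External evaluator", 2500.0),
        PlannedActivity("Dissemination meeting and summary report", date(2004, 5, 15),
                        "Program manager", 800.0),
    ]

    # Print the timetable in date order, then the estimated total budget.
    for item in sorted(plan, key=lambda p: p.due):
        print(f"{item.due}  {item.activity} ({item.responsible})")
    print(f"Estimated total budget: US${sum(p.budget_usd for p in plan):,.2f}")

Because the plan is a plain list, additional review rounds can be appended as the iterative
monitoring process described above unfolds.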


Part 4 Summary Checklist: Steps for Designing a Capacity-Building M&E Plan

This guide is designed to assist managers and evaluators working in international health-sector
capacity development to

- gain a clear understanding of the concepts of capacity and capacity building
- critically evaluate the strengths and limitations of current approaches to capacity measurement
- design a capacity-building M&E plan that outlines a systematic approach to measuring capacity
  and assessing the results of capacity-building interventions in the health sector

The manual presents a discussion of the concept of capacity and capacity building, and the
influence of attributes of capacity on M&E approaches. It outlines a conceptual framework for
understanding the role that capacity plays in enabling performance in the health sector and
suggests an approach to identifying key factors that influence capacity and performance. Finally,
it outlines some basic steps for capacity-building M&E that result in a plan for evaluating a
specific capacity-building intervention. These steps are summarized in the checklist that follows.

Checklist: Steps in Designing a Capacity-Building M&E Plan

The Guide recommends a six-step approach for developing an M&E plan for capacity building. The
key components of each step are outlined below.

Define the purpose of the evaluation (Step 1)

- Evaluators and program planners should work with key stakeholders to develop an M&E plan during
  the design of a capacity-building or performance improvement intervention.
- Capacity-building M&E can be used internally to improve capacity-development interventions or
  to report results to external stakeholders. While these two purposes are not mutually exclusive,
  managers should understand the benefits and drawbacks of emphasizing one objective at the
  expense of the other.

Define performance objectives (Step 2)

- Capacity is a prerequisite for performance. Evaluators must clearly state the performance
  objectives of a capacity-building intervention at the outset of M&E planning and understand the
  program's approach to improving performance.
- Performance objectives can be expressed as variables or indicators that can be measured against
  international or national standards or locally determined expectations. Normally, the definition
  of performance objectives reflects both external and internal criteria.

Mapping capacity: Build a conceptual framework for the specific capacity-building intervention (Step 3)

- Capacity mapping is a structured process of "thinking through" the role capacity plays in
  ensuring performance by developing a conceptual framework that is specific to a particular
  capacity-building intervention. Mapping identifies key factors of capacity and assumptions about
  how they interact to influence capacity and performance. If program planning and M&E design are
  conducted simultaneously, capacity mapping can contribute to the choice of intervention
  strategies and to the M&E strategy.
- The external or operational environment may have a considerable effect on the pace, process,
  outcome, and sustainability of capacity development. It is advisable for program managers to
  track environmental changes periodically.
- Each type of mapping (single-level or multiple-level) can be done in two or three iterations.
  The first iteration of a map should attempt to provide a full list of capacity variables that
  may influence capacity outcomes and performance, presenting them in a general way. The second or
  third iteration should be more precise in depicting the variables to be monitored over the
  course of the intervention.
- Capacity mapping is sometimes confused with Performance Improvement (PI). For clarification, the
  reader is referred to the definition of PI in the Glossary and the table in Annex D.

Identify capacity indicators (Step 4)

- Well-defined indicators provide a reference framework for guiding all stakeholders toward the
  same goals. Indicators also allow for standardized measurement of change during implementation,
  which enables evaluators to understand the process of capacity development over time and its
  relationship to the capacity-building intervention.
- Capacity indicators generally project an aspiration or a sought-after state or ability. They
  capture the current "stock of resources available" for various uses, or an individual or
  organizational behavior that puts those resources into action (Moore et al., 2001).
- When selecting capacity indicators it is advisable to be clear about specific performance and
  capacity-development objectives as well as particular capacity-building activities.

Identify an appropriate methodological approach and sources of data (Step 5)

- All capacity-building programs need to be monitored to ensure they are working well (i.e., to
  track changes in inputs, processes, outputs, and outcomes). However, the evaluation of program
  effectiveness happens less frequently and only for selected interventions, due to cost and
  complexity.
- Impact evaluation is not advisable in capacity-building M&E, since capacity measures are not
  easily quantified and identifying similar organizations or systems to facilitate comparison (as
  in a case-control study) is difficult.
- Capacity measurement tools should be able to capture different stages of development of
  communities, health personnel, organizations, or health systems.
- M&E tools are needed that allow for repeated measures to capture the interim steps in
  capacity-building processes as well as trends in outcomes.
- Capacity building should be a self-motivated and self-led process of change. Evaluation
  strategies that use self-assessment techniques and locally determined benchmarks of progress
  inspire ownership of capacity development and increase the likelihood that evaluation results
  will be used. However, there can be a cost to this approach in terms of the perceived validity
  of findings.
- In the design of capacity-building M&E strategies, evaluators are advised to use multiple
  data-collection instruments, combine qualitative and quantitative methods, address more than one
  level of capacity and the relations between levels, include self-assessment techniques in
  combination with external or standardized methods, triangulate methods and data sources, and use
  data interpretation workshops. (A brief illustration of combining self-assessment with external
  scores follows this checklist.)

Develop an implementation and dissemination plan (Step 6)

- In disseminating results, evaluators should review findings regularly and discuss them with
  stakeholders to guide capacity development and encourage ownership of the M&E process.

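To illustrate the Step 5 advice on combining self-assessment with external or standardized
methods and triangulating data sources, the sketch below compares hypothetical self-assessment
and external scores on a 0-4 scale and flags indicators where the two sources diverge enough to
warrant discussion at a data interpretation workshop. The indicator names, the scores, and the
one-point threshold are assumptions made for the example; the Guide does not prescribe a
particular rule.

    # Hypothetical 0-4 scores for three capacity indicators, collected once through
    # organizational self-assessment and once by an external review team.
    self_scores = {"training plan": 3.0, "supervision system": 2.5, "financial management": 3.5}
    external_scores = {"training plan": 2.0, "supervision system": 2.5, "financial management": 1.5}

    THRESHOLD = 1.0  # gap (in scale points) considered large enough to discuss

    for indicator, self_score in self_scores.items():
        gap = self_score - external_scores[indicator]
        note = ("discuss at data interpretation workshop"
                if abs(gap) >= THRESHOLD else "sources agree")
        print(f"{indicator:22s} self={self_score:.1f} "
              f"external={external_scores[indicator]:.1f} -> {note}")

Reviewing such discrepancies with stakeholders, rather than averaging them away, supports the
ownership and perceived validity concerns raised in Step 5.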


Annex A Example of Scoring Used for Measuring Capacity Building in Training, PRIME I
(Fort, 1999)

Dimension I. Legal/Policy

Objective: Support - National FP/RH service guidelines and training are official
  Indicator 1. Existence of updated official FP/RH service and training guidelines
  Scoring: 0 = Nonexistent guidelines (both service and training), to 4 = Complete/updated,
  disseminated, and official guidelines

Objective: Political support for training institutionalization
  Indicator 2. Official (written) policy supporting institutional training capacity - e.g.,
  training units, cadre of master trainers, venues, etc. - for health providers
  Scoring: 0 = Nonexistent written policy, to 4 = Written/updated, disseminated, and official
  Indicator 3. Favorable public statements on FP/RH training (for the improvement of services) at
  least twice a year by senior officials
  Scoring: 0 = No mention, to 4 = Mentioned on several private occasions and at least twice on
  public occasions

Dimension II. Resources

Objective: Financial - Existence of a sufficient and diversified training budget
  Indicator 4. 20% or less of the training budget comes from external assistance
  Scoring: 0 = No in-country training budgets (funds are allocated on an ad hoc basis), to
  4 = 20% or less of the training budget comes from external assistance
  Indicator 5. Budget covers all aspects of training (including materials and equipment, travel
  and per diem for consultants and staff, venue hire and maintenance, etc.)
  Scoring: 0 = Budget does not cover all aspects of training, to 4 = Budget covers all training
  costs

Objective: Venues/Equipment - Adequate venues
  Indicator 6. Accessible and available (own or rented) venues (at least one local venue in each
  training area) of standard quality (continuous power, food, lighting, acoustics, and sufficient
  capacity), accessible to participants and available when needed
  Scoring: 0 = Nonexistent venue (incrementally scoring coverage, capacity, and/or quality of
  venue), to 4 = Fully accessible, high-quality, and sufficient-capacity local venue for training
  events


Objective: Materials, equipment, and supplies (MES) - Appropriate and cost-efficient MES
(including AV equipment and teaching aids)
  Indicator 7. MES are pertinent, updated, sufficient, and adapted to local culture (including
  locally produced)
  Scoring: 0 = MES are insufficient and/or outdated, to 4 = MES of standard technical and material
  quality and readability are available for each event participant

Objective: Systems are in place for replacement and upgrading of MES
  Indicator 8. Financial, printing, and planning capabilities exist for replacing and upgrading MES
  Scoring: 0 = There are no or insufficient means for replacing MES, to 4 = The means exist to
  produce, replace, and upgrade MES

Objective: Human - Trainers/preceptors are formed and have updated and standardized technical and
presentation knowledge and skills
  Indicator 9. Trainers/preceptors are constantly formed (TOT), take periodic refresher courses,
  and pass standard tests on FP/RH technical and presentation knowledge and skills
  Scoring: 0 = Trainers/preceptors are not regularly formed and/or do not update their technical
  and presentation knowledge and skills, to 4 = Trainers/preceptors are constantly formed and
  undergo periodic (at least once every two years) refresher courses

Dimension III. Training Plans and Curriculum

Objective: Updated and periodically reviewed training plan
  Indicator 10. Training plan exists and is reviewed annually
  Scoring: 0 = No training plan (training conducted on an ad hoc basis), to 4 = Training plans are
  drawn up periodically (at least annually) and reviewed

Objective: Updated curriculum is the official standard for training institutions
  Indicator 11. Existence of a standard official training curriculum guiding training institutions
  Scoring: 0 = No standard training curriculum, or curriculum is inadequate/outdated with
  different ones used by different institutions, to 4 = There is a standard curriculum, reviewed
  periodically (at least once every 2 years) and used officially by training institutions
Dimension IV. Organization

Objective: Leadership - Vision of training as a means to improve services
  Indicator 12. Training plans are linked with quality of care and increased service access
  Scoring: 0 = Providers' training plans are not coupled with service and quality of care
  objectives, to 4 = Training plans form part of the quality of care and service improvement
  strategies

Objective: Training is an integral part of the organization's strategic planning
  Indicator 13. A training plan and activities are part of the organization's strategic plans
  Scoring: 0 = Training is not part of the organization's strategic plan, to 4 = Training is part
  of the organization's long-term (multiannual) strategic plan

Objective: Promotion of public-private collaboration
  Indicator 14. Evidence of public-private collaboration
  Scoring: 0 = No evidence of public-private collaboration, to 4 = Evidence of public-private
  collaboration

Objective: Infrastructure - Existence of decentralized training units in all areas
  Indicator 15. Active training units exist at central and peripheral levels
  Scoring: 0 = No decentralized training units (even if there is one at the central level), to
  4 = Active training units at central and peripheral levels

Objective: Human resource development - HR training (TOT, formative and refresher courses) is an
integrated part of a Performance Improvement system (e.g., incentives, follow-up and supervision,
efficacy)
  Indicator 16. HR development is part of a performance improvement (PI) strategy
  Scoring: 0 = Training is not coupled with providers' improvement objectives, to 4 = Training is
  part of HR development and performance

Objective: Administrative - Existence of a reporting system for tracking the number and
characteristics of trainees and materials, according to needs
  Indicator 17. Existence and use of a Training Needs Assessment (TNA)
  Scoring: 0 = No TNA customarily done, to 4 = TNA is an integral and continuous part of the
  training strategy
  Indicator 18. Existence of an MIS for trainees and materials matching the TNA
  Scoring: 0 = No MIS for tracking progress, to 4 = MIS for training in place


Objective: Technical capability - Technological transfer and development through networking,
evaluation, and research
  Indicator 19. Contacts with other training institutions and the institution's own evaluation and
  research feed into training improvement (e.g., trainee selection, training contents and formats)
  Scoring: 0 = No/little use of evaluation and research or of information from other training
  institutions to improve and update training capabilities, to 4 = Extensive use of internal and
  external data and resources for improvement

Objective: Track record - Proven capacity to conduct/replicate courses autonomously
  Indicator 20. Replica/other courses carried out independently (with institutional resources)
  Scoring: 0 = No replica or independent courses carried out by the organization (or only done
  with foreign assistance), to 4 = Evidence of ongoing replication/expansion of courses with
  institutional resources

Dimension V. Community Development-Participation

Objective: Community representatives are involved in planning and execution of training
activities, are aware of their rights, and/or demand competent provider performance
  Indicator 21. Evidence of community involvement in providers' training and/or performance
  assessment (e.g., quality of care circles)
  Scoring: 0 = No/little community involvement, to 4 = Extensive involvement/participation in
  provider training and/or performance assessment; organized demand/petitions to improve
  services, etc.

Annex B Example of Results of PRIME Training Capacity Index (Catotti, 1999)

[Figure: El Salvador Capacity Building. 1997-99 scores for each of the 20 indicators and the
average score (21), plotted on a logarithmic scale.]

Note: See Annex A for definitions of indicators.

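The figure above plots 1997 and 1999 scores for the indicators defined in Annex A alongside an
average score. The sketch below shows one simple way such an aggregate can be computed, assuming
an unweighted mean of whatever indicator scores are available; the values used here are invented
for illustration (they are not the El Salvador results), and the actual PRIME index may weight or
group indicators differently.

    # Hypothetical 0-4 scores keyed by indicator number (see Annex A for definitions);
    # only a few indicators are filled in for brevity.
    scores_baseline = {1: 1.0, 2: 1.5, 3: 2.0, 10: 1.0, 21: 1.5}
    scores_followup = {1: 3.0, 2: 2.5, 3: 3.5, 10: 2.0, 21: 2.5}

    def average_score(scores):
        """Unweighted mean of the available indicator scores (0-4 scale)."""
        return sum(scores.values()) / len(scores)

    print(f"Baseline average:  {average_score(scores_baseline):.2f} out of 4")
    print(f"Follow-up average: {average_score(scores_followup):.2f} out of 4")

Comparing the two averages, together with the individual indicator scores, gives a rough picture
of where capacity gains were concentrated between the two measurement rounds.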


Annex C Key Internet Resources for Monitoring and Evaluating
Capacity-Building Interventions

There is a wealth of information on capacity measurement and evaluation in general on the
Internet. The list that follows describes Internet sites that focus specifically on capacity meas-
urement; it also includes sites that provide general evaluation information and resources. The
details of many of the capacity measurement tools found on these sites are found in Table 7 in
Part 3 of the Guide. Please note that inclusion on the list does not imply any judgment about any
item listed or not listed.

Capacity Measurement Sites

1. The Manager’s Electronic Resource Center – Management Sciences for Health

http://erc.msh.org/
http://www.msh.org/

The Health Manager's Toolkit is an electronic compendium of tools designed to assist health
professionals at all levels of an organization to provide accessible, high-quality, and sustainable
health services. It is particularly useful for managers who lead others to produce results.

The Health Manager’s Toolkit includes spreadsheet templates, forms for gathering and analyzing
data, checklists, guidelines for improving organizational performance, and self-assessment tools
that allow managers to evaluate the systems underlying their entire organization. The tools have
been developed by organizations working throughout the world to improve delivery of health
services.

For more information, contact Gail Price or Amanda Ip by e-mail ([email protected]).

2. INTRAH/Prime II

http://www.prime2.org/

The PRIME II Project is a partnership combining leading global health care organizations dedi-
cated to improving the quality and accessibility of family planning and reproductive health care
services throughout the world. Funded by USAID and implemented by the University of North
Carolina at Chapel Hill School of Medicine, PRIME II focuses on strengthening the performance
of primary care providers as they work to improve services in their communities. To accomplish
its goals, PRIME II applies innovative training and learning and performance improvement ap-
proaches in collaboration with host-country colleagues to support national reproductive health
goals and priorities.



Since 1997, The PRIME Project has been committed to applying the guiding principles of per-
formance improvement (PI) to real-world reproductive health contexts. Work in Yemen, Burkina
Faso, the Dominican Republic, and India indicates that PI users like the clear, highly participa-
tory process and the focus on cost-effective interventions to address the most important problem
areas.

This interactive Website, created by the PRIME II Project and INTRAH, presents a revised edi-
tion of Performance Improvement Stages, Steps and Tools, first issued in print form in 2000.
INTRAH/PRIME II published this site online in August 2002 (www.intrah.org/sst/).

For more information, please contact Marc Luoma by email ([email protected]).

3. JHPIEGO

http://www.jhpiego.org

Through advocacy, education and performance improvement, JHPIEGO helps host-country poli-
cymakers, educators and trainers increase access and reduce barriers to quality health services,
especially family planning and maternal and neonatal care, for all members of their society.
JHPIEGO’s work is carried out in an environment that recognizes individual contributions and
encourages innovative and practical solutions to meet identified needs in low-resource settings
throughout Africa, Asia, and Latin America and the Caribbean.

TIMS is a computer-based tool to track and monitor training efforts. Each person’s skills, quali-
fications, and location are stored, along with courses taken and taught, through a Microsoft Ac-
cess 2000 database application that stores information about training course content, timing, par-
ticipants, and trainers. In the standard form, TIMS tracks the following training results over a
period of time:
- Which providers from which service sites have been trained, and in what topic(s)
- Which trainers have been conducting courses, and how many people they have trained
- How many courses have been held, summarized by training center, district, or province

TIMS allows senior and mid-level program managers to monitor the variety of training activities
and track results from a number of perspectives. TIMS is designed to be part of a country’s training
information system, replacing paper-based reporting and aggregation with a computer database.
Ministries of Health, Planning and/or Finance can use TIMS to supplement service information
for policy decisions on training, retraining, and provider deployment.
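
The kinds of summaries listed above can be pictured with a few lines of code over flat training
records. The sketch below is not TIMS itself (TIMS is a Microsoft Access application); every
provider, site, topic, trainer, and district in it is invented, and it only illustrates the
tallies such a training information system produces.

    from collections import Counter

    # Hypothetical training records: (provider, service site, course topic, trainer, district).
    records = [
        ("Provider A", "Clinic 1",   "Infection prevention", "Trainer X", "District North"),
        ("Provider B", "Hospital 2", "IUD insertion",        "Trainer X", "District South"),
        ("Provider C", "Hospital 2", "Infection prevention", "Trainer Y", "District South"),
    ]

    # Which providers from which service sites have been trained, and in what topic.
    for provider, site, topic, _, _ in records:
        print(f"{provider} ({site}) trained in {topic}")

    # How many people each trainer has trained.
    print(Counter(trainer for _, _, _, trainer, _ in records))

    # Number of training records, summarized by district.
    print(Counter(district for *_, district in records))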

For additional information about TIMS, contact Catherine Schenck-Yglesias by e-mail
([email protected]).

4. Child Survival Technical Support Program (CSTS)

http://www.childsurvival.com/

The Child Survival Technical Support Project (CSTS) assists PVOs funded through the Office of
Private and Voluntary Cooperation's Child Survival Grants Program. The technical support
CSTS provides to PVOs is targeted specifically towards increasing their capacity to achieve
sustainable service delivery in public health interventions.

The program’s goal is to help these organizations grow and to develop successful programs that
will continue to serve mothers, children, and communities even when the PVO is no longer pres-
ent in the area.

5. International Development Research Centre-Canada (IDRC)

http://www.idrc.ca/

The International Development Research Centre (IDRC) is a public corporation created in 1970
to help developing countries find long-term solutions to the social, economic, and environmental
problems they face. IDRC’s Evaluation Unit has been working in the area of organizational as-
sessment for over 5 years and has developed a number of tools, including: Enhancing Organiza-
tional Performance, a guidebook that presents an innovative and thoroughly tested model for
organizational self-assessment. The tools and tips presented in the guidebook go beyond meas-
uring the impact of programs, products, and services to integrate techniques of formative as-
sessment, in which the assessment team becomes involved in helping its organization become
more effective in meeting its goals. The tools and techniques are flexible, and the model can be
adapted to any type or size of organization. Worksheets and hands-on exercises are included.

Enhancing Organizational Performance will be useful to any organization that is initiating a
process of self-assessment, internal change, or strategic planning. It will appeal particularly to
heads and staff of research organizations, university administrators, staff of research-granting
agencies, and academics and professionals in organizational development and evaluation.

6. International Institute for Sustainable Development (IISD)

http://iisd1.iisd.ca/measure/

IISD has been working on measurements and indicators since 1995, with the aim of making sig-
nificant local, national, and international contributions, and building the Institute into a world
center of expertise in this field. One of IISD’s strategic objectives is to develop robust sets of
indicators for public and private sector decision-makers to measure progress toward sustainable
development and to build an international consensus to promote their use.



7. World Health Organization (WHO)

http://www.who.int/whr2001/2001/archives/2000/en/index.htm

World Health Report 2000. Health Systems: Improving Performance


The World Health Report 2000 aims to stimulate a vigorous debate about better ways of meas-
uring health system performance and thus finding a successful new direction for health systems
to follow. By shedding new light on what makes health systems behave in certain ways, WHO
also hopes to help policymakers weigh the many complex issues involved, examine their options,
and make wise choices.

8. USAID – Development Experience Clearinghouse (DEC)

http://www.dec.org/

The DEC includes Evaluation Publications such as the TIPS series, which provides guidance on
using the Results Framework, measuring institutional capacity and general quality of indicators
and performance measures.

9. Pact

http://www.pactworld.org/services/oca/index_oca.htm
http://www.pactworld.org/

Pact’s unique methodology for organizational capacity assessment and strengthening (OCA)
helps organizations anticipate and overcome the greatest barriers to organizational change and
growth. Through a guided self-assessment and planning process, organizations reflect upon their
performance and select the tools and strategies they need to build capacity and broaden impact.
Pact's OCA is the product of ten years of research and field practice in partnership with the Edu-
cation Development Center and USAID’s Office of Private & Voluntary Cooperation. Hundreds
of local and international NGOs, private-sector corporations, and municipal governments around
the world have used this methodology.
OCA is a four-staged process that includes:
· Participatory tool design that empowers organizations to define the critical factors that
influence their performance and to identify relevant indicators for evaluating their com-
petency.
· Guided self-assessment that leads employees, board members, and constituents through
structured discussions followed by individual scoring on a series of rigorous performance
indicators.
· Data-guided action planning that provides organizations with an opportunity to interpret
the self-assessment data and set change strategies most appropriate to their environment.

· Reassessment for continual learning that allows organizations to monitor change, track
the effectiveness of their capacity-building efforts, and integrate new learning as their
needs change and capabilities increase.
For more information on Pact’s Organizational Assessment, please contact Betsy Kummer by
email ([email protected]).

Publications Available from Pact


www.pactpublications.org

From the Roots Up: Strengthening Organizational Capacity through Guided Self-Assessment
by World Neighbors
Publisher: World Neighbors
Year: 2000

Basic Guide to Evaluation for Development Workers
by Frances Rubin
Publisher: Oxfam
ISBN: 0-85598-275-6
Year: 1995
This book will help groups to plan for and carry out evaluations as an integral part of develop-
ment activities. Easy to follow, it focuses on the principles underlying evaluation and deals
clearly and simply with the issues to be considered at the planning stage. It then examines the
steps involved in carrying out different types of evaluation, for specific purposes. The impor-
tance of involving local people in evaluations is emphasized throughout.

Participatory Monitoring, Evaluation and Reporting: An Organisational Development Perspective
for South African NGOs
by Pact
Publisher: Pact Publications
Year: 1998
This manual explains why participation is important and how to achieve effective stakeholder
participation; the role of monitoring in sustaining progress toward better organizational effec-
tiveness; how evaluation helps an organization to assess its capacity; and the critical role of re-
porting to stakeholders. It then deals with applying the Organizational Capacity Assessment Tool
(OCAT) in practice, together with examples. A step-by-step guide to designing and implement-
ing a Participatory Monitoring, Evaluation and Reporting (PME&R) information system is in-
cluded. Although it has been specifically adapted for use by South African NGOs, OCAT can also be
used by NGOs in other countries.



10. The International HIV/AIDS Alliance

www.aidsalliance.org/ngosupport

The AIDS Alliance has developed an HIV/AIDS NGO/CBO Support Toolkit that is available on
their Website or on CD-ROM, with over 500 downloadable resources and supporting information.

The toolkit includes practical information, tools and example documents to help those working to
establish or improve NGO/CBO support programs. The toolkit also describes key components of
NGO/CBO support programming, based on the Alliance's experience. It also includes resources
from a wide range of other organizations to bring different perspectives and experiences to-
gether.

The HIV/AIDS NGO/CBO Support toolkit has been developed for those wishing to establish or
improve NGO/CBO support programs. The toolkit will be useful both for NGO-led support pro-
grams and for government-led or multi-sectoral programs, especially in the context of Global
Fund and World Bank financing for NGOs and CBOs working on AIDS. The toolkit will also be
useful to organizations that provide only funding or only training.

Order single or bulk copies of the CD-ROM and supporting publication free of charge from:
[email protected]

11. International NGO Training and Research Centre (INTRAC)

http://www.intrac.org/

International NGO Training and Research Centre (INTRAC) provides support to organizations
involved in international development. Their goal is to improve the performance of NGOs by
exploring relevant policy issues and by strengthening NGO management and organizational ef-
fectiveness.

Documents can be ordered through their Website including:

Practical Guidelines for the Monitoring and Evaluation of Capacity-Building: Experiences from
Africa
ISBN: 1 897748-64-7
OPS No. 36, November 2001.

Capacity building and monitoring and evaluation have become two of the most important priori-
ties of the development community during the last decade. Yet they have tended to operate in
relative isolation from each other. In particular, capacity-building programs have been consis-
tently weak in monitoring the impact of their work. This publication aims to help NGOs and do-
nors involved in capacity building to develop appropriate, cost-effective and practical systems
for monitoring and evaluation. While not under-estimating the complexity of these tasks, this
publication puts forward some practical guidelines for designing monitoring and evaluation sys-
tems based on experiences with three organizations in different parts of Africa.

12. Performance Improvement in Healthcare

http://www.picg.net/

This Website is designed to provide information, tools, and guidelines for planning, implement-
ing, monitoring and evaluating performance improvement processes and activities in health
services delivery organizations. The site is especially tailored for managers, leaders, providers
and other employees working in international health organizations and institutions, whether they
are health ministries or health departments in the public sector or NGOs in the private non-profit
sectors. The site is also for those working as partners with people in these institutions.

Performance Improvement (PI) is a process for enhancing employee and organizational perform-
ance that employs an explicit set of methods and strategies. Results are achieved through a sys-
tematic process that considers the institutional context; describes desired performance; identifies
gaps between desired and actual performance; identifies root causes; selects, designs and imple-
ments interventions to fix the root causes; and measures changes in performance. PI is a continu-
ously evolving process that uses the results of monitoring and feedback to determine whether
progress has been made and to plan and implement additional appropriate changes.
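
As a small illustration of the gap-identification step described above, the sketch below compares
hypothetical desired and actual values for a few performance indicators and lists the largest
gaps first. The indicator names, values, and the idea of ranking by absolute gap are assumptions
made for the example, not part of the PI methodology as published.

    # Hypothetical desired and actual performance levels (e.g., percentage of visits
    # meeting a standard); the gap between them points to where interventions are needed.
    desired = {"counseling follows protocol": 90, "records complete": 95, "clients seen per day": 20}
    actual = {"counseling follows protocol": 60, "records complete": 92, "clients seen per day": 18}

    gaps = {name: desired[name] - actual[name] for name in desired}

    # List the largest gaps first as candidates for root-cause analysis.
    for name, gap in sorted(gaps.items(), key=lambda item: abs(item[1]), reverse=True):
        print(f"{name}: desired {desired[name]}, actual {actual[name]}, gap {gap}")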

The goal of PI is to solve performance problems or realize performance opportunities at the or-
ganizational, process or systems and employee levels in order to achieve desired organizational
results. The overall desired result in our field is the provision of high quality, sustainable health
services.

The Website includes information on the performance improvement process and factors affecting
worker performance, PI tools, and experiences using PI in different health care settings,

For more information or questions email [email protected].

13. Capacity.org

http://www.capacity.org/index_en.html

Capacity.org is a Website dedicated to advancing the policy and practice of capacity building in
international development cooperation. Issue 14 of the web-based magazine Capacity.org pres-
ents highlights of the UNDP initiative on capacity building and related information on the policy
and practice of capacity building in international development cooperation (also see UNDP web-
site at http://www.undp.org/dpa/publications/capacity.html).

14. ISNAR/CGIAR - Evaluating Capacity Development in Research & Development Organizations:

http://www.isnar.cgiar.org/ecd/index.htm

This site promotes the use of evaluation as a tool to advance the development of organizational
capacity and performance. Its main purpose is to support a group of managers and evaluators



who are evaluating capacity development efforts in their own organizations in Africa, Asia and
Latin America. This site presents the work of a global project, "Evaluating Capacity Develop-
ment Project (The ECD Project)." National and international research and development organi-
zations are participating in the ECD Project, which is supported by five donor agencies and co-
ordinated by ISNAR.

The site features the ECD Project's activities since 2000 and its results to date. It provides access
to project reports and events. Lists of useful concepts and terms, bibliographic references, and
Internet resources are also provided for use by capacity developers and evaluators.

15. Reflect-Learn.org - The Organizational Self-Reflection (OSR) Project

http://www.reflect-learn.org/

The Organizational Self-Reflection (OSR) project aims to improve organizational learning by
increasing access to self-reflection tools. The process of reflection implies an organizational di-
agnosis that will allow learning from experiences, styles of work and results in order to foster
strategic vision, decision making, organizational change and capacity building. The organization
keeps control over orientation of the process and use of results.

The project links a direct service, based on the Internet, and a research agenda designed to create
knowledge about self-reflection and its contribution to organizational learning. The OSR project
seeks to engage diverse organizations in the use of self-reflection resources and also catalyzes
the development of a learning community that focuses on OSR, organizational learning, and the
use of the Internet for institutional strengthening. Several useful frameworks and tools for or-
ganizational assessment are presented.

16. UNDP – United Nations Development Programme

http://www.undp.org/dpa/publications/capacity.html

Developing Capacity through Technical Cooperation: Country Experiences provides some con-
crete inputs to rethinking technical cooperation for today’s challenges based on six country
studies – Bangladesh, Bolivia, Egypt, Kyrgyz Republic, Philippines and Uganda.

Capacity for Development: New Solutions to Old Problems, with prominent academics and de-
velopment practitioners as contributors, proposes new approaches to developing lasting indige-
nous capacities, with a focus on ownership, civic engagement and knowledge. It is a contribution
to a process of debate and dialogue around the broader issue of improving effective capacity de-
velopment.

Development Policy Journal is a new forum for presenting ideas on applied policies. The subject
of capacity for sustainable development is addressed in this first issue.

17. EngenderHealth

http://www.engenderhealth.org

EngenderHealth works worldwide to improve the lives of individuals by making reproductive
health services safe, available, and sustainable. EngenderHealth provides technical assistance,
training, and information, with a focus on practical solutions that improve services where re-
sources are scarce in partnership with governments, institutions, and health care professionals.

EngenderHealth's trademarked COPE (client-oriented, provider-efficient services) is a set of
flexible self-assessment tools that assist providers and supervisors to evaluate and improve the
care offered in clinic and hospital settings. Using self-assessment, client-interviews, client-flow
analysis and facilitated discussion, staff identify areas needing attention and develop their own
solutions and action plans to address the issues. Originally developed for family planning serv-
ices, COPE has been successfully applied in a variety of healthcare settings all over the world for
over 10 years. With the growing popularity of COPE, healthcare providers from related disci-
plines asked if the tools could be adapted to a wider range of health services. EngenderHealth
has answered the demand by creating these new products: COPE for Maternal Health Services
and Community COPE: Building Partnership with the Community to Improve Health Services.



General Evaluation Sites

1. American Evaluation Association

http://www.eval.org

The American Evaluation Association, an international professional association of evaluators, is
devoted to the application and exploration of program evaluation, personnel evaluation, evalua-
tion technology and other forms of evaluation.

The American Evaluation Association has a Collaborative, Participatory and Empowerment
Evaluation topical interest group that is dedicated to the exploration and refinement of collabo-
rative, participatory and empowerment approaches to evaluation. You can find more information
about them at: http://www.stanford.edu/~davidf/empowermentevaluation.html

2. Canadian Evaluation Association

http://www.evaluationcanada.ca/

The Canadian Evaluation Association is dedicated to the advancement of evaluation for its
members and the public. This site is also available in French.

3. The Evaluation Center at Western Michigan University

http://www.wmich.edu/evalctr/

The Evaluation Center, located at Western Michigan University, is a research and development
unit that provides national and international leadership for advancing the theory and practice of
evaluation, as applied to education and human services.

4. Essentials of Survey Research and Analysis

http://freenet.tlh.fl.us/~polland/qbook.html

This site contains a complete manual entitled Essentials of Survey Research and Analysis: A
Workbook for Community Researchers, written by Ronald Jay Polland, Ph.D., 1998.

5. German Center for Evaluation (in German)

http://www.uni-koeln.de/ew-fak/Wiso/

This is the homepage for the German Center for Evaluation at the University of Cologne. It in-
cludes the German translation of the Program Evaluation Standards of the American Evaluation
Society.

6. Government Performance Information Consultants

http://members.rogers.com/gpic/evalwebindex.htm

This site offers links to many Web resources on evaluation.

7. The Michigan Association for Evaluation

http://www.maeeval.org/

The Evaluation Promotion Committee has compiled a list of resources in an effort to provide
MAE members and others interested in evaluation with sources for educational materials, tools,
and other resources that may be interesting and helpful. For each resource, the site provides a
brief description (generally from the resource itself) and where to find it.

8. Innovation Network, Inc. (InnoNet)

http://www.innonet.org/

Innovation Network, Inc. (InnoNet) is a national nonprofit dedicated to building the evaluation
capacity of nonprofits so they can better serve their communities. In-
noNet has two services to meet this end: a search service to find model programs, and an evalua-
tion service that guides agencies through a planning and evaluation process. Description of their
evaluation methodologies and documents available for ordering are listed on this site.

9. International & Cross-Cultural Evaluation Topical Interest Group (I&CCE)

http://home.wmis.net/~russon/icce/

International & Cross-Cultural Evaluation Topical Interest Group is an organization affiliated
with the American Evaluation Association. The purpose of the I&CCE is to provide evaluation
professionals who are interested in cross-cultural issues with an opportunity to share their expe-
riences with one another.

10. MandE News

http://www.mande.co.uk/

MandE News is a news service focusing on developments in monitoring and evaluation methods
relevant to development projects and programs with social development objectives. It is edited
by Rick Davies in Cambridge, UK who can be contacted by email ([email protected]).



11. Sociometrics

http://www.socio.com/eval.htm

Sociometrics offers a wide variety of evaluation products and services to professionals across the
world. Their evaluation workshops and training services, technical publications, evaluation tools,
and data sets are all designed to assist practitioners, administrators, evaluators, and funders of
social interventions to design and implement successful evaluation systems.

For additional information, contact Dr. Shobana Raghupathy by email ([email protected]) or
by phone at 1.800.846.3475 x209.

12. Bill Trochim, Cornell University

http://trochim.human.cornell.edu/kb/conmap.htm

Bill Trochim is a faculty member at Cornell University; his work in applied social research and
evaluation is described on this site. His published and unpublished papers, detailed examples of
current research projects, useful tools for researchers, an extensive online textbook, a bulletin
board for discussions and links to other websites related to applied social research methods are
included. Concept mapping is a general method that can be used to help individuals or groups to
describe their ideas about some topic in a pictorial form.

13. UNICEF

http://www.unicef.org/reseval/

This site lists some of the monitoring and evaluation tools recently developed by UNICEF and
its partners, including the UNICEF Guide to Monitoring and Evaluation.

14. United Way

http://www.unitedway.org/outcomes/

The United Way’s Resource Network on Outcome Measurement offers a guide to resources for
measuring program outcomes for health, human service and youth- and family-serving agencies.
Their manual, Measuring Program Outcomes: A Practical Approach, can be ordered at the
Website.

15. National Science Foundation, Division of Research, Evaluation and Communication
(REC)

http://www.nsf.gov/pubsys/ods/getpub.cfm?nsf97153

This site contains a complete manual, User-Friendly Handbook for Mixed Method Evaluations
(August 1997), edited by Joy Frechtling and Laurie Sharp Westat, and developed with support
from the National Science Foundation, Division of Research, Evaluation and Communication.



Annex D Capacity Mapping and Performance Improvement Compared
What is it?
  Capacity Mapping: Tool for M&E planning (primarily).
  Performance Improvement in RH: Tool for improving RH services.

What is the purpose?
  Capacity Mapping: Helps planners and evaluators decide: What M&E approach to take to determine
  whether this strategy succeeded in building capacity (primary use)? What capacity-building
  strategy to use (secondary use)?
  Performance Improvement in RH: Helps managers decide: What PI strategy to use? Did performance
  change as a result of the PI process?

Answers the question...
  Capacity Mapping: What factors of capacity are required for performance? How should I measure
  these factors?
  Performance Improvement in RH: Is progress being made toward goals? Are appropriate actions
  being undertaken to promote achieving those goals? What are the problem areas?

What is the approach?
  Capacity Mapping: Conceptual. Evaluators are encouraged to consider a wide range of factors that
  might influence capacity and performance. Guides planners and evaluators in viewing capacity
  systematically and identifying all areas that affect performance. Encourages understanding of
  capacity in the health sector as a system that includes four interdependent levels: the system,
  organizations, health personnel, and individuals and communities.
  Performance Improvement in RH: Focused. Root causes of performance problems are linked to six
  performance factors: job expectations; performance feedback; workspace, equipment, and supplies;
  incentives; organizational support; and knowledge and skills. Guides organizations in viewing
  problems systematically and addressing all areas that enhance performance. Encourages
  understanding of the organization as a system of interdependent functions and people.

When to use it?
  Capacity Mapping: Can be used to organize and analyze information before or after a
  capacity-building intervention is designed.
  Performance Improvement in RH: Used to organize and analyze information before deciding what
  intervention is needed.

Focus of study/action
  Capacity Mapping: Applies to systems, organizations, humans, and communities.
  Performance Improvement in RH: Applies to humans within organizational systems.

Who is involved?
  Capacity Mapping: Encourages stakeholder involvement.
  Performance Improvement in RH: Encourages stakeholder involvement.

View of performance
  Capacity Mapping: Performance is the result of capacity and context.
  Performance Improvement in RH: Human performance is a factor of knowledge, skills, capacity and
  motives, and context.



Glossary

Capacity is the ability to carry out stated objectives. It has also been described as the “stock of
resources” available to an organization or system as well as the actions that transform those re-
sources into performance.

Capacity building (or capacity development) is a process that improves the ability of
a person, group, organization, or system to meet objectives or to perform better.

Capacity evaluation is normally more complex than monitoring, and is conducted to gain un-
derstanding of the relationship between capacity-building interventions and capacity outcomes,
or the links between capacity and performance variables.

Capacity mapping is a structured process of thinking through the role capacity plays in ensuring
performance by developing a conceptual framework that is specific to a particular capacity-
building intervention. During capacity mapping, all the possible factors of capacity that influence
performance and the relationships between them must be identified. Once the factors are all laid
out, the program staff or evaluator can focus on those that are most essential for the evaluation.

Capacity monitoring normally would be used to understand the effectiveness and efficiency of
a capacity-building intervention during implementation (i.e., is capacity improving and at what
cost?) to contribute to strategic or operational decisions related to capacity building or enable a
periodic look at a program or system.

Cold chain: The system that ensures vaccine viability from manufacturing to delivery.

Contextual factors: external factors relating to the economic, social, cultural and political envi-
ronment. Factors normally outside the control of most health sector actors.

Impact: Long-term results achieved through improved performance of the health system: sus-
tainable health system and improved health status. Impact measures are not addressed in capac-
ity-building M&E.

Impact evaluation: An evaluation that uses experimental or quasi-experimental study design to
attribute changes in capacity or performance to program interventions. Impact evaluation is not
appropriate or useful in the context of capacity-building M&E because of the difficulty of quanti-
fying many elements of capacity and attributing capacity change to any single intervention or
even a range of them.

Input: Set of resources, including service personnel, financial resources, space, policy orienta-
tion, and program service recipients, that are the raw materials that contribute to capacity at each
level (system, organization, health personnel, and individual/community).

Outcome: Set of results that represent capacity (an ability to carry out stated objectives), often
expected to change as a direct result of capacity-building intervention.

Output: Set of products anticipated through the execution of practices, activities, or functions.

Performance: Set of results that represent productivity and competence related to an established
objective, goal or standard. The four capacity levels together contribute to overall system-level
performance.

Performance Improvement (PI): A process for enhancing
employee and organizational performance that employs an explicit set of methods and strategies.
Results are achieved through a systematic process that considers the institutional context; de-
scribes desired performance; identifies gaps between desired and actual performance; identifies
root causes; selects, designs and implements interventions to fix the root causes; and measures
changes in performance. PI is a continuously evolving process that uses the results of monitoring
and feedback to determine whether progress has been made and to plan and implement additional
appropriate changes.

Process: Set of activities, practices, or functions by which the resources are used in pursuit of the
expected results.

Theory of action: Part of a capacity-building plan that includes common objectives and shared
concepts. A coherent theory of action agreed on by the key groups involved in the process states
how activities are expected to produce intermediate and longer-term results and benefits. “With-
out a theory of action, a capacity development effort could become a fragmented exercise in
wishful thinking, rather than a coherent initiative with a high probability of success” (Horton,
2001).

Triangulation: The use of multiple data sources or methods to validate findings, discover errors
or inconsistencies, and reduce bias.

Bibliography

Africa Bureau, Office of Sustainable Development USAID. 1999. Health and Family Planning
Indicators: Measuring Sustainability, Volume II. Washington: USAID.

Ampomah, K. 2000. PRIME’s Technical Report 20: An Assessment of the Impact of PRIME’s
Interventions on the Training Capacity and Reproductive Health Service Delivery in Ghana.
Chapel Hill, NC: INTRAH.

Bertrand, J. and Escudero, G. 2002. Compendium of Indicators for Evaluating Reproductive
Health Programs, Volume 1. Chapel Hill: MEASURE Evaluation Project.

Brown, L., LaFond, A., Macintyre, K. 2001. Measuring Capacity Building. Chapel Hill:
MEASURE Evaluation Project.

Catotti, D. 1999. PRIME’s Technical Report 13: Improving the Quality and Availability of Fam-
ily Planning and Reproductive Health Services at the Primary Care Level: Institutional Capacity
Building in the El Salvador Ministry of Health. Chapel Hill: INTRAH.

Development Resources Team, World Vision. 2002. Transformational Development Indicators
Field Guide. Washington, DC: World Vision.

Earl, S., Carden, F., and Smutylo, T. 2001. Outcome Mapping: Building Learning and Reflection
into Development Programs. Ottawa: International Development Research Centre.

Eng, E. and Parker, E. 1994. Measuring Community Competence in the Mississippi Delta: The
Interface between Program Evaluation and Empowerment. Health Education Quarterly 21 (2):
199-220.

Figueroa, M.E., Kincaid D.L., Pani, M. and Lewis, G. 2002. Communication for Social Change:
An Integrated Model for Measuring the Process and Its Outcomes. Communication for Social
Change Working Paper Series, No. 1. Baltimore: Johns Hopkins Center for Communications
Programs.

Fort, Alfredo. 1999. PRIME’s Technical Report 16: Capacity Building in Training: A Frame-
work and Tool for Measuring Progress. Chapel Hill: INTRAH.

Franco, L.M., Bennett, S. and Kanfer, R. 2002. Health Sector Reform and Public Sector Health
Worker Motivation: A Conceptual Framework. Social Science and Medicine 54: 1255-1266.

Goodman, R.M., Speers, M.A., McLeroy, K., Fawcett, S., Kegler, M., Parker, E., et al. 1998.
Identifying and Defining the Dimensions of Community Capacity to Provide a Basis for Meas-
urement. Health Educ Behav 25 (3): 258-278.

Gubbels, P. and Koss, C. 2000. From the Roots Up: Strengthening Organizational Capacity through Guided Self-Assessment. Oklahoma City: World Neighbors.

Horton, D. 2002. Planning, Implementing and Evaluating Capacity Development. Briefing Paper No. 50. ISNAR.

Horton, D. (ed). 2001. Learning about Capacity Development through Evaluation: Perspectives and Observations from a Collaborative Network of National and International Organizations and Donor Agencies. The Hague: International Service for National Agricultural Research.

Horton, D., Mackay, R., Andersen, A., and Dupleich, L. 2000. Evaluating Capacity Development
in Planning, Monitoring, and Evaluation: A Case from Agricultural Research. Research Report
17. The Hague: International Service for National Agricultural Research.

INTRAC. 1998. The Newsletter of the International NGO Training and Research Center. No. 11.

Israel, B., Schulz, A., Parker, E., and Becker, A. 1998. Review of Community-Based Research: Assessing Partnership Approaches to Improve Public Health. Annual Review of Public Health 19: 173-202.

Israel, B.A., Checkoway, B., Schulz, A., Zimmerman, M. 1994. Health Education and Community Empowerment: Conceptualizing and Measuring Perceptions of Individual, Organizational, and Community Control. Health Education Quarterly 21(2): 149-170.

James, R. 2001. Practical Guidelines for the Monitoring and Evaluation of Capacity Building: Experience from Africa. London: INTRAC.

Kaul Shah, M., Degnan Kambou, S., Monahan, B. 1999. Embracing Participation in Develop-
ment: Wisdom from the Field. Atlanta: CARE Health and Population Unit.

Knight, R.J., Tsui, A.O. 1997. Family Planning Sustainability at the Outcome and Program Lev-
els: Constructing Indicators for USAID Strategic Planning. Chapel Hill: The EVALUATION
Project.

Kotellos, K.A., Amon, J.J., and Githens Benazerga, W.M. 1998. Field Experiences: Measuring Capacity Building Efforts in HIV/AIDS Prevention Programs. AIDS 12 (suppl. 2): S109-S117.

LaFond, A. 1995. Sustaining Primary Health Care. London: Earthscan.

LaFond, A., Brown, L. and Macintyre, K. 2002. Mapping Capacity in the Health Sector: A Conceptual Framework. International Journal of Health Planning and Management 17: 3-22.

Lake, S., Daura, M., Mabanddhala, M., et al. 2000. Analyzing the Process of Health Financing Reform in South Africa and Zambia. Zambia Country Reports. Major Applied Research Technical Paper 1. Bethesda: Partnerships for Health Reform Project.

Lande, R.E. 2002. Performance Improvement. Population Reports, Series J, No. 52, Baltimore:
The Johns Hopkins Bloomberg School of Public Health, Population Information Program.

Luoma, M. 2000. PRIME’s Technical Report 19: Dominican Republic Performance Improve-
ment Project Evaluation. Chapel Hill: INTRAH.

Lusthaus, C., Adrien, M., Andersen, G., and Carden, F. 1999. Enhancing Organizational Per-
formance: A Toolbox for Self-Assessment. Ottawa: International Development Research Centre.

Mackay, R. and Horton, D. 2002. Capacity Development in Planning, Monitoring, and Evalua-
tion: Results of an Evaluation. Briefing Paper No. 51. ISNAR.

Management Sciences for Health. 1996. Planning for Sustainability: Assessing the Management
Capabilities of Your Organization. The Family Planning Manager. FPMD.

McCaffrey, J., Luoma, M., Newman, C., Rudy, S., Fort, A., Rosensweig, F. 1999. PI Stages, Steps and Tools. Chapel Hill: INTRAH.

MEASURE Evaluation. 1998. The Needs Assessment Validation Study and 1998 Institutional
Capacity Assessment, PASCA Project. Chapel Hill: MEASURE Evaluation Project.

MEASURE Evaluation. 2001. Mapping Capacity in the Health Sector: Application of the
MEASURE Conceptual Framework in Measuring Capacity Building for a Complex Non-
governmental Organization. Draft.

Moore, M., Brown, L., and Honan, J. 2001. Toward a Public Value Framework for Accountabil-
ity and Performance Management for International Non-Governmental Organizations. Presented
at Hauser Center/Keio University Workshop on Accountability for International Non-
governmental Organizations, November 2 – 11, 2001.

Morgan, P. 1997. The Design and Use of Capacity Development Indicators. CIDA.

Murray, C.J.L. and Frenk, J. 1999. A WHO Framework for Health System Performance Assessment. Geneva: World Health Organization.

Oakley, P. 2001. Evaluating Empowerment: Reviewing the Concept and Practice. INTRAC
NGO Management and Policy Series No.13. London: INTRAC.

Partnerships for Health Reform. 1997. Measuring Results of Health Sector Reform for System
Performance: A Handbook of Indicators. Bethesda: Partnerships for Health Reform.

Plummer, J. 1999. Municipalities & Community Participation: A Sourcebook for Capacity
Building. London and Sterling, VA: Earthscan Publications Ltd.

PRIME II. 2001. PRIME II Performance Monitoring Plan (PMP): General Guidelines for Moni-
toring and Reporting on the 10 Key PMP Indicators. Version 1.01. Chapel Hill: INTRAH.

Pyle, D. and LaFond, A. 2001. Case Example: Measuring Capacity Building in Training - PRIME’s Evaluation, Documentation and Dissemination (EDD). Draft.

Ross, J.A. and Mauldin, W.P. 1996. Family Planning Programs: Efforts and Results, 1972-94. Studies in Family Planning 27 (3): 137-147.

Sarriot, E. 2002a. The Child Survival Sustainability Assessment (CSSA): For a Shared Sustainability Evaluation Methodology in Child Survival Interventions. Maryland: CORE – CSTS.

Sarriot, E. 2002b. Sustaining Child Survival: Many Roads To Choose, but Do We Have a Map? Maryland: ORC Macro.

Sullivan, T. and Bertrand, J. (eds). 2000. Monitoring Quality of Care in Family Planning by the
Quick Investigation of Quality (QIQ): Country Reports. Technical Report Series, No. 5. Chapel
Hill: MEASURE Evaluation.

UNICEF. 1999. M&E of Capacity Building: Guidance and Tools. Working Draft.

World Health Organization. 2000. World Health Report 2000: Health Systems: Improving Performance. Geneva: World Health Organization.
