Guide To Monitoring and Evaluation Capacity-Building Interventions
March 2003
The manual series is made possible by support from USAID under the terms of Cooperative
Agreement HRN-A-00-97-00018-00. The opinions expressed are those of the authors, and do
not necessarily reflect the views of USAID.
NO. 2 Quick Investigation of Quality (QIQ): A User's Guide for Monitoring Quality of Care. February 2001.
NO. 3 Sampling Manual for Facility Surveys for Population, Maternal Health, Child Health and STD Programs in Developing Countries. July 2001.
NO. 4 Measuring Maternal Mortality from a Census: Guidelines for Potential Users. July 2001.
Recommended Citation
LaFond, Anne, and Brown, Lisanne. A Guide to Monitoring and Evaluation of Capacity-Building Interventions in the Health Sector in Developing Countries. MEASURE Evaluation Manual Series, No. 7. Carolina Population Center, University of North Carolina at Chapel Hill. 2003.
Acknowledgements
We wish to acknowledge the contributions and support of a number of individuals and institu-
tions that enabled the successful completion of this document. Ray Kirkland and Krista Stewart
of USAID were instrumental in the conception of the Guide. Sara Pacque-Margolis of USAID
provided the support to see it through to completion. Our sincere gratitude also goes to several
technical reviewers for their constructive and instructive comments on earlier versions of the
Guide. They are: Alfredo Fort (PRIME II), Diane Catotti (IPAS), Alison Ellis (MSH), Leo Ryan
(CSTS/ORC Macro), Eric Sarriot (CSTS/ORC Macro), Fred Carden (IDRC), and Doug Horton
(ISNAR). Kate Macintyre contributed her ideas and encouragement and provided the
SAIDIA case material. Catherine Elkins and Kate Macintyre contributed to the MEASURE
working paper on measuring capacity in the health sector, which provided a basis for this guide.
Thom Eisele and Cira Endley reviewed and analyzed capacity-measurement tools and practices.
Case examples of capacity measurement were developed with the cooperation of PRIME /
INTRAH; SAIDIA; NGO Networks for Health; and PATH (in a workshop setting). Finally, we
are grateful to the many adventurous organizations and individuals working to build capacity in
the health sector in developing countries. Their experimentation in capacity-building monitoring
and evaluation is commendable and deserves further study. This guide would not have been pos-
sible without the support of the Offices of Health and Population at the United States Agency for
International Development (Contract Number: HRN-A-00-97-00018-00).
Prologue
Capacity development¹ has moved to center stage of the agendas of development organizations.
Substantial sums are being invested in capacity-building programs. Yet, their design and man-
agement leave much to be desired. Marred by untested, unrealistic assumptions, the results of
many programs fall short of their goals and expectations.
“Evaluations are needed to test the theories and assumptions on which capacity development
programs are based, to document their results, and to draw lessons for improving future pro-
grams. However, few capacity development programs have been systematically and thoroughly
evaluated” (Horton et al., 2000).
¹ Capacity building and capacity development are used interchangeably throughout this document.
List of Acronyms and Abbreviations
Acknowledgements.......................................................................................................................... i
Introduction..................................................................................................................................... 3
Defining Capacity-Building Monitoring and Evaluation ........................................................... 4
Capacity-Building M&E Has Many Roles ................................................................................. 5
Part 1. Concepts, Definitions, and Attributes of Capacity and Capacity Building ........................ 7
Why Build Capacity?.................................................................................................................. 7
What is Capacity Building? ........................................................................................................ 7
Useful Definitions ...................................................................................................................... 7
Attributes of Capacity and Capacity Building............................................................................ 7
Capacity Building Is Behavior Change ...................................................................................... 9
Why Monitor and Evaluate Capacity Building?....................................................................... 11
What Is Different about M&E of Capacity Building?.............................................................. 11
Implications for Capacity-Building M&E ................................................................................ 12
Summary for Managers and Evaluators.................................................................................... 12
Part 2. Understanding the Role of Capacity in the Health Sector: Introducing a Conceptual
Framework ........................................................................................................................ 15
Overview Framework: The Role of Capacity in the Health Sector.......................................... 15
Capacity at a Single Level ........................................................................................................ 17
Defining Variables Related to Capacity and Performance ....................................................... 18
Using These Conceptual Frameworks ...................................................................................... 25
Summary for Managers and Evaluators.................................................................................... 26
Part 4. Summary Checklist: Steps for Designing a Capacity-Building M&E Plan ..................... 67
Checklist: Steps in Designing a Capacity-Building M&E Plan ............................................... 67
Annex A. Example of Scoring Used for Measuring Capacity Building in Training, PRIME I . 71
Glossary ........................................................................................................................................ 93
Bibliography ................................................................................................................................. 95
Tables
Figures
Boxes
Box 10 Questions to Guide Discussion on the External Environment and Its Influence on
Organizational Capacity ............................................................................................ 42
Maps
About This Guide
This guide has grown out of the collective experience of health and development organizations working to build health sector capacity in developing countries. The focus of the Guide is the measurement of capacity for the purpose of monitoring and evaluating capacity-building interventions. It responds to a demand among public health planners, evaluators, and practitioners for advice on assessing the many aspects of health programming that fall under the rubric of capacity building.

The purpose of this guide is to assist health planners and evaluators to …

Many readers of this guide may not be aware that there is a lack of widespread experience in the field of capacity-building M&E in the health sector. Capacity-building programs proliferate. Yet, methods for testing and tracking their results are rare. We have therefore based the advice in this guide mainly on lessons learned from current practices in capacity assessment (see Table 1 for discussion of the differences between assessment and M&E). Sources include: a review of the state of the art of capacity measurement (Brown, LaFond, and Macintyre, 2001); a review of capacity-building measurement tools and indicators; formal and informal consultations with practitioners; and an in-depth exploration of four different capacity measurement experiences (Box 1). The Guide also draws on lessons learned about capacity-building monitoring and evaluation in other sectors, such as agriculture and housing, and on new evaluation approaches designed to support learning in development programming (Horton et al., 2000; Morgan, 1997; Earl, Carden, and Smutylo, 2001).

From the discussion that follows on the concept of capacity building and capacity measurement techniques, readers will come to understand why this guide is neither prescriptive nor exhaustive. Standardized approaches to monitoring and evaluating capacity-building interventions are not found because of the wide variety of circumstances in which capacity building takes place. Capacity building has been applied to actions as distinct as policy formulation, supplying basic health commodities, and identifying danger signs of …
Introduction
Over the last decade, capacity building has become as central to the business of developing health systems in lesser-developed countries as providing financial resources and applying the latest science. Capacity is believed to contribute directly to improving performance in the health sector, and is thought to play an important role in sustaining adequate performance over time. Despite increased attention to capacity, experience in gauging the effectiveness of capacity-building interventions in the health sector is still limited. Unlike other aspects of health-related monitoring and evaluation (M&E), capacity measurement is not supported by a comprehensive history of theory and practice. While methods for monitoring and evaluating health service coverage, access, and quality are well advanced, there are few tried and true approaches for capturing "the interim state or process that reflects the ability to achieve and sustain coverage, access, and quality over time" (Brown, LaFond, and Macintyre, 2001). Thus, capacity measurement in the health sector is both new and experimental.

There are intrinsic challenges to measuring capacity that are reflected in the concept and role of capacity itself. For example, capacity derives its relevance from the contribution it makes to performance. There are endless areas where performance is required in the health sector, and an equally wide range of possible capacity variables that influence performance. In addition, contextual factors (or factors outside the control of most health sector actors) can have a strong influence on capacity or the desired outcome of a capacity-building intervention. These and other characteristics of capacity and capacity building explain why there are no gold standards for capacity-building M&E. There is no short list of valid indicators of capacity in the health sector, nor are there standardized measurement tools applicable to every capacity-building experience.

Many of these challenges have also discouraged widespread testing of methods of capacity-building monitoring and evaluation. The extent of experience is so limited that, at this stage, capacity measurement is considered to be an art rather than a science. Evaluators must therefore approach M&E of capacity-building interventions with a willingness to test strategies and share what they have learned in order to build a body of theory and practice.

Despite the conceptual and practical challenges of tackling capacity measurement, there are a number of reasons to put energy and time into developing a sound approach to monitoring and evaluation of capacity-building interventions. The most significant reason is that measurement is an important part of achieving capacity-building and performance goals. Monitoring and evaluation can help health program professionals understand the relationship between capacity-building interventions, capacity, and performance, and focus the strategies used for improving performance. Specifically, monitoring and evaluation can help answer a range of questions about

¨ the process of capacity change (how capacity building takes place),
¨ capacity as an intermediate step toward performance (what elements of capacity are needed to ensure adequate performance), and
¨ capacity as an outcome (whether capacity building has improved capacity).
Table 1: The Use of Assessment vs. M&E in Capacity-Building Intervention

Capacity Assessment
¨ Purpose: diagnostic or descriptive; defines constraints
¨ Measures gap between actual and desired performance
¨ Findings are used for internal purposes (design and planning)
¨ One-time measurement
¨ Action oriented
¨ Looks broadly at existing situation

Capacity Monitoring and Evaluation
¨ Purpose: predictive; for accountability or comparisons; gauges results
¨ Measures results or progress toward desired results
¨ Findings are used for internal and external purposes (management; accountability)
¨ Often uses repeat measurement
¨ Action, analysis, and accountability oriented
¨ Uses conceptual frameworks to discern relationships between variables
In this guide, when we talk about monitoring and evaluation of capacity building or capacity development, we are mainly interested in the last question, that is, measuring changes in capacity and linking them (directly or indirectly) to capacity-building interventions.

Defining Capacity-Building Monitoring and Evaluation

…performance improvement as a reference for gauging progress. As such, it guides program management and informs funding agencies about the results of capacity-building investments. A final aspect of M&E (as opposed to diagnosis or assessment) is the use of conceptual frameworks that make assumptions about the relationship between different variables that influence capacity and performance. Table 1 describes many of the differences between capacity assessment and M&E.

…conducted to gain understanding of the relationship between capacity-building interventions and capacity outcomes, or the links between capacity and performance variables. The term "impact evaluation" is not appropriate or useful in the context of capacity-building M&E because of the difficulty of quantifying many elements of capacity and attributing capacity change to any single intervention, or even a range of interventions.

Capacity-Building M&E Has Many Roles

A final introductory observation relates to the role that measurement plays in a capacity-building intervention. Many experienced capacity-building practitioners feel that capacity measurement cannot be separated from the process of building capacity itself. Program managers often use capacity assessment tools to raise awareness about capacity problems, stimulate commitment to improving capacity among stakeholders, and set self-determined benchmarks. The focus is internal. In practice, capacity-building M&E is often encouraged (or required) by external stakeholders and used mainly for accountability. Defining the purpose of M&E is therefore not always easy for managers and evaluators. The discussion that follows considers the pros and cons of these various approaches and informs critical measurement choices. It begins with a discussion of the rationale for capacity-building M&E and explores the concept of capacity and its role in improving performance.
Part 1 Concepts, Definitions, and Attributes of Capacity and
Capacity Building
¨ Capacity building can be defined only in terms of a specific objective or goal.
In the health sector, capacity does not exist for its own sake. Health planners and managers are concerned with capacity because it enables performance. For example, a health facility that experiences regular stock-outs of pharmaceuticals might require additional capacity in financial planning or supplies management (i.e., interventions that are specific to the particular performance goal of commodity supply). It follows that a capacity development strategy for improving pharmaceutical supply would call for a different approach than one aimed at strengthening community involvement in health. The link between capacity and performance, therefore, serves as the guide for both programming and evaluation of capacity-building interventions. Improved performance, in turn, is a good indicator of success in capacity development.

¨ Capacity (and capacity building) are dynamic and volatile.
Capacity can be perceived as a moving target. At any given time, capacity can improve or decline. It often develops in stages that indicate improved readiness to influence performance (Goodman et al., 1998). Capacity building, therefore, is an ongoing process (the development of abilities), whose stages can be measured as "development outcomes" through monitoring and evaluation. The dynamic nature of capacity is often a reflection of the many different forces that influence its development or decline.

¨ Capacity building is multidimensional.
Capacity building can be described in terms of levels. In the health sector, capacity is required at four different levels: health system,³ organization, health personnel, and community. Yet, to date, most capacity-building experience and measurement have focused on organizational and health personnel capacity. In practice, capacity at one level is often influenced by actions at other levels. A single missing aspect of capacity rarely explains performance failures. The PRIME project (Box 2), for example, constructed an index of the capacity of training institutions that included 13 critical elements, ranging from political support for training in reproductive health to community involvement in training (Fort, 1999).

³ Some have labeled this level institutional development (Kotellos, 1998; INTRAC, 1998), while others use the terms organization and institution interchangeably. To avoid confusion, we have adopted the term system.

Analysis of capacity levels through measurement encourages evaluators to think in terms of complex, multifaceted systems. Connections and forces within a system are critical to
understanding constraints to capacity and how to overcome them. Paying too much attention to one part of the organization or system may limit results at the overall organization or system level and fail to improve overall performance (Morgan, 1997). Take the example of delivering immunization services at the organization level. The effectiveness of this service depends on elements that go beyond the capacity of the facility alone. The cold chain must function from the central level to the facility to ensure vaccine viability. Civil service norms, regulations, and salary levels can influence health worker motivation, and acceptance of the value of immunization among caregivers and encouragement from community leaders can affect service utilization. If performance falters (i.e., coverage declines), it may be the result of limited capacity at the facility or other levels. An evaluation framework should consider all these variables, although it may focus measurement efforts on a smaller number of them.

¨ Capacity depends on the context.
Contextual factors or elements of the external environment influence capacity directly and indirectly. Contextual influences include cultural, social, economic, political, legal, and environmental variables. The influence of these factors may be crucial to the success of capacity building, yet they are often difficult to control or measure. For example, Sierra Leone's Ministry of Health (MOH) may have the capacity to deliver childhood immunization services. However, frequent political instability in the country can challenge that capacity and reduce performance (e.g., immunization coverage) dramatically. Taking a more general example, the stagnation and decline of economic growth that occurred in Africa in the 1980s severely undermined public sector capacity to meet recurrent costs for salaries and the supply of basic health commodities. Even well-established health systems, such as Ghana's, were unable to withstand the decline in health sector financing, and capacity gradually eroded to a very low level (LaFond, 1995).

Capacity Building Is Behavior Change

In addition to these attributes, current thinking about capacity building reflects two ways of capturing the changes expected as a result of intervention. Traditional approaches to capacity building concentrate on the internal functioning of organizations and systems (structures, strategies, staff, and skills). Morgan (1997), however, notes the necessity of considering the "macro" aspect of capacity building, which relates to the behavior and operations of groups of organizations or individuals and their role in wider systems (such as the role of public sector health systems, ministries of health, or district-level health units in rural health improvement). In general, there is more experience working on and measuring capacity at the micro level than at the macro level.

Taking both a micro and a macro look at capacity building suggests that capacity development goes beyond a simple technical intervention. It is to a great extent focused on inducing behavior change, a process that involves learning, moderating attitudes, and possibly adopting new values at individual, organization, and system levels. Therefore, capacity-building interventions and M&E must capture related conditions and concepts such as motivation, culture, and commitment, as well as changes in resource availability, skill levels, and management structure (Morgan, 1997; James, 2001). Examples of different types of organizational capacities are found in Box 3.
Box 3. Examples of Organizational Capacities

Structural or technical
· The organization has effective programs for recruitment, development, and retention of staff so that it can perform its critical functions adequately. It must have a basic set of competencies that enable it to cope with its workload and environment.
· The organization has a structure, technology, and set of procedures that enable staff to carry out the critical functions.
· The organization has the ability, resources, and autonomy to focus on a manageable set of objectives over a reasonable period of time. Its goals are reasonably clear, accepted, and achievable.
· The organization can alter its structure and functioning through new actors, new partnerships, decentralization, delegation, creation of new organizations, downsizing, privatization, participation, devolution, and changing responsibilities for government.

Behavioral
· The organization understands the implications of its experiences and can change its collective behavior in line with this understanding. It can learn and adapt. It has a self-renewing capacity.
· The organization can form productive relationships with outside groups and organizations as part of a broader effort to achieve its objectives. It can manage these relationships for both its own gain and that of its partners.
· The organization has the ability to legitimize its existence. It must be able to persuade key external stakeholders of the value of supporting its continued functioning. It has an identity that is accepted internally and externally, and the loyalty of its clients, customers, and stakeholders gives it protection and resources.
· The organization has a culture, a set of values, and an organizational motivation that values and rewards performance.
· The work community has a population of groups and organizations sufficient to carry out the tasks and services needed to implement such critical functions as analysis, production, mediation, communication, networking, fund-raising, and so on.
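In practice, elements like those listed above are often rated and rolled up into a composite score, as in the PRIME training-institution index (Box 2) and the scoring example in Annex A. As a purely illustrative sketch, and not a method this Guide prescribes, the following Python fragment shows one simple way assessor ratings of a few capacity elements might be combined into a 0-100 index; the element names, rating scale, and aggregation rule are all invented for illustration:

```python
# Illustrative only: hypothetical capacity elements, each rated 0-4
# by assessors. Real tools define their own elements and scales.
scores = {
    "political_support": 3,
    "staff_competencies": 2,
    "supplies_management": 1,
    "community_involvement": 4,
}

def capacity_index(ratings, max_score=4):
    """Return the mean element rating, rescaled to a 0-100 index."""
    mean = sum(ratings.values()) / len(ratings)
    return round(100 * mean / max_score, 1)

print(capacity_index(scores))  # prints 62.5
```

Repeating such a measurement over time is what turns a diagnostic score into monitoring data, though any single number hides which elements drive the change.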
Why Monitor and Evaluate Capacity Building?

Given the nature of capacity development—the volatility of capacity, its many levels, and its links to performance—some authors describe capacity building as a high-risk investment (UNICEF, 1999). Yet, most development organizations agree that facilitating growth in capacity among local partners' systems, organizations, and communities is key to the success of social development overall. As such, all stakeholders need dependable methods for answering such questions as

¨ What capacity exists now, and how does it affect performance?
¨ What improvements in capacity or new kinds of capacity are required?
¨ Is capacity being built? Is the capacity-building intervention focused on the right elements?
¨ What has been learned about capacity-building strategies?
¨ How does capacity contribute to sustainability?

In addition, there is value in not restricting monitoring and evaluation of health and development interventions to a few important outcomes or results (i.e., quality, coverage, and health status). Organizations and systems produce many different and critical effects. For strategic purposes, and to manage change in programs, organizations, and systems effectively, regular information on a number of operational indicators is required (Moore, Brown, and Honan, 2001). A well-defined monitoring and evaluation strategy will help make sense of these many facets of capacity and performance. Monitoring and evaluation should help local practitioners and their external partners to think strategically about capacity development and to learn, through practice, what works under different circumstances. At the same time, systematic measurement of capacity contributes to results-based management of programs where capacity building is part of the overall strategy for improving performance.

What Is Different about M&E of Capacity Building?

Traditionally, monitoring and evaluation focus more on measuring performance and less on the way performance is achieved or sustained. In contrast, capacity-building M&E focuses fundamentally on processes (e.g., building alliances, mobilizing communities, decentralized planning, learning) and other qualitative aspects of individual or organizational change (e.g., motivation to perform) that contribute to better performance. Consequently, M&E of capacity building often seeks to capture actions or results that are not easily measured.

That said, the results of capacity building are as important as the processes. In a capacity-building intervention, the process and result of capacity building become the "intermediate outcome" that is expected to lead eventually to improved and sustained performance. Exploring the links between changes in capacity and changes in performance is therefore key. However, it often involves considerable speculation about the capacity needed to achieve those goals. One of the main gaps in the knowledge base that informs capacity measurement is the lack of a common understanding of the relationship between capacity and performance. Little is known about what elements or combinations of elements of capacity are critical to performance. Moreover, there is considerable variation in what constitutes "adequate" performance.
¨ Capacity building in the health sector can be described and measured in terms of four levels: health system, organization, health personnel, and community. Capacity at one level can be influenced by actions at other levels.

¨ Contextual factors or elements of the external environment influence capacity directly and indirectly.

¨ Capacity development goes beyond a simple technical intervention, focusing on behavior change in individuals and organizations. Thus, capacity-building M&E must capture conditions and concepts such as motivation, culture, and commitment, as well as changes in resource availability, skill levels, and management structure.

¨ Any strategy for monitoring capacity should reflect a clear understanding of the interaction among different aspects of capacity and how they work (or fail to work) together.
Part 2. Understanding the Role of Capacity in the Health Sector: Introducing a Conceptual Framework

The first step in developing a vision of capacity development, and a plan to measure it, is to understand the role capacity plays in the health sector in developing countries. What are the expectations and assumptions surrounding capacity and its relationship to performance and health outcomes? Clear thinking about these variables helps planners define realistic objectives for capacity-building interventions and express desired capacity outcomes explicitly and precisely. Evaluators must rely on these parameters of capacity building in order to develop a capacity-building M&E plan.

The following series of conceptual frameworks is provided as a reference to help planners and evaluators develop their own vision of the role capacity (and capacity building) plays in the health sector. We have found that directed discussion using these types of frameworks prior to M&E planning can stimulate strategic thinking within project or work teams, clarify individual and collective expectations, and thereby improve capacity-building M&E. Figure 1 – the Overview – illustrates the critical role capacity plays in influencing and sustaining performance in the health sector. It takes a system-wide view of capacity, including all possible levels where capacity building might take place. The four other frameworks (Figures 2-5) take capacity at each level and break it down into defined components: inputs, processes, outputs, and outcomes (see Table 2). In breaking down capacity at each level, the frameworks provide a starting point for identifying the key variables that influence capacity and performance at that level.

Overview Framework: The Role of Capacity in the Health Sector

Health system performance depends on capacity. Figure 1 provides an overview of that relationship and specifies four levels where capacity is needed to ensure performance: system, organization, health personnel, and individual/community. The diagram suggests that capacity contributes to performance at all levels, and capacity at each level collectively enables overall health system performance.

"Understanding capacity and performance of individuals and organizations demands careful consideration of their role in larger systems, and their relationships within those systems" (Morgan, 1997).

Figure 1 also implies that capacity plays a role in sustaining health system performance. If health system performance remains adequate over time (supported by consistent capacity), performance is said to be sustained. Although few health systems in developing countries can boast this accomplishment, the underlying aim of capacity development should be a sustained change in resources or behavior that leads to improved and sustained performance. The goal is not short-term gain but a lasting or robust change in ways of doing business that becomes embedded in the system or organization itself.
Figure 1. Overview of Capacity in the Health Sector
[Diagram labels: External Environment; Improved Health Status; Sustained; Individual/Community Capacity; Individual/Community Behavior Change]
At the center of the framework is the ultimate goal of capacity building in the health sector: improved health status. Capacity does not directly influence health status but contributes to it through its link to performance at the system, organization, and health personnel levels. In this illustration, the health system interacts with individuals or groups of individuals (e.g., the community) to influence health status. Individuals and communities contribute to health system capacity by interacting with providers and organizations (receiving care, determining priorities, or providing resources) and to health system performance by using health services. In addition, individuals and communities can improve their health status independent of the health system by promoting and adopting preventive measures, such as regular hand washing, not smoking, or eating well. Improvements in individual and community capacity should result in sustained behavior change over time, representing this level’s contribution to sustained health system performance and improved health status.

At the perimeter of Figure 1 we mark the influence of environmental or contextual factors, including cultural, social, economic, political, legal, and environmental variables that influence capacity and performance at all four levels (Africa Bureau, 1999; Horton, 2001; James, 2001). The obvious importance of these factors for improving and sustaining both capacity and performance suggests that special efforts are needed for tracking their status over time. In this guide, we focus mainly on variables that donors, governments, private agencies, and individuals can influence through health sector interventions. However, we also encourage evaluators to identify and monitor key contextual variables and examine their relationship to program outcomes.

Capacity at a Single Level

The four levels of capacity are detailed further in the following related frameworks (Figures 2-5). These conceptual frameworks take a broad look at capacity at one level to illustrate many of the potential factors that might come together to influence capacity and performance. The purpose of these frameworks is to show how capacity can be broken down at each level into inputs, processes, outputs, and outcomes in order to

· identify the different factors that contribute to capacity and performance, and
· hypothesize about the potential relationships among these factors within a single level.

Conceptual frameworks like these differ from logical or strategic frameworks in that they do not reflect the linear logic of a particular capacity-building intervention and its presumed effect on capacity outcomes. Rather, they show the range of all possible variables that might influence capacity and performance. In this way they help planners at the early design stages to determine the scope and focus of a capacity-building intervention, and evaluators to design valid measures for determining the success of those interventions. Conceptual frameworks can become gradually more specific as decisions are made about capacity-building interventions and the capacity and performance changes expected from them.
Defining Variables Related to Capacity and Performance

Capacity inputs represent the resources (human, financial, material, etc.) that contribute to capacity and performance. Processes represent the activities or behaviors at each capacity level that transform resources (inputs) into capacity outputs and outcomes. Capacity outputs and outcomes are the results of inputs and processes, and indicate products (outputs) and “an ability to carry out stated objectives” (outcomes). In many cases, capacity outcomes are expressed as knowledge, skills, and behavior. Performance is the expected result of capacity (a “stock of resources”) and the environment, the final link in the hypothesized chain of causality. Performance is defined as results that represent productivity or competence related to an established objective, goal, or standard.

System Level

Figure 2 refers to the health system. It includes the resources, actors, and institutions related to the financing, regulation, and provision of health actions (Murray and Frenk, 1999; WHO, 2000). The system is seen as a collection of institutions or organizations, plus the personnel in those organizations working together to deliver health care and/or promote better health. The health system performs certain functions independent of those performed by the organizations and personnel within it, and therefore possesses its own capacity that can be assessed over time and targeted for intervention. …indicators of performance at this level. The framework includes a range of possible capacity inputs, processes, outputs, and outcomes that contribute to performance at this level.

The system level is a complex area in which to define or address capacity development or to assess changes in capacity resulting from external or internal intervention. Despite the use of an inputs-process-outputs-outcomes framework, in practice, relationships among elements of capacity are not perfectly linear. Change (or the lack of it) in capacity results from multiple influences, some of which can be unexpected (Sarriot, 2002a). Contextual factors such as political and economic stability can also play a dominant yet poorly understood role in ensuring system capacity. Good examples come from health sector reform activities that seek to improve national health sector performance by changing sector priorities, laws, organizational structures, and financing arrangements. For instance, the actual results of legal reform in Zambia were achieved but not well communicated to health workers, which led to internal resistance to “delinking” or separating health workers from the civil service (Lake et al., 2000). Despite addressing key constraints such as laws or regulations, capacity to manage human resources more effectively did not emerge as planned.
Figure 2: Health System Capacity

[Diagram, not reproduced here. Approximate content, set within the External Environment:
Inputs: infrastructure; public/private composition of services; organizational structure (public sector); existing health-related laws, regulations, and policies; information/communication systems; human resources; leadership; financial resources (public/private, internal/external); history and culture of the system.
Processes: health policy making; enforcement of health-related laws and regulations; health sector strategic planning; resource allocation; resource generation; financial management; human resource development and management; donor coordination; multi-sectoral collaboration; information coordination and dissemination.
Outputs: published health policies and regulations; formal and informal coalitions; sector-wide strategy; increased local financing of recurrent costs; improved human resource availability in rural areas; coordinated donor interventions; timely analysis and dissemination of national health information.
Capacity Outcomes: effective health policies; accountability (financial and program transparency); capacity to assess and cope with internal and external change; financial self-reliance; effective monitoring of quality of care; responsiveness to client needs and demands; efficient/appropriate resource allocation; use of information for strategy and learning.
These lead to Performance.]
Understanding the Role of Capacity in the Health Sector: Introducing a Conceptual Framework 19
Table 2: Capacity and Performance Variables Defined

Input: Set of resources, including health personnel, financial resources, space, policy orientation, and program service recipients, that are the raw materials that contribute to capacity at each level (system, organization, health personnel, and individual/community).

Process: Set of activities, practices, or functions by which the resources are used in pursuit of the expected results.

Outcome: Set of results that represent capacity (an ability to carry out stated objectives), often expected to change as a direct result of capacity-building intervention.
Figure 3: Health Service and Civil Society Organization Capacity

[Diagram, not reproduced here. Approximate content, set within the External Environment:
Inputs: infrastructure; organizational structure; mission; leadership; financial resources; equipment and supplies; human resources (technical & managerial); history and culture of organization.
Processes: strategic and operational planning; human resource management and development; financial management; logistics/supplies management; research and evaluation; coordination with other units; resource mobilization; IEC; advocacy; community relations and mobilization.
Outputs: strategic and operational plans; staff trained and supported; functional management systems (i.e., supplies available, supervision done); functional financial management system (i.e., resources available, costs contained); functional health information and communication system (information collected, analyzed, and used); functional service delivery systems (i.e., services available); regular IEC and community mobilization activities.
Capacity Outcomes: able to assess and cope with internal and external change; responsiveness to client needs and demands; financial self-reliance; stakeholder involvement; regular supply of essential commodities/no stock outs; acting and learning with information; ability to monitor service quality and correct gaps as needed; able to develop and maintain working relationships with other organizations and groups.
These lead to Performance.]
Health Program Personnel Level

Figure 4 presents the health program personnel level. The term health personnel refers to all those who perform clinical, managerial, advocacy, or other work within the health system. In contrast to the system and organization levels, comprehensive interventions to build and maintain capacity are more common at the health personnel level. Ideally, in each health system there is a plan for producing and maintaining a cadre of qualified personnel (personnel with capacity) and providing them with an adequately supportive environment in which to perform effectively. It is less common to find comprehensive organization- and system-level capacity-building plans, although one could argue they are equally important.

The vast majority of capacity-building interventions in the health sector focus on changing the skills and behavior of health personnel because managers and providers play a critical role in ensuring organization and system level capacity and performance. This framework attempts to tease out some of the key variables at this level that relate directly to individual health personnel capacity, but we must acknowledge that organizational context is equally important. Organizations and systems are often responsible for the inputs and processes that enable health personnel to perform effectively. Thus, there is a significant overlap between the inputs and processes that contribute to capacity at the organization and the health personnel levels. Many of the variables listed in the system and organization level frameworks also contribute to health personnel capacity.

Inputs such as sufficient funds, space, and materials for professional development are transformed into capacity outcomes through processes such as educational and training events or other opportunities for improving or maintaining health personnel capacity. Capacity outcomes relate to the knowledge, skills, experience, and motivation resulting from inputs and processes. Performance at this level includes the application of knowledge and skills in management, health services delivery, training, and other related activities.

Individual/Community Level

The final figure, Figure 5, represents the “demand side” of the equation for capacity building as well as the role individuals and communities play in shaping health systems and improving health status. In addition to the system, organization, and health personnel levels, capacity is required within individual clients and communities to ensure demand for appropriate services, to promote their role in contributing to or influencing service delivery, and to encourage the practice of certain behaviors conducive to good health. For example, clients’ capacity to demand improved or new services or to engage with health care personnel and organizations is vital to health system performance and achieving adequate health status of the population.
Figure 4: Health Program Personnel Capacity

[Diagram, not reproduced here. Only the labels “External Environment” and “Human resources” survive extraction.]
Figure 5: Individual/Community Capacity

[Diagram, not reproduced here. Approximate content, set within the External Environment:
Inputs — individual/family: education; income; family history; sex; perceptions of need/risk; willingness to seek care; ability to pay; exposure to programs/services; past experiences with health services and prevention practices; utilization-enhancing activities (e.g., IEC, accessible services).
Inputs — community dimensions: community history; citizen participation; cohesiveness; leadership; material and financial resources (internal and external to community); social and interorganizational networks; communication channels; values; skills.
Processes: needs identification and problem solving; collaboration; achieving consensus; critical reflection; securing resources; negotiation; communication.
Outputs and Capacity Outcomes: recognition of symptoms and danger signs and actions needed; recognition of need for services; ability to articulate needs and demands; intention to use services; knowledge of prevention behavior; participation in community health committees; community plans; community support for prevention behaviors; community support for community-based health care; community-based mobilization and empowerment for interacting with health system.
These lead to Performance.]
Here the individual/community level represents all those who could benefit from and participate in the health care system; thus it includes all current and potential clients of the services offered and the communities in which they live. The inclusion of individual and community capacity in this framework represents a departure from conventional thinking on capacity in the health sector. References to community capacity are found mostly in literature on community empowerment and strategies for improving community mobilization and participation (Goodman et al., 1998; Israel et al., 1994; Israel et al., 1998; and Eng and Parker, 1994). The inputs in this framework represent the resources available to individuals and communities. They include individual/family factors, community factors, and factors outside the immediate influence of the community, such as exposure to health and education programs. Processes explain how individuals and communities use their resources to act in support of their own capacity development. Capacity outcomes relate to knowledge, motivation, skills, and behavior that support individual and community health and well-being. Performance is the actual behavior on the part of individuals or communities that might include interaction with the health system (participation or advocacy), as well as behavior that directly influences health outcomes: utilization of health services, self-treatment, compliance, and prevention behavior.

Using These Conceptual Frameworks

While it is useful to separate levels of capacity for facilitating M&E planning, these levels are clearly interdependent, as shown in the nesting of the health personnel and organization levels in the system level, and the arrows connecting individuals/communities to the health system and its parts. A health system is made up of organizations and health personnel, and organizations cannot function without health personnel. Without individual users of health services, the other levels cannot begin to perform effectively. Going beyond one-dimensional diagrams to understand the dynamics of capacity building at each level and between levels will guide the development of M&E strategies and techniques.

For example, the processes listed at the system level in practice are often activities carried out by the MOH with support from donors and in collaboration with other actors in the health sector (e.g., NGOs, private companies). There is a clear overlap between system and organizational capacity since the capacity of the system to carry out certain functions may depend directly on the capacity of the MOH to play its organizational role effectively. An M&E plan should attempt to monitor changes at both levels to explain capacity development (or lack of it) well.

The overview diagram that describes the relationship between capacity, performance, and sustainability also suggests a logical progression from capacity to performance to sustained performance, when in fact both capacity and performance can improve or decline in uncoordinated or illogical ways. Because capacity is a fluid notion that responds to many influences, linear frameworks, often used in research and evaluation, are sometimes considered too mechanical for monitoring and evaluating capacity. Cause and effect chains related to capacity are seldom linear, suggesting the need to break out of a rigid, inflexible way of thinking.

Figures 2 – 5 suggest one way to look beyond the linear representation of capacity variables by depicting the process of capacity development as a cycle. Once one stage of capacity development is achieved, capacity outcomes become the new inputs and processes for the next stage of improvement. Indicators in this sense become relative, in that an indicator of capacity expressed as an outcome might be described as another type of variable as capacity improves or declines.

This guide recommends the development of conceptual frameworks as a useful process for thinking through a capacity-building intervention strategy, clarifying expectations of stakeholders, and hypothesizing the variables that are considered important to program results in a specific context. However, these tools should be used along with strategies such as creative thinking, revisiting assumptions, and reflecting on results with stakeholders when conducting capacity-building M&E. Part Three of the Guide will elaborate on the use of frameworks or maps in M&E and discuss these and other strategies for understanding changes in capacity and their relationship to performance.

Summary for Managers and Evaluators

· The first step in developing a vision of capacity development, and a plan to measure it, is to understand the role capacity plays in the health sector in developing countries.

· The conceptual frameworks (Figures 1 – 5) illustrate the critical role capacity plays in influencing and sustaining performance in the health sector, including the four levels where capacity is needed: system, organization, health personnel, and individual/community.

· Figures 2 – 5 depict capacity at each level. The purpose of these frameworks is to show how capacity can be broken down into inputs, processes, outputs, and outcomes in order to identify the different factors that contribute to capacity and performance, and to hypothesize about the potential relationships among these factors within a single level.

· The frameworks provide a starting point for identifying the key variables that influence capacity and performance at each level, and will help evaluators define capacity variables to track in the M&E plan.
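The cyclical view of capacity development described above, in which the outcomes achieved at one stage become the inputs to the next, can be sketched in a few lines of code. This is an illustrative sketch only, not part of the Guide; the function name and example data are invented.

```python
# Illustrative sketch of the cyclical view of capacity development:
# outcomes achieved in one stage are carried forward as inputs to the
# next stage. All names and data are hypothetical.
def next_stage_inputs(current_inputs, achieved_outcomes):
    """Merge a stage's achieved outcomes into the next stage's input set."""
    return sorted(set(current_inputs) | set(achieved_outcomes))

# Hypothetical example: a stage-1 capacity outcome becomes a stage-2 input
stage1_inputs = ["basic financial records", "trained staff"]
stage1_outcomes = ["functioning financial management system"]
stage2_inputs = next_stage_inputs(stage1_inputs, stage1_outcomes)
```

The point of the sketch is that an indicator tied to "functioning financial management system" would be read as an outcome in stage 1 but as an input in stage 2, which is what makes indicators relative in this view.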
Part 3 Monitoring and Evaluating Capacity-Building Interventions
Part 2 described a generic conceptual framework for understanding the role of capacity in the health sector and suggested possible capacity variables for each level. This part presents the six steps for developing a monitoring and evaluation plan for a specific capacity-building intervention. At the heart of this process is the development of a “capacity map” or conceptual framework that applies to the particular capacity-building intervention under study. The six steps are listed in Box 4.

Ideally an M&E plan should be formulated during the design and planning of a capacity-building or performance improvement intervention. Evaluators and program planners should work together with key stakeholders to conduct a needs assessment, define the intervention strategy, and construct an M&E plan. Since capacity building is often one strategy in a broader approach to improving performance, capacity-building M&E should fit into the overall performance-monitoring plan.

An M&E plan for capacity building states what is to be evaluated, what evidence is needed to answer key evaluation questions, how the data will be used, who will use the data, and for what purpose. The intended result of the planning steps is a clearly defined guideline for data collection, analysis, and use for assessing the effectiveness of a capacity-building intervention. In general, capacity-building M&E plans contain the following:

· a conceptual framework
· a definition of essential variables of capacity and performance
· hypotheses on important links between these capacity and performance variables
· identification of the stages of capacity
· indicators and methods
· a timeframe, and
· a dissemination strategy
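The components listed above can be held in a simple structure so that a planning team can check at a glance whether anything is still undefined. The skeleton below is a hypothetical planning aid, not a format prescribed by the Guide; every value is an invented placeholder.

```python
# Hypothetical skeleton of a capacity-building M&E plan holding the
# components listed above. All values are invented placeholders.
mne_plan = {
    "conceptual_framework": "capacity map for the intervention",
    "capacity_and_performance_variables": ["financial self-reliance", "coverage"],
    "hypothesized_links": [("financial self-reliance", "coverage")],
    "stages_of_capacity": ["baseline", "intermediate", "sustained"],
    "indicators_and_methods": {"coverage": "routine service statistics"},
    "timeframe": "intervention years 1-3",
    "dissemination_strategy": "reports to internal and external stakeholders",
}

# Trivial completeness check: list any component left empty
missing_components = [key for key, value in mne_plan.items() if not value]
```

Such a check only confirms that each component has been filled in; judging whether the content is adequate remains the work of planners, evaluators, and stakeholders.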
In practice, one finds an inherent tension in defining the purpose of capacity-building M&E. Managers generally use capacity-building M&E results for two main reasons. The first is primarily an internal function, that is, improving capacity and capacity-building strategies. The second is primarily an external function, that is, reporting on the progress of a capacity-building intervention to various funders and other external stakeholders. While the two purposes are not mutually exclusive, managers must guide the M&E process carefully to ensure the best possible outcome. Too much attention to serving external (often donor) needs has been found to dilute the use of M&E for improving capacity-building strategies and organizational learning (Horton, 2001; Morgan, 1997). Lack of attention to valid measures of change (or relying too much on self-reported perceptions of capacity) can undermine the credibility of evaluation results. Box 5 summarizes key advice on constructing a capacity-building M&E plan.
DO
· Develop the capacity-building M&E plan during the intervention design phase
· Involve all stakeholders, both internal and external, in developing the M&E plan, particularly the purpose of the evaluation
· Be prepared to negotiate with stakeholders on the purpose of the evaluation and make all expectations transparent

DON’T
· Base M&E plans only on the needs of external stakeholders (mostly donors) at the expense of meeting internal information needs
· Miss opportunities to reflect and learn about capacity development through M&E
Box 6: Characteristics of a Good Performance Objective
· Measurable
· Reflects a needed change
· Relates to a clear product or action
· Relates to a defined target population
· Performed by specific delivery agent (e.g., organization, community group, etc.)
· Relevant to a particular context/situation
Examples
exists for the intervention, designers should review the assumptions and relationships among variables depicted in this diagram to understand the expected role of capacity building. If an overall conceptual framework for the intervention does not already exist, it is essential to construct one to support capacity mapping.

How to Map Capacity

The process of developing a capacity map is outlined in Box 7. During this process, planners, evaluators, and key stakeholders might like to use the series of questions in Box 8 to guide discussion. At a minimum, they should consider the following two questions:

1. At which level is capacity required to ensure the stated performance objectives?
In other words, what level is likely to be the main focus of capacity-building efforts? The generic capacity map (Part 2, Figure 1) defines four different levels where capacity is needed in the health sector: system, organization, health personnel, and individual/community. Careful definition of the performance objectives in Step 2 and a clear understanding of the capacity-building strategy should help evaluators answer these questions. For example, if performance gaps are found in a specific health facility, then it is likely that capacity-building interventions will seek to improve capacity outcomes at the organization or individual level. The first map would focus on one of those levels.

2. What capacity outcomes are expected at that level to improve performance?
Once the level has been specified, designers should identify aspects of capacity that might influence the specific performance objective at that level and express them as capacity outcomes. Morgan (1997) defines capacity outcomes as the “product of new learning and abilities that eventually become part of the organization or system, and support new levels of performance.” Designers can refer to guides on organizational capacity development, for example, to help guide the choice of capacity outcomes. However, capacity outcomes should always be tailored to performance objectives or standards of the particular intervention or organization under study.

At the intervention design phase, it is worth casting a wide net to consider all possible aspects of capacity that might relate to desired performance. Brainstorming on capacity can then lead stakeholders or participants in this mapping process to begin to prioritize areas for capacity-building intervention. Where parameters of an intervention are already set or where a structure for brainstorming is needed, designers might choose two or three different areas of capacity development, express them as capacity outcomes, and then map them. Although capacity building often tries to address multiple capacity gaps simultaneously, for measurement purposes it is advisable to choose a limited number of key capacity outcomes for capacity mapping.

For example, in Maps 1-3 below, the performance objective for the (fictitious) Family Health Organization is defined as “consistent delivery of a package of essential, good-quality family planning services to a defined population.” Performance variables might include coverage, quality, and consistency, which would be expressed as indicators. The three key capacity outcomes for this specific performance objective are defined as financial self-reliance, quality assurance practices institutionalized, and health services able to respond to client needs. Although many other aspects of capacity might influence coverage, quality, and consistency in the delivery of family planning services, this organization has chosen to concentrate on these three areas.
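To make concrete what it means to express a performance variable such as coverage as an indicator, here is a minimal, hypothetical calculation; the function name and all figures are invented for illustration and do not come from the Guide.

```python
# Hypothetical coverage indicator: the share of a defined target population
# served in a reporting period, expressed as a percentage. All figures
# below are invented.
def coverage_indicator(clients_served, target_population):
    """Coverage (%) of the defined target population."""
    if target_population <= 0:
        raise ValueError("target population must be positive")
    return 100.0 * clients_served / target_population

# e.g., 1,200 family planning clients served out of a target of 4,800
rate = coverage_indicator(clients_served=1200, target_population=4800)  # 25.0
```

Quality and consistency would need their own operational definitions (for example, from facility audits or repeated rounds of service statistics) before they could be computed in a comparable way.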
Box 9: Guidance on Capacity Mapping

· Capacity mapping should refer to the logic of the overall program, project, or intervention. Horton et al. describe this approach as “referring to a theory of action” that binds interested parties into a single vision (Horton, 2001). Whether mapping capacity during intervention design or in the context of an already defined intervention strategy, it is advisable to refer to existing data on the intervention area, including needs assessments, capacity assessments, etc.

· When mapping capacity it may be helpful to refer to the conceptual framework in Part 2 for a general review of the role capacity plays in improving performance in the health sector and examples of capacity variables.

· Be realistic about your expectations of the role of capacity. There is a tendency to consider every aspect of resources and behavior in an individual, organization, or system as a capacity variable, and to risk measuring too much.

· Look beyond individual capacity and training solutions to identify capacity variables. For example, during discussions on the capacity framework with SAIDIA, a Kenyan NGO (nongovernmental organization) that provides health services and community development opportunities, staff at first claimed that training health workers and community members was their only work in capacity building. Yet, with further discussion, participants illustrated a wide range of capacity-building activities at all levels, including their work in coordination and collaboration with the public sector, and courting relations with donors that fund the NGO.

· Map capacity with a wide range of stakeholders to inspire a sense of ownership of capacity building and appreciation of the use of evaluation in programming. Since capacity-building M&E delves into many internal characteristics and processes found within systems, organizations, and communities, it requires considerable investment on the part of the members of these groups to achieve success. The quality of information obtained from evaluation, therefore, is directly affected by the extent to which participants develop a feeling of ownership of the M&E activity and value the data being collected.
To build such a capacity map, planners and evaluators can use a facilitated discussion among stakeholders as well as tap existing data from needs assessments, capacity diagnoses, and prior monitoring. Evaluators might also draw on the experience of system and organizational theory, theories of adult learning, and community development to hypothesize the most likely causes of poor performance. Box 9 provides some general guidance for capacity mapping.

The following three diagrams (Maps 1, 2, and 3) provide examples of capacity maps that define in a very general sense some possible inputs, processes, and outputs related to the three particular organizational capacity outcomes for the hypothetical Family Health Organization: financial self-reliance, quality assurance practices institutionalized, and health services able to respond to client needs and demands.

7. This matrix is adapted from an exercise completed by participants at a Workshop on Sustainability and Capacity Building hosted by PLAN International in May 2001 in Dakar, Senegal.
Map 1: Organizational Capacity Map - Single Level
Capacity outcome: Financial self-sufficiency

Intervention
Performance objective: Consistent delivery of a package of family planning services to a defined population (coverage, quality, and consistency).
Capacity-building objective: Improve financial self-reliance of health facilities in District One.
Strategies and activities: Improve leadership and financial planning skills of district managers; introduce new procedures for strategic planning; develop links between health facilities and communities leading to joint planning and management; develop skills in grant application writing and reporting to funders.

Inputs: leadership; finances; infrastructure; human resources; finance policy; organizational culture.
Processes: strategic & operational planning; financial management; resource mobilization; human resource management & development; research, monitoring & evaluation; coordination with other internal units; creation & maintenance of linkages with external groups (specifically, funders); advocacy; managing quality of care; community mobilization.
Outputs: strategic & operational plans developed and implemented; staff trained; functioning financial management system; external linkages established (to donors, partners, individuals, community).
Capacity outcome: financial self-reliance (ability to generate resources & maintain a healthy funding base).
Performance objective: consistent delivery of an essential package of good-quality family planning services to a defined population (coverage, quality, and consistency).

Context or operational environment: national policy on fee-for-service; national financial management procedures.
Map 3: Organizational Capacity Map - Single Level
Capacity outcome: Health services able to respond to client needs and demands
Intervention
Performance Objective: Consistent delivery of a package of family planning services to a defined population (coverage, quality and consistency).
Capacity-building objective: Improve the ability of the health services to respond to client needs in District One.
Strategies and activities: Introduce incentives for quality of care practices; improve client provider communication skills; research and design optimal mechanisms for communication and interaction between communities and health facilities.
Organization (Local NGO). Capacity inputs: health center personnel (quantity/basic training); community health workers (quantity). Capacity processes: designing & planning a training program; supervision and mentoring of CHWs. Capacity outputs: training plan developed; training materials developed. Capacity outcomes: successful organization & execution of training of trainers; ability to recognize training needs and meet them.

Personnel. Capacity inputs: curricula for Training of Trainers & Community Health Workers. Capacity processes: participation in Training of Trainers; participation in CHW training on IEC. Capacity outputs: trainers meet standards following course; CHWs meet standards following course. Capacity outcomes: capacity of CHWs to deliver IEC on immunization; CHWs skilled & motivated to provide services. Performance: effective delivery of IEC services (quality of IEC sessions).

Community. Capacity inputs: exposure to immunization program. Capacity processes: community meetings with CHWs. Capacity outputs: level of participation in health care learning activities; recognition of need for immunization. Capacity outcomes: community knowledge of immunization benefits and side effects; caregivers value immunization. Performance: improved demand for immunization in communities served by CHWs (coverage).

Context or operational environment: national economic growth; national health expenditures on immunization; donor support for immunization.
Interpreting and Using Capacity Maps

The examples of capacity maps above illustrate how the different factors of capacity work together to drive or influence performance. They enable designers to view these elements in a more systematic way that promotes common understanding and evaluation. When capacity mapping is conducted after an intervention has been planned, it can be used to help evaluators understand the intentions of managers in terms of their strategy for capacity development. During mapping, managers are encouraged to pinpoint and define clearly the areas of potential change that will serve as indicators of progress in capacity development. Used after the design phase, the mapping exercise can reinforce existing capacity-development strategies, thereby increasing their specificity. Sometimes mapping can also prompt planners to reexamine strategic choices and change their tactics. Indeed, this use of capacity mapping for strategic planning, and the linking of M&E with program strategy, should be encouraged throughout the course of the capacity development/performance improvement intervention.

A first iteration of a map may depict capacity and performance in broad terms. It should present capacity variables in a general way. Planners and evaluators then can discuss these variables and narrow them down to priority areas of intervention or measurement, and describe them more specifically. The second or third iteration of a map should be more precise in depicting the variables to be monitored over the course of the intervention. Map 5a provides an example of the first iteration of multiple-level capacity mapping. It contains a wide range of general categories. Map 5b illustrates the second iteration, in which variables are specified in greater detail.

Through mapping, evaluators can identify and organize the key questions to be addressed regarding expected changes in the quantity, quality, cost, and other key aspects of capacity which require monitoring over time. As planners and evaluators interpret the map, they will narrow down the focus of monitoring and evaluation activities. In Step 4, below, evaluators define indicators that measure these variables and build them into a monitoring and evaluation plan.
Administrative
· Is your organization influenced by the rules of other organizations, institutions, and groups to which it is related or might be expected to be related?
· Is your organization influenced by expectations of consumers, policymakers, suppliers, competitors, and other organizations in its external environment?
· Are your organization's objectives and activities influenced by governments, donors, and other organizations?
· Is your organization influenced by important sector rules and regulations?
· Do administrative norms/values in your country support or hinder the work your organization intends to carry out?

Legal
· Do the laws of the country support the role played by your organization?
· Does the legal framework support the organization's autonomy?
· Is the legal framework clear?
· Is the legal framework consistent with current practice?
· Is the legal regulatory context conducive to your organization's work?
· Does your organization monitor changes in the legal context that could affect the position of the organization?

Political environment issues
· Do the political and ideological trends of the government support the kind of work the organization does?
· Does the government system facilitate collaborative arrangements?
· Does the organization play a role in national or sector development?
· Does the organization have access to government funding?
· Does the organization have access to international funding?
· Does the organization have access to the government's knowledge and publications?
· Do government policies and programs support the organization?

Sociocultural environment
· Is equity in the workplace a social value?
· Does the organization account for the effect of culture on program complexity?
· Do values found in the sociocultural environment support the work of the organization?
· Does the organization have access to a pool of capable human resources to recruit staff?
· Does the organization analyze and link demographic trends to its work?

Economic environment
· Does the government's economic policy support the organization's ability to acquire technologies and financial resources?
· Is money available to do the organization's work?
· Do donors support the organization?

Technological environment
· Is adequate physical infrastructure (telecommunication, transport) in place to support the organization's work?
· Is the technology needed for your work supported by the overall level of national technology development?
· Does the government system facilitate the organization's process for acquiring needed technology?
· Is the level of human resource development in your organization adequate to support new technology?

Stakeholder environment
· Is the community involved in the organization?
· Are partners involved in the organization?
· Do governments value the organization's products and services?
· Do governments request or use the organization's products and services?
· Do similar organizations compete or cooperate with your organization?
· Do donors influence the organization?
· Do funders support the organization?

The questions above are adapted from Enhancing Organizational Performance (Lusthaus et al., 1999). While they are focused on the organization level, many of them can be adapted for any level of the health system.
Evaluators should also adapt these indicators to a particular organization's baseline assessment of capacity and its particular product or service. Expectations for improved performance and the timeframe of a specific capacity-building intervention also matter. An organization pursuing capacity improvement in reproductive health service delivery would choose different measures of change from one seeking capacity improvement in networking and partnering. Thus, at the outset of M&E planning, one should begin defining indicators based on the capacity variables identified in mapping rather than selecting indicators from a generic list. Map 6 illustrates how indicators can be added for each capacity variable, using the format from Map 3. The discussion on indicators below begins with general guidance on indicator design, provides examples of capacity indicators, and concludes with lessons learned from a variety of capacity-development experiences (in health and other sectors).

Working with Capacity Indicators

By now most program managers and evaluators at least have heard about what makes a good indicator. In general, all indicators should share the following traits:
· Validity: Validity refers to whether the indicator is measuring what it is supposed to measure. Indicators should have a close connection with the intervention.
· Reliability: Reliability refers to the degree of random measurement error in an indicator. Error may result from sampling or non-sampling sources, and from whether the response is inherently objective or subjective.
· Well-defined: Indicator definitions should use clear and precise terms so everyone involved can understand what is being measured.
· Sensitivity: A sound indicator is sensitive to the changes in program elements being assessed.

Evaluators also need to take into account the availability of data for "operationalizing" indicators and the potential costs of gathering data, in terms of financial resources and time.

Table 4 provides examples of health-sector capacity indicators by level (system, organization, health personnel, and individual/community) and measurement variable (input, process, output, and outcome) taken from various sources (Morgan, 1997; Horton et al., 2000; Bertrand and Escudero, 2002; Brown, LaFond, and Macintyre, 2001). It suggests wide variation in the indicators currently used to measure capacity and the need for both quantitative and qualitative data sources. The table is not intended to represent relationships among these specific indicators. Box 11 provides examples of capacity indicators used in non-health sector programs. Table 5 gives examples of performance indicators at each level for reference.

Lessons for Indicator Development

The following lessons on indicator development are drawn from field experience in capacity measurement in health and other sectors (Morgan, 1997; Horton et al., 2000; Fort, 1999; Luoma, 2000; Ampomah, 2000; Catotti, 1999; Pyle and LaFond, 2001).

Lesson 1: Indicators should reflect an understanding of the change strategy for capacity development.

The process of choosing capacity indicators should feed into the overall change strategy designed for building capacity and improving performance. Indicators should be developed alongside capacity mapping while designing a capacity-building intervention. Evaluators also might seek to understand how information is currently used in the organization or system to ensure that indicators become incentives for change and not barriers.
Example 1
1. Capacity indicator related to decentralized payment functions administered by local officials, district assembly members, and financial and political employees: Ability of the system to transfer funds between authority levels (for example, within 45 days of the end of the quarter) and/or produce audited statements within six months of the end of the fiscal year.
2. Capacity indicator related to a community water management committee's role in water pump maintenance: A functioning Pump Management Committee that meets at least once a month and keeps the pump functioning 90 percent of the time in normal circumstances.
3. Capacity indicator related to coordination of information among six ministries working on soil erosion: Twenty-five percent increase in the number of projects that require contributions from two or more departments.
4. Capacity indicator related to the capacity of a government department to carry out joint surveys of client farmers in the delta area of the cotton region: Acceptance of survey methods as an effective tool by senior research officers and their incorporation into the work program of the agencies.
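Indicators like these can be operationalized directly from routine monitoring records. As an illustration, the sketch below (a hypothetical function and hypothetical data, not taken from any of the programs cited) checks the pump-management indicator in item 2: the committee meets at least once a month, and the pump functions at least 90 percent of observed days.

```python
from datetime import date

def pump_indicator_met(days_functioning, days_observed, meeting_dates, months_observed):
    """Hypothetical check of the pump-management capacity indicator:
    the committee met at least once in every observed month AND the
    pump worked at least 90% of observed days."""
    uptime = days_functioning / days_observed
    # Count distinct (year, month) pairs in which a meeting took place
    months_with_meeting = {(d.year, d.month) for d in meeting_dates}
    return uptime >= 0.90 and len(months_with_meeting) >= months_observed

# Hypothetical monitoring data: 171 of 180 days functioning (95% uptime),
# one committee meeting in each of six observed months
meetings = [date(2002, m, 15) for m in range(1, 7)]
print(pump_indicator_met(171, 180, meetings, 6))  # True
```

The same pattern, a threshold applied to routinely recorded counts, fits most of the quantitative indicators in this box.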
Example 2
Indicators related to motivation
Motivation to implement the strategic approach
Motivation to undertake strategic planning
Interest in improving the management information system
Interest in designing and managing competitive projects
Indicators related to capacity
Knowledge of the strategic approach
Skills to undertake strategic planning
Knowledge about designing and managing competitive projects
Knowledge about the foundations of an information management system
Indicators related to context or environment
Degree to which tasks demand conceptual and methodological creativity and innovation
Positive appreciation of performance in institutional evaluations
Degree of autonomy to undertake work
Contribution to improvement of the management information system
Lesson 2: Capacity indicators should capture organizational and behavioral change as well as material and technical change.

Lesson 4: Indicators should encourage ownership and appreciation of the capacity-building and M&E process.
Map 5a: Mapping Capacity First Iteration (fragments)
Capacity variables include: learning; provider-client interaction; leadership; community experience with family planning; links to community; number of contacts; outcome of contacts.
Context or operational environment: national policy on consumer roles and rights; published norms and standards of care.
Map 5b: Mapping Capacity Second Iteration
Capacity outcome: Health services able to respond to client needs and demands
Intervention
Performance objective: Consistent delivery of a package of family planning services to a defined population (coverage, quality, and consistency).
Capacity-building objective: Improve ability of health services to respond to client needs and demands in health facilities in District One.
Strategies and activities: Introduce incentives for quality of care practices; improve client provider communication skills; research and design optimal mechanisms for communication and interaction between communities and health facilities.
System. Capacity inputs: civil service administration practices that support counseling and provision of family planning; leadership.

Context or operational environment: national policy on consumer roles and rights; published norms and standards of care.
System. Capacity inputs: national policy on immunization & CHWs (policy exists & is favorable).

Organization (Local NGO). Capacity inputs: health center personnel (quantity/basic training); community health workers (quantity/basic training). Capacity processes: designing & planning a training program (planning mechanisms exist & planning skills demonstrated). Capacity outputs: training plan developed (plan exists); training materials developed (quantity/quality of materials). Capacity outcomes: successful organization & execution of Training of Trainers (TOT completed; trainees' knowledge improves; trainees satisfied).

Personnel. Capacity inputs: curricula for Training of Trainers and Community Health Workers (curriculum exists). Capacity processes: participation in Training of Trainers; participation in CHW training on IEC (% of personnel or CHWs completing training). Capacity outputs: trainers meet standards following course (post-test scores); CHWs meet standards following course (post-test scores); IEC sessions provided (number/frequency of IEC sessions). Capacity outcomes: capacity of CHWs to deliver IEC on immunization; CHWs motivated to provide services (attitudes of CHWs to IEC). Performance: effective delivery of IEC services (quality of IEC sessions).

Community. Capacity inputs: exposure to immunization program (past experience with childhood immunization). Capacity processes: perceptions of CHWs (community relationship with CHWs and acceptability of their role). Capacity outcomes: community knowledge of immunization benefits and side effects (index of immunization program message recall). Performance: improved demand for immunization services in communities serviced by CHWs (immunization service utilization & coverage).
Lesson 5: The results of indicator-based capacity-building M&E should be interpreted wisely.

There are documented challenges to using indicators to monitor and evaluate capacity building. Evaluators can manage each challenge with careful planning of M&E. Some of these challenges are detailed below.

Capacity development is context specific. It reflects qualitative (as well as quantitative) changes in resource availability and behavior. Given the wide range of possible scenarios and capacity/performance objectives, it is often not possible to establish objective standards that would allow local or regional comparisons in capacity across similar types of entities. Internal benchmarks can be set, but they may not be valid for other entities or contexts. It follows that aggregation of indicators on a district, regional, or national scale is not likely to result in useful information for M&E.

Selection of capacity indicators is often highly subjective. To encourage ownership and relevance, evaluators often rely on perceptions of capacity and capacity change among participants in the capacity-development process as the basis for measuring progress. Thus, there is a need to balance these subjective measures with a range of objective indicators and data-gathering strategies, to select indicators based on program objectives, and to develop a manageable set to monitor over time.

Evaluators are experimenting with indices or complex indicators that combine a short list of essential indicators (sometimes weighted by strength of influence) into a single measure of capacity. Of the few examples in the health sector, the PRIME project used a single index to assess capacity dimensions of organizations that conduct training in reproductive health (Fort, 1999; Ampomah, 2000; Catotti, 1999). This index also takes into account different possible stages of capacity by using a scale from 0 to 4 to assess the progress of an organization for each indicator under study. An example of the indicators and scales used in the Training Organizations Index and a presentation of the results of a capacity assessment in El Salvador are found in Annexes A and B. The PRIME Project did not use this index to conduct routine monitoring and evaluation of training organizations; however, it has adapted many of these indicators and the scaling approach for use in its performance monitoring plan (PRIME II, 2001). Other examples that use scales or scoring as part of a capacity index can be found in the Management and Organizational Sustainability Tool (MOST) (MSH, 1996) and in tools developed to evaluate the capacity of agricultural research organizations (Horton et al., 2000).
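A minimal sketch of this kind of index (with hypothetical indicator names and weights, not those of the actual PRIME Training Organizations Index) shows how 0-4 scores, optionally weighted by strength of influence, collapse into a single capacity measure:

```python
def capacity_index(scores, weights=None):
    """Combine 0-4 indicator scores into a single 0-4 index,
    optionally weighting each indicator by its strength of influence."""
    if weights is None:
        # Unweighted: every indicator counts equally
        weights = {name: 1.0 for name in scores}
    total_weight = sum(weights[name] for name in scores)
    return sum(score * weights[name] for name, score in scores.items()) / total_weight

# Hypothetical scores for one training organization (0 = no capacity, 4 = full capacity)
scores = {"training plans": 3, "curricula": 4, "supervision": 2, "funding base": 1}
weights = {"training plans": 2.0, "curricula": 1.0, "supervision": 1.0, "funding base": 1.0}
print(capacity_index(scores))           # unweighted: 2.5
print(capacity_index(scores, weights))  # weighted: 2.6
```

As the lesson above cautions, a single number like this is only as meaningful as the subjective scoring behind it, so the component scores should be reported alongside the index.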
Table 5: Examples of Performance Indicators in Current Use in Health Programs

Health system:
· Average time/distance to the nearest reproductive health facility offering a specific service
· Percent of facilities where percent of clients receive the service that meets the expected standards
· Number/percent of trainees deployed to an appropriate service delivery point and job assignment
· Percent of facilities that experience a stockout at any point during a given time period
· Percent of health facilities providing STI services with adequate drug supply
· Contraceptive prevalence rate (CPR)
· Disability adjusted life years (DALY)
· Disability adjusted life expectancy (DALE)
· System responsiveness to clients
· Index of equality of child survival
· Total health expenditure as a percent of GDP
· Public expenditure on health as a percent of total public expenditure
· Out-of-pocket expenditure as a percent of total health expenditure

Organization:
· Percent of mothers examined every 30 minutes during the first two hours after delivery
· Percent of data elements reported accurately in MIS reports
· Family planning continuation rates in catchment population
· Percent of annual revenue generated from diverse sources
· Percent of target population that received DPT 3 immunization
· Cost of one month's supply of contraceptives as a percent of monthly wages

Health personnel:
· Percent of deliveries in which a partograph is correctly used
· Percent of newborns receiving immediate care according to MOH guidelines
· Percent of pregnant women counseled and tested for HIV
· Percent of STI patients appropriately diagnosed and treated

Individual/Community:
· Percent of communities with active health center management committee
· Percent of target population that received DPT 3 immunization
· Percent of non-users who intend to adopt a certain practice in the future
· Percent of infants 0 to <6 months of age who are exclusively breastfed
· Percent using condoms at last higher-risk sex
STEP 5: Identify Appropriate Methodological Approach and Sources of Data

The fifth step in developing a capacity-building M&E plan involves defining the methodological approach, identifying sources of data, and choosing (or developing) data collection tools. Evaluators should ask the following questions:
· Which methodological approach is appropriate?
· What sources of data are necessary for measuring the indicators defined in Step 4?
· Are there any existing tools for measuring capacity that are appropriate for my purposes?

Methodological Approaches and Challenges

As discussed throughout this guide, monitoring and evaluation require different methodological approaches and have different data needs. The choice of methods and data sources relates mainly to the purpose of the evaluation (see Step 1).
· Is the purpose to monitor the implementation of a capacity-building intervention, assess its effectiveness, or both?
· Will the results be used mainly for internal improvements or external reporting?

Clearly, all capacity-building programs need to be monitored to ensure they are working well (i.e., to track changes in inputs, processes, outputs, and outcomes). However, the evaluation of program effectiveness happens less frequently and only for selected interventions due to cost and complexity. In the case of capacity-building evaluation, it can be particularly difficult to conduct evaluations that look for an association between a capacity-building intervention and changes in capacity or performance. These changes can occur for a number of reasons in addition to the capacity-building intervention itself (e.g., contextual influence). Since capacity measures are not easily quantified, and identifying similar organizations or systems to facilitate comparison (as in a case-control study) is difficult, experimental designs are not feasible or practical for capacity measurement. As James (2001) notes about capacity-building evaluation, "precise measurement and attribution of cause and effect is rarely possible and never cost effective. The best we can hope for is plausible association."

Evaluators are therefore advised to recognize the challenges to capacity-building M&E and set realistic aims for evaluation. Many of these challenges have been discussed previously in this guide. Some of them relate to the inherent nature of capacity (capacity and capacity building are dynamic, multidimensional, and contextual), while others are a function of the early stage of development of capacity measurement. Four of the main challenges are detailed below.

Capacity develops in stages
Capacity measurement tools should be able to capture different stages of development of communities, health personnel, organizations, or health systems. The "MSH organizational profile" used in the Management and Organizational Sustainability Tool (MOST), for example, has identified different benchmarks according to an organization's stage of development (nascent, emerging, mature). Capacity measurement must be able to capture individual elements of capacity and combinations of elements, and relate them to the stage of development of the entity being assessed.

Changes in capacity need to be measured over time
Repeated measures are needed to capture the interim steps in capacity-building processes as well as trends in outcomes. While there are examples of repeated application of capacity
· combine qualitative and quantitative methods, such as focus groups, individual interviews (with both closed- and open-ended questions), surveys, and document reviews.

· address more than one level. Capacity often occurs at several levels simultaneously. New measurement tools are needed to capture capacity building at a single level and address the relationship between levels.

· include self-assessment techniques in combination with external or standardized methods. (See Box 13 for a discussion of self-assessment and external assessment.) Evaluators are urged to strike a balance between meeting the need for evaluation data that different stakeholders will deem "objective" or credible, and promoting performance improvement through monitoring and evaluation.

· triangulate methods and data sources. Triangulation examines results from a variety of data-collection instruments and sources, strengthening the findings of capacity-building monitoring and evaluation. If all data lead to the same conclusion, then there is some confidence the result actually will reflect changes. Where there is discordance in the results, it is necessary to examine possible sources of the differences. Looking at other sources of data on similar topics can help understand findings as well.

· use data interpretation workshops to obtain input from a range of stakeholders involved in the program (both internal and external).

Sources of Data

A number of data sources are available for monitoring and evaluating capacity building. Since capacity measurement often includes the use of multiple indicators, monitoring and evaluation usually requires multiple data sources. Indicator design should take into account the potential availability of data, particularly from existing sources. Organizations and systems often have records and reports that provide insights into different aspects of capacity. Some examples of existing data sources are presented below.

In many cases, however, it will be necessary to collect new data to operationalize the indicators selected. As noted above, issues such as data sensitivity (with respect to its effect on validity), the purpose of monitoring and evaluation, and the cost in terms of time and resources required should guide evaluators in determining what data will be collected and how they will be collected.

Sources of data by level of capacity include:

System: national health policy records, national data-collection efforts (census, vital statistics, national/regional surveys), international surveys (e.g., FPPE, API, DHS),9 MOH policies, financial reports, legal or regulatory statements (bills, acts, recommendations, white papers, etc.).

Organization: routine health service records and reports, budget and expenditure records, financial statements, personnel records, program and donor reports, constitutional documentation, strategic and annual plans, meeting minutes, evaluations and audits, organizational networking analysis, organizational assessments.

9 FPPE (Family Planning Program Effort Score); API (AIDS Program Effort Index); DHS (Demographic and Health Survey).
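The triangulation logic described above can be sketched as a simple concordance check across data sources (hypothetical source names and conclusion labels; in practice the comparison is usually qualitative rather than a vote count):

```python
from collections import Counter

def triangulate(findings):
    """Summarize agreement across data sources on an observed capacity change.
    `findings` maps each data source to its conclusion
    (e.g., 'improved', 'no change', 'declined')."""
    counts = Counter(findings.values())
    conclusion, votes = counts.most_common(1)[0]
    agreement = votes / len(findings)
    # Discordant sources signal where to probe for explanations
    discordant = sorted(s for s, f in findings.items() if f != conclusion)
    return conclusion, agreement, discordant

findings = {
    "facility records": "improved",
    "staff interviews": "improved",
    "client exit survey": "no change",
}
conclusion, agreement, discordant = triangulate(findings)
print(conclusion, round(agreement, 2), discordant)  # improved 0.67 ['client exit survey']
```

High agreement lends confidence to the finding; the discordant sources are the starting point for examining possible reasons for the difference.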
Box 13: Advantages and Disadvantages of Self-Assessment and
External Assessment Techniques
While practitioners value the role of self-assessment tools in stimulating interest in capacity building and launching a change process, for monitoring and evaluation purposes it is important to consider the potential advantages and disadvantages of both internal and external approaches.
Recommendation:
· Use a mixture of methods that combine subjective and objective measurement.
Table 7: Capacity Measurement Tools
Self/ Single/
Tool Developed By Level Methods External Multiple Short description
Assessment tools
Enhancing Organizational IDRC Organization Qualitative and External and Multiple Measures the results of an organization’s programs, prod-
Performance: A Toolbox for quantitative self-assessment ucts and services and then integrates these results with the
Self Assessment techniques of formative assessment in which the assessment
http://www.idrc.ca team becomes involved in helping the organization meet its
goals.
Outcome Mapping: A Method IDRC System Qualitative and Self-assessment Multiple
Outcome Mapping characterizes and assesses the contribu-
for Reporting on Results Organization quantitative
tions a project or organization makes to significant and
http://www.idrc.ca/telecentre/
lasting changes (outcomes). In Outcome Mapping a pro-
evaluation/html/29_Out.html
gram is assessed against its activities that contribute to a
desired outcome, not against the outcome itself.
Integrated Health Facility BASICS Organization Quantitative External as- Multiple This manual outlines the key steps for planning and con-
Assessment (IHFA) assessment sessment ducting an integrated health facility assessment at outpatient
http://www.basics.org/publica health facilities in developing countries. This assessment is
tions/pubs/hfa/hfa_toc.htm designed for use by primary health care programs that are
planning to integrate child health care services.
Management and Organiza- Family Planning Organization Qualitative Self-assessment Single The Management and Organizational Sustainability Tool
tional Sustainability Tool Management (MOST) is a package (instrument and user's guide) designed
(MOST) Development to facilitate management self-assessment and to support
http://erc.msh.org/mainpage.c (FPMD)/ management improvement. MOST uses an instrument to
fm?file=95.40.htm&module=t MSH help focus an organization on the actual characteristics of
oolkit&language=English their management, identify directions and strategies for
improvement, and set priorities for the management devel-
opment effort.
Management Development FPMD/MSH Organization Quantitative Self-assessment Single This tool includes four steps: 1) develop a preliminary
Assessment (MDA) management map to guide assessment; 2) develop and
http://erc.msh.org/mainpage.c administer MDA questionnaire to collect information on the
fm?file=95.50.htm&module=t management capabilities of organization; 3) analyze survey
oolkit&language=English results and develop a post-survey management map; and 4)
develop action plan for making improvements.
The Child Survival Child Survival System Qualitative and Self and internal Multiple Evaluation framework to systematically measure progress
Sustainability Assessment Technical (local) quantitative client assess- toward sustainable health goals. Process that projects can
(CSSA) Support (CSTS) Organization ment use to lead a participatory assessment with communities and
http://www.childsurvival.com Project/ORC Community local partners.
MACRO
Tool: The Institutional Strengths Assessment (ISA) Tool
Developed by: CSTS Project/ORC MACRO
Level: Organization (local)
Methods: Qualitative and quantitative
Self/external: Self and internal client assessment
Single/multiple: Multiple
Description: This self-assessment tool is currently being pilot tested by CSTS.
URL: http://www.childsurvival.com/tools/project_planning.cfm

Tool: INTRAH/PRIME Capacity Building in Training Questionnaire
Developed by: INTRAH/PRIME II
Level: Organization
Methods: Qualitative and quantitative
Self/external: Self and internal client assessment
Single/multiple: Multiple
Description: The framework and tool developed at the end of PRIME I have been used to aid program evaluation in different countries (e.g., Mexico, Ghana, India, and Bangladesh) when interventions have focused on strengthening training and service delivery institutions. The tool encourages organizations to discover the root causes of obstacles, with a sustained effort to build the organization's capacity to recognize, analyze, prioritize, and address problems.
URL: http://www.prime2.org/prime2/techreport/home/50.html

Tool: Client-Oriented, Provider-Efficient (COPE®)
Developed by: EngenderHealth
Level: Organization
Methods: Qualitative and quantitative
Self/external: Self-assessment
Single/multiple: Multiple
Description: COPE encourages and enables service providers and other staff at a facility to assess, jointly with their supervisors, the services they provide. Using various tools, they identify problems, find the root causes, and develop effective solutions.
URL: http://www.engenderhealth.org/ia/sfq/qcope.html
Note: COPE has now been adapted for use with maternal health services and community partnership (http://www.engenderhealth.org/news/newsreleases/020516.html).

Tool: Transformational Development Indicators Field Guide
Developed by: World Vision
Level: Community
Methods: Qualitative and quantitative
Self/external: External and self-assessment
Single/multiple: Multiple
Description: Provides technical guidance for measuring the Transformational Development Indicators. Includes 8 volumes that cover indicator definitions and methods for collecting, analyzing, and reporting on the indicators.
URL: http://www.worldvision.org (NOTE: Tool not yet available online)

Tool: Communication for Social Change: An Integrated Model for Measuring the Process and Its Outcomes
Developed by: Center for Communications Programs (CCP)/Johns Hopkins University
Level: Community
Methods: Qualitative and quantitative
Self/external: External and self-assessment
Single/multiple: Multiple
Description: Presents model, process and outcome indicators, and some data collection and analytical tools for use by communities.
URL: http://164.109.175.24/Documents/540/socialchange.pdf
Assessment tools

Tool: Assessing Institutional Capacity in Health Communication: A 5Cs Approach (work in progress)
Developed by: CCP/Johns Hopkins University
Level: Organization
Methods: Quantitative
Self/external: External and self-assessment
Single/multiple: Multiple instruments
Description: Scores organizational competence, commitment, clout, coverage and continuity.
URL: http://www.jhuccp.org

Tool: Management/Financial Sustainability Scale (MFSS)
Developed by: PASCA
Level: Organization
Methods: Quantitative
Self/external: External and self-assessment
Single/multiple: Single instrument
Description: Tools are in Spanish only.
URL: http://www.pasca.org

Tool: Systematic Approach Scale (SAS)
Developed by: PASCA
Level: Organization
Methods: Quantitative
Self/external: External and self-assessment
Single/multiple: Single instrument
Description: Tools are in Spanish only.
URL: http://www.pasca.org

Tool: Institutional Assessment Instrument (IAI)
Developed by: World Learning Project Inc.
Level: Organization
Methods: Qualitative and quantitative
Self/external: External assessment
Single/multiple: Multiple instruments
Description: Provides a framework for assessing the institutional needs of a single organization or a community of organizations. Pinpoints six key areas generally agreed to be the components of effective institutions.
URL: http://www.worldlearning.org or http://www.worldlearning.org/pidt/docs/wl_instcape.pdf

Tool: Institutional Development Assessment (IDA)
Developed by: SFPS
Level: Organization
Methods: Qualitative and quantitative
Self/external: External assessment
Single/multiple: Multiple instruments
Description: Documents existing capacity and identifies potential areas of collaboration and capacity building in the overall dimensions of management, financial management and technical capacity.
URL: http://www.fha-sfps.org/documentsdownload/Institutional%20Development%20Assessments.PDF

Tool: Organizational Capacity Assessment Tool (OCAT)
Developed by: Pact/Ethiopia
Level: Organization
Methods: Quantitative
Self/external: Self-assessment
Single/multiple: Multiple instruments
Description: A methodology for organizational capacity assessment and strengthening that helps organizations anticipate and overcome the greatest barriers to organizational change and growth. Through a guided self-assessment and planning process, organizations reflect upon their performance and select the tools and strategies they need to build capacity and broaden impact. A four-stage process: participatory tool design; guided self-assessment; data-guided action planning; and reassessment for continual learning, which allows organizations to monitor change, track the effectiveness of their capacity-building efforts, and integrate new learning as their needs change and capabilities increase.
URL: http://www.pactworld.org
STEP 6 Develop an Implementation and Dissemination Plan

The final step in planning for capacity-building M&E is to develop an implementation plan to monitor and evaluate capacity. At a minimum, the implementation plan should include a timetable for data gathering and review of data, individual responsibilities, a dissemination strategy, and a budget. In practice, capacity measurement, as a reflection of capacity development, is likely to be an iterative process rather than a perfunctory "before and after" look at capacity. Experienced evaluators (Horton et al., 2000; Lusthaus, 1999; Earl et al., 2001; Morgan, 1997) recommend regular review and discussion of monitoring results with stakeholders to guide the process of capacity development and encourage ownership of the monitoring process. Setting aside enough time to present the results periodically and allow for discussion and feedback from the stakeholders will greatly enhance data interpretation and the impact of the evaluation itself. As Morgan (1997) notes, "Indicators by themselves provide few answers. The information they produce must be screened through the mental models of the participants to acquire any diagnostic value."

When developed before the evaluation begins, a dissemination strategy guides data collection and analysis. Developing a format for presentation of the results to the appropriate audience identifies weaknesses and gaps in the evaluation plan. It also helps to guide the direction of the evaluation by emphasizing what is needed for addressing the needs of the data users and raising awareness of possible sensitivities. Gaps or excess data collection become obvious, and further refinement of the number or type of indicators being measured is often necessary. In the process, evaluators identify all key stakeholders that should be alerted to the results, if they are not directly involved in the evaluation itself. The recommended forum for disseminating results is one that promotes discussion and interaction among the key stakeholders and those in a position to influence the future direction of the capacity-building efforts. Sufficient funds must be set aside so that all those who make a credible contribution to the evaluation receive at least summary results in a timely and relevant fashion.
ized methods, triangulate methods and data sources, and use data interpretation workshops.
Dimensions, Objectives, Indicators, and Scoring

Dimension IV: Organization

Leadership
- Objective: Vision of training as a means to improve services
  Indicator 12: Training plans are linked with quality of care and increased service access
  Scoring: 0 = Providers' training plans are not coupled with service and quality of care objectives, to 4 = Training plans form part of the quality of care and service improvement strategies
- Objective: Training is an integral part of the organization's strategic planning
  Indicator 13: A training plan and activities are part of the organization's strategic plans
  Scoring: 0 = Training is not part of the organization's strategic plan, to 4 = Training is part of the organization's long-term (multiannual) strategic plan
- Objective: Promotion of public-private collaboration
  Indicator 14: Evidence of public-private collaboration
  Scoring: 0 = No evidence of public-private collaboration, to 4 = Evidence of public-private collaboration

Infrastructure
- Objective: Existence of decentralized training units in all areas
  Indicator 15: Active training units exist at central and peripheral levels
  Scoring: 0 = No decentralized training units (even if there is one at the central level), to 4 = Active training units at central and peripheral levels

Human resource development
- Objective: HR training (TOT, formative and refresher courses) is an integrated part of a Performance Improvement system (e.g., incentives, follow-up and supervision, efficacy)
  Indicator 16: HR development is part of a performance improvement (PI) strategy
  Scoring: 0 = Training is not coupled with providers' improvement objectives, to 4 = Training is part of HR development and performance

Administrative
- Objective: Existence of a reporting system for tracking the number and characteristics of trainees and materials, according to needs
  Indicator 17: Existence and use of a Training Needs Assessment (TNA)
  Scoring: 0 = No TNA customarily done, to 4 = TNA is an integral and continuous part of the training strategy
  Indicator 18: Existence of an MIS for trainees and materials matching the TNA
  Scoring: 0 = No MIS for tracking progress, to 4 = MIS for training
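Ratings on a 0-4 scale like those above are typically combined into a summary index reported on a 0-100 scale (compare the scores in Annex B). The sketch below is illustrative only: a simple average rescaled to 0-100, not the published PRIME formula, and the ratings shown are hypothetical.

```python
def capacity_index(ratings, max_score=4):
    """Average a set of 0-4 indicator ratings and rescale to 0-100.

    Illustrative aggregation only; not the published PRIME formula.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    return 100.0 * sum(ratings) / (max_score * len(ratings))

# Hypothetical ratings for indicators 12-18 at two points in time
baseline = [1, 0, 1, 0, 1, 1, 0]
followup = [3, 2, 3, 2, 2, 3, 1]

print(round(capacity_index(baseline), 1))  # -> 14.3
print(round(capacity_index(followup), 1))  # -> 57.1
```

Whatever aggregation rule is chosen, it should be agreed with stakeholders before the baseline so that repeat measurements remain comparable.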
Annex B Example of Results of PRIME Training Capacity Index (Catotti, 1999)

[Figure: bar chart comparing Training Capacity Index scores (0-100 scale) for 1997 and 1999]
1. Management Sciences for Health (MSH)
http://erc.msh.org/
http://www.msh.org/
The Health Manager's Toolkit is an electronic compendium of tools designed to assist health
professionals at all levels of an organization to provide accessible, high-quality, and sustainable
health services. It is particularly useful for managers who lead others to produce results.
The Health Manager’s Toolkit includes spreadsheet templates, forms for gathering and analyzing
data, checklists, guidelines for improving organizational performance, and self-assessment tools
that allow managers to evaluate the systems underlying their entire organization. The tools have
been developed by organizations working throughout the world to improve delivery of health
services.
2. INTRAH/PRIME II
http://www.prime2.org/
The PRIME II Project is a partnership combining leading global health care organizations dedi-
cated to improving the quality and accessibility of family planning and reproductive health care
services throughout the world. Funded by USAID and implemented by the University of North
Carolina at Chapel Hill School of Medicine, PRIME II focuses on strengthening the performance
of primary care providers as they work to improve services in their communities. To accomplish
its goals, PRIME II applies innovative training and learning and performance improvement ap-
proaches in collaboration with host-country colleagues to support national reproductive health
goals and priorities.
This interactive Website, created by the PRIME II Project and INTRAH, presents a revised edi-
tion of Performance Improvement Stages, Steps and Tools, first issued in print form in 2000.
INTRAH/PRIME II published this site online in August 2002 (www.intrah.org/sst/).
3. JHPIEGO
http://www.jhpiego.org
Through advocacy, education and performance improvement, JHPIEGO helps host-country poli-
cymakers, educators and trainers increase access and reduce barriers to quality health services,
especially family planning and maternal and neonatal care, for all members of their society.
JHPIEGO’s work is carried out in an environment that recognizes individual contributions and
encourages innovative and practical solutions to meet identified needs in low-resource settings
throughout Africa, Asia, Latin America, and the Caribbean.
TIMS is a computer-based tool for tracking and monitoring training efforts. Each person's skills, qualifications, and location are stored, along with courses taken and taught, in a Microsoft Access 2000 database application that also records training course content, timing, participants, and trainers. In its standard form, TIMS tracks the following training results over a period of time:
- Which providers from which service sites have been trained, and in what topic(s)
- Which trainers have been conducting courses, and how many people they have trained
- How many courses have been held, summarized by training center, district, or province
TIMS allows senior and mid-level program managers to monitor the variety of training activities
and track results from a number of perspectives. TIMS is designed to be part of a country's training
information system, replacing paper-based reporting and aggregation with a computer database.
Ministries of Health, Planning and/or Finance can use TIMS to supplement service information
for policy decisions on training, retraining, and provider deployment.
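As an illustration of the kind of queries such a training information system supports, the sketch below uses an in-memory SQLite database in place of the Access application. The table, column names, and records are hypothetical, not TIMS's actual schema.

```python
import sqlite3

# Illustrative training-tracking queries; schema and data are hypothetical.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE trainings (
    provider TEXT, service_site TEXT, district TEXT,
    topic TEXT, trainer TEXT, course_date TEXT
);
INSERT INTO trainings VALUES
 ('A. Mensah',  'Site 1', 'North', 'IUD insertion', 'T. Osei',  '2002-03'),
 ('B. Addo',    'Site 2', 'North', 'Counseling',    'T. Osei',  '2002-04'),
 ('C. Boateng', 'Site 3', 'South', 'Counseling',    'K. Owusu', '2002-05');
""")

# Which providers from which service sites have been trained, and in what?
for row in db.execute("SELECT provider, service_site, topic FROM trainings"):
    print(row)

# How many people has each trainer trained?
for row in db.execute(
        "SELECT trainer, COUNT(*) FROM trainings GROUP BY trainer"):
    print(row)
```

The same grouping idea answers the third question in the list above (courses summarized by training center, district, or province) by grouping on the relevant column.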
4. Child Survival Technical Support Program (CSTS)
http://www.childsurvival.com/
The Child Survival Technical Support Project (CSTS) assists PVOs funded through the Office of
Private and Voluntary Cooperation's Child Survival Grants Program. The technical support
CSTS provides to PVOs is targeted specifically towards increasing their capacity to achieve
sustainable service delivery in public health interventions.
The program’s goal is to help these organizations grow and to develop successful programs that
will continue to serve mothers, children, and communities even when the PVO is no longer pres-
ent in the area.
5. International Development Research Centre (IDRC)
http://www.idrc.ca/
The International Development Research Centre (IDRC) is a public corporation created in 1970
to help developing countries find long-term solutions to the social, economic, and environmental
problems they face. IDRC’s Evaluation Unit has been working in the area of organizational as-
sessment for over 5 years and has developed a number of tools, including: Enhancing Organiza-
tional Performance, a guidebook that presents an innovative and thoroughly tested model for
organizational self-assessment. The tools and tips presented in the guidebook go beyond meas-
uring the impact of programs, products, and services to integrate techniques of formative as-
sessment, in which the assessment team becomes involved in helping its organization become
more effective in meeting its goals. The tools and techniques are flexible, and the model can be
adapted to any type or size of organization. Worksheets and hands-on exercises are included.
6. International Institute for Sustainable Development (IISD)
http://iisd1.iisd.ca/measure/
IISD has been working on measurements and indicators since 1995, with the aim of making sig-
nificant local, national, and international contributions, and building the Institute into a world
center of expertise in this field. One of IISD’s strategic objectives is to develop robust sets of
indicators for public and private sector decision-makers to measure progress toward sustainable
development and to build an international consensus to promote their use.
7. World Health Organization (WHO)
http://www.who.int/whr2001/2001/archives/2000/en/index.htm
8. USAID Development Experience Clearinghouse (DEC)
http://www.dec.org/
The DEC includes evaluation publications such as the TIPS series, which provides guidance on using the Results Framework, measuring institutional capacity, and assessing the general quality of indicators and performance measures.
9. Pact
http://www.pactworld.org/services/oca/index_oca.htm
http://www.pactworld.org/
Pact’s unique methodology for organizational capacity assessment and strengthening (OCA)
helps organizations anticipate and overcome the greatest barriers to organizational change and
growth. Through a guided self-assessment and planning process, organizations reflect upon their
performance and select the tools and strategies they need to build capacity and broaden impact.
Pact's OCA is the product of ten years of research and field practice in partnership with the Edu-
cation Development Center and USAID’s Office of Private & Voluntary Cooperation. Hundreds
of local and international NGOs, private-sector corporations, and municipal governments around
the world have used this methodology.
OCA is a four-stage process that includes:
· Participatory tool design that empowers organizations to define the critical factors that influence their performance and to identify relevant indicators for evaluating their competency.
· Guided self-assessment that leads employees, board members, and constituents through structured discussions followed by individual scoring on a series of rigorous performance indicators.
· Data-guided action planning that provides organizations with an opportunity to interpret the self-assessment data and set change strategies most appropriate to their environment.
· Reassessment for continual learning that allows organizations to monitor change, track
the effectiveness of their capacity-building efforts, and integrate new learning as their
needs change and capabilities increase.
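The reassessment stage implies comparing scores across assessment rounds. A minimal sketch of that comparison follows; the capacity areas, scores, and the 0.5-point threshold are hypothetical, not part of Pact's instrument.

```python
# Compare guided self-assessment scores across two rounds to monitor
# change by capacity area. Areas, scores, and threshold are hypothetical.

baseline = {"governance": 2.1, "financial management": 1.5,
            "service delivery": 2.8}
reassessment = {"governance": 2.9, "financial management": 1.6,
                "service delivery": 3.4}

change = {area: round(reassessment[area] - baseline[area], 1)
          for area in baseline}
print(change)

# Areas with little movement may need a revised capacity-building strategy.
stalled = [area for area, delta in change.items() if delta < 0.5]
print(stalled)  # -> ['financial management']
```

In practice the scores would come from the structured group discussions and individual scoring described above, and the interpretation of "little movement" would be set by the organization itself during action planning.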
For more information on Pact’s Organizational Assessment, please contact Betsy Kummer by
email ([email protected]).
From the Roots Up: Strengthening Organizational Capacity through Guided Self-Assessment
Publisher: World Neighbors, 2000.
10. International HIV/AIDS Alliance
www.aidsalliance.org/ngosupport
The AIDS Alliance has developed an HIV/AIDS NGO/CBO Support Toolkit that is available on
their Website or by CD-Rom with over 500 downloadable resources and supporting information.
The toolkit includes practical information, tools and example documents to help those working to
establish or improve NGO/CBO support programs. The toolkit also describes key components of
NGO/CBO support programming, based on the Alliance's experience. It also includes resources
from a wide range of other organizations to bring different perspectives and experiences to-
gether.
The HIV/AIDS NGO/CBO Support toolkit has been developed for those wishing to establish or
improve NGO/CBO support programs. The toolkit will be useful both for NGO-led support pro-
grams and for government-led or multi-sectoral programs, especially in the context of Global
Fund and World Bank financing for NGOs and CBOs working on AIDS. The toolkit will also be
useful to organizations that provide only funding or only training.
Order single or bulk copies of the CD-ROM and supporting publication free of charge from:
[email protected]
11. International NGO Training and Research Centre (INTRAC)
http://www.intrac.org/
International NGO Training and Research Centre (INTRAC) provides support to organizations
involved in international development. Their goal is to improve the performance of NGOs by
exploring relevant policy issues and by strengthening NGO management and organizational ef-
fectiveness.
Practical Guidelines for the Monitoring and Evaluation of Capacity-Building: Experiences from
Africa
ISBN: 1 897748-64-7
OPS No. 36, November 2001.
Capacity building and monitoring and evaluation have become two of the most important priori-
ties of the development community during the last decade. Yet they have tended to operate in
relative isolation from each other. In particular, capacity-building programs have been consis-
tently weak in monitoring the impact of their work. This publication aims to help NGOs and do-
nors involved in capacity building to develop appropriate, cost-effective and practical systems
for monitoring and evaluation. While not under-estimating the complexity of these tasks, this
publication puts forward some practical guidelines for designing monitoring and evaluation sys-
tems based on experiences with three organizations in different parts of Africa.
12. Performance Improvement in Healthcare
http://www.picg.net/
This Website is designed to provide information, tools, and guidelines for planning, implement-
ing, monitoring and evaluating performance improvement processes and activities in health
services delivery organizations. The site is especially tailored for managers, leaders, providers
and other employees working in international health organizations and institutions, whether they
are health ministries or health departments in the public sector or NGOs in the private non-profit
sectors. The site is also for those working as partners with people in these institutions.
Performance Improvement (PI) is a process for enhancing employee and organizational perform-
ance that employs an explicit set of methods and strategies. Results are achieved through a sys-
tematic process that considers the institutional context; describes desired performance; identifies
gaps between desired and actual performance; identifies root causes; selects, designs and imple-
ments interventions to fix the root causes; and measures changes in performance. PI is a continu-
ously evolving process that uses the results of monitoring and feedback to determine whether
progress has been made and to plan and implement additional appropriate changes.
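The gap-identification step in the process above can be sketched as a simple comparison of desired and actual performance. The indicators, values, and facility are hypothetical; a real PI exercise would go on to examine root causes behind each gap.

```python
# Minimal sketch of the PI gap-identification step: compare desired
# and actual performance and list indicators needing intervention.
# Indicator names and values are hypothetical.

desired = {"clients counseled per day": 12, "records complete (%)": 95}
actual  = {"clients counseled per day": 7,  "records complete (%)": 96}

gaps = {name: desired[name] - actual[name]
        for name in desired if actual[name] < desired[name]}
print(gaps)  # -> {'clients counseled per day': 5}
```

After interventions are implemented, re-running the same comparison against fresh measurements is one way to operationalize the monitoring-and-feedback loop that PI describes.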
The goal of PI is to solve performance problems or realize performance opportunities at the or-
ganizational, process or systems and employee levels in order to achieve desired organizational
results. The overall desired result in our field is the provision of high quality, sustainable health
services.
The Website includes information on the performance improvement process and factors affecting worker performance, PI tools, and experiences using PI in different health care settings.
13. Capacity.org
http://www.capacity.org/index_en.html
Capacity.org is a Website dedicated to advancing the policy and practice of capacity building in
international development cooperation. Issue 14 of the web-based magazine Capacity.org pres-
ents highlights of the UNDP initiative on capacity building and related information on the policy
and practice of capacity building in international development cooperation (also see UNDP web-
site at http://www.undp.org/dpa/publications/capacity.html).
14. ISNAR Evaluating Capacity Development (ECD) Project
http://www.isnar.cgiar.org/ecd/index.htm
This site promotes the use of evaluation as a tool to advance the development of organizational capacity and performance. Its main purpose is to support a group of managers and evaluators. The site features the ECD Project's activities since 2000 and its results to date, and provides access to project reports and events. Lists of useful concepts and terms, bibliographic references, and Internet resources are also provided for use by capacity developers and evaluators.
15. Organizational Self-Reflection (OSR) Project
http://www.reflect-learn.org/
The project links a direct service, based on the Internet, and a research agenda designed to create
knowledge about self-reflection and its contribution to organizational learning. The OSR project
seeks to engage diverse organizations in the use of self-reflection resources and also catalyzes
the development of a learning community that focuses on OSR, organizational learning, and the
use of the Internet for institutional strengthening. Several useful frameworks and tools for organizational assessment are presented.
16. United Nations Development Programme (UNDP)
http://www.undp.org/dpa/publications/capacity.html
Developing Capacity through Technical Cooperation: Country Experiences provides some con-
crete inputs to rethinking technical cooperation for today’s challenges based on six country
studies – Bangladesh, Bolivia, Egypt, Kyrgyz Republic, Philippines and Uganda.
Capacity for Development: New Solutions to Old Problems, with prominent academics and de-
velopment practitioners as contributors, proposes new approaches to developing lasting indige-
nous capacities, with a focus on ownership, civic engagement and knowledge. It is a contribution
to a process of debate and dialogue around the broader issue of improving effective capacity de-
velopment.
Development Policy Journal is a new forum for presenting ideas on applied policies. The subject
of capacity for sustainable development is addressed in this first issue.
17. EngenderHealth
http://www.engenderhealth.org
1. American Evaluation Association
http://www.eval.org
2. Canadian Evaluation Association
http://www.evaluationcanada.ca/
The Canadian Evaluation Association is dedicated to the advancement of evaluation for its
members and the public. This site is also available in French.
3. The Evaluation Center, Western Michigan University
http://www.wmich.edu/evalctr/
The Evaluation Center, located at Western Michigan University, is a research and development
unit that provides national and international leadership for advancing the theory and practice of
evaluation, as applied to education and human services.
4. Essentials of Survey Research and Analysis
http://freenet.tlh.fl.us/~polland/qbook.html
This site contains a complete manual entitled Essentials of Survey Research and Analysis: A Workbook for Community Researchers, written by Ronald Jay Polland, Ph.D., 1998.
5. German Center for Evaluation, University of Cologne
http://www.uni-koeln.de/ew-fak/Wiso/
This is the homepage for the German Center for Evaluation at the University of Cologne. It includes the German translation of the Program Evaluation Standards of the American Evaluation Association.
6. Government Performance Information Consultants
http://members.rogers.com/gpic/evalwebindex.htm
7. Michigan Association for Evaluation (MAE)
http://www.maeeval.org/
The Evaluation Promotion Committee has compiled a list of resources in an effort to provide
MAE members and others interested in evaluation with sources for educational materials, tools,
and other resources that may be interesting and helpful. For each resource, the site provides a
brief description (generally from the resource itself) and where to find it.
8. Innovation Network
http://www.innonet.org/
http://home.wmis.net/~russon/icce/
10. MandE News
http://www.mande.co.uk/
MandE News is a news service focusing on developments in monitoring and evaluation methods
relevant to development projects and programs with social development objectives. It is edited
by Rick Davies in Cambridge, UK who can be contacted by email ([email protected]).
11. Sociometrics
http://www.socio.com/eval.htm
Sociometrics offers a wide variety of evaluation products and services to professionals across the
world. Their evaluation workshops and training services, technical publications, evaluation tools,
and data sets are all designed to assist practitioners, administrators, evaluators, and funders of
social interventions to design and implement successful evaluation systems.
12. Bill Trochim, Cornell University
http://trochim.human.cornell.edu/kb/conmap.htm
Bill Trochim is a faculty member at Cornell University; his work in applied social research and
evaluation is described on this site. His published and unpublished papers, detailed examples of
current research projects, useful tools for researchers, an extensive online textbook, a bulletin
board for discussions and links to other websites related to applied social research methods are
included. Concept mapping is a general method that can be used to help individuals or groups to
describe their ideas about some topic in a pictorial form.
13. UNICEF
http://www.unicef.org/reseval/
This site lists some of the monitoring and evaluation tools recently developed by UNICEF and
its partners, including the UNICEF Guide to Monitoring and Evaluation.
14. United Way of America
http://www.unitedway.org/outcomes/
The United Way’s Resource Network on Outcome Measurement offers a guide to resources for
measuring program outcomes for health, human service and youth- and family-serving agencies.
Their manual, Measuring Program Outcomes: A Practical Approach, can be ordered at the
Website.
15. National Science Foundation, Division of Research, Evaluation and Communication
(REC)
http://www.nsf.gov/pubsys/ods/getpub.cfm?nsf97153
This site contains a complete manual, User-Friendly Handbook for Mixed Method Evaluations (August 1997), edited by Joy Frechtling and Laurie Sharp of Westat, and developed with support from the National Science Foundation, Division of Research, Evaluation and Communication.
What is it?
- Capacity M&E approach: Tool for M&E planning (primarily).
- Performance Improvement (PI) approach: Tool for improving RH services.

What is the purpose?
- Capacity M&E approach: Helps planners and evaluators decide: What M&E approach to take to determine whether this strategy succeeded in building capacity (primary use)? What capacity-building strategy to use (secondary use)?
- PI approach: Helps managers decide: What PI strategy to use? Did performance change as a result of the PI process?

Answers the question…
- Capacity M&E approach: What factors of capacity are required for performance? How should I measure these factors?
- PI approach: Is progress being made toward goals? Are appropriate actions being undertaken to promote achieving those goals? What are the problem areas?

What is the approach?
- Capacity M&E approach: Conceptual: evaluators are encouraged to consider a wide range of factors that might influence capacity and performance. Guides planners and evaluators in viewing capacity systematically and identifying all areas that affect performance. Encourages understanding of capacity in the health sector as a system that includes four interdependent levels: the system, organizations, health personnel, and individuals and communities.
- PI approach: Focused: root causes of performance problems are linked to six performance factors (job expectations; performance feedback; workspace, equipment, and supplies; incentives; organizational support; and knowledge and skills). Guides organizations in viewing problems systematically and addressing all areas that enhance performance. Encourages understanding of the organization as a system of interdependent functions and people.

When to use it?
- Capacity M&E approach: Can be used to organize and analyze information before or after a capacity-building intervention is designed.
- PI approach: Used to organize and analyze information before deciding what intervention is needed.

Focus of study/action
- Capacity M&E approach: Applies to systems, organizations, humans, and communities.
- PI approach: Applies to humans within organizational systems.

View of performance
- Capacity M&E approach: Performance is the result of capacity and context.
- PI approach: Human performance is a factor of knowledge, skills, capacity and motives, and context.
Glossary

Capacity is the ability to carry out stated objectives. It has also been described as the "stock of resources" available to an organization or system, as well as the actions that transform those resources into performance.
Capacity building (or capacity development) is a process that improves the ability of
a person, group, organization, or system to meet objectives or to perform better.
Capacity evaluation is normally more complex than monitoring, and is conducted to gain un-
derstanding of the relationship between capacity-building interventions and capacity outcomes,
or the links between capacity and performance variables.
Capacity mapping is a structured process of thinking through the role capacity plays in ensuring
performance by developing a conceptual framework that is specific to a particular capacity-
building intervention. During capacity mapping, all the possible factors of capacity that influence
performance and the relationships between them must be identified. Once the factors are all laid
out, the program staff or evaluator can focus on those that are most essential for the evaluation.
Capacity monitoring normally would be used to understand the effectiveness and efficiency of
a capacity-building intervention during implementation (i.e., is capacity improving and at what
cost?) to contribute to strategic or operational decisions related to capacity building or enable a
periodic look at a program or system.
Cold chain: The system that ensures vaccine viability from manufacturing to delivery.
Contextual factors: External factors relating to the economic, social, cultural and political environment, normally outside the control of most health sector actors.
Impact: Long-term results achieved through improved performance of the health system: sus-
tainable health system and improved health status. Impact measures are not addressed in capac-
ity-building M&E.
Input: Set of resources, including service personnel, financial resources, space, policy orienta-
tion, and program service recipients, that are the raw materials that contribute to capacity at each
level (system, organization, health personnel, and individual/community).
Outcome: Set of results that represent capacity (an ability to carry out stated objectives), often expected to change as a direct result of a capacity-building intervention.
Output: Set of products anticipated through the execution of practices, activities, or functions.
Performance: Set of results that represent productivity and competence related to an established
objective, goal or standard. The four capacity levels together contribute to overall system-level
performance.
Process: Set of activities, practices, or functions by which the resources are used in pursuit of the
expected results.
Theory of action: Part of a capacity-building plan that includes common objectives and shared
concepts. A coherent theory of action agreed on by the key groups involved in the process states
how activities are expected to produce intermediate and longer-term results and benefits. “With-
out a theory of action, a capacity development effort could become a fragmented exercise in
wishful thinking, rather than a coherent initiative with a high probability of success” (Horton,
2001).
Triangulation: The use of multiple data sources or methods to validate findings, discover errors
or inconsistencies, and reduce bias.
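As a small illustration of triangulation in practice, the sketch below flags indicators on which two data sources disagree beyond a tolerance, so the discrepancy can be investigated before findings are reported. The sources, indicator names, values, and tolerance are all hypothetical.

```python
# Flag indicators on which two data sources disagree beyond a tolerance.
# Sources, indicators, values, and tolerance are hypothetical.

facility_survey = {"providers trained (%)": 62,
                   "guidelines available (%)": 40}
self_assessment = {"providers trained (%)": 65,
                   "guidelines available (%)": 75}

TOLERANCE = 10  # percentage points

flags = [name for name in facility_survey
         if abs(facility_survey[name] - self_assessment[name]) > TOLERANCE]
print(flags)  # -> ['guidelines available (%)']
```

Flagged indicators are candidates for follow-up (a data interpretation workshop, a record check, or a third source) rather than automatic rejection of either source.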
Bibliography
Africa Bureau, Office of Sustainable Development USAID. 1999. Health and Family Planning
Indicators: Measuring Sustainability, Volume II. Washington: USAID.
Ampomah, K. 2000. PRIME's Technical Report 20: An Assessment of the Impact of PRIME's Interventions on the Training Capacity and Reproductive Health Service Delivery in Ghana. Chapel Hill, NC: INTRAH.
Brown, L., LaFond, A., Macintyre, K. 2001. Measuring Capacity Building. Chapel Hill:
MEASURE Evaluation Project.
Catotti, D. 1999. PRIME’s Technical Report 13: Improving the Quality and Availability of Fam-
ily Planning and Reproductive Health Services at the Primary Care Level: Institutional Capacity
Building in the El Salvador Ministry of Health. Chapel Hill: INTRAH.
Earl, S., Carden, F., and Smutylo, T. 2001. Outcome Mapping: Building Learning and Reflection
into Development Programs. Ottawa: International Development Research Centre.
Eng, E. and Parker, E. 1994. Measuring Community Competence in the Mississippi Delta: The
Interface between Program Evaluation and Empowerment. Health Education Quarterly 21 (2):
199-220.
Figueroa, M.E., Kincaid D.L., Pani, M. and Lewis, G. 2002. Communication for Social Change:
An Integrated Model for Measuring the Process and Its Outcomes. Communication for Social
Change Working Paper Series, No. 1. Baltimore: Johns Hopkins Center for Communications
Programs.
Fort, Alfredo. 1999. PRIME’s Technical Report 16: Capacity Building in Training: A Frame-
work and Tool for Measuring Progress. Chapel Hill: INTRAH.
Franco, L.M., Bennett, S. and Kanfer, R. 2002. Health Sector Reform and Public Sector Health
Worker Motivation: A Conceptual Framework. Social Science and Medicine 54: 1255-1266.
Goodman, R.M., Speers, M.A., McLeroy, K., Fawcett, S., Kegler, M., Parker, E., et al. 1998.
Identifying and Defining the Dimensions of Community Capacity to Provide a Basis for Meas-
urement. Health Educ Behav 25 (3): 258-278.
Gubbels, P., Koss, C. 2000. From the Roots Up: Strengthening Organizational Capacity through
Guided Self-Assessment. Oklahoma City: World Neighbors.
Horton, D. 2002. Capacity Development in Planning, Monitoring, and Evaluation: Results of an
Evaluation. Briefing Paper No. 51. ISNAR.
Horton, D. 2002. Planning, Implementing and Evaluating Capacity Development. Briefing Paper
No. 50. ISNAR.
Horton, D. (ed). 2001. Learning about Capacity Development through Evaluation: Perspectives
and Observations from a Collaborative Network of National and International Organizations and
Donor Agencies. The Hague: International Service for National Agricultural Research.
Horton, D., Mackay, R., Andersen, A., and Dupleich, L. 2000. Evaluating Capacity Development
in Planning, Monitoring, and Evaluation: A Case from Agricultural Research. Research Report
17. The Hague: International Service for National Agricultural Research.
INTRAC. 1998. The Newsletter of the International NGO Training and Research Centre. No. 11.
Israel, B., Schulz, A., Parker, E., and Becker, A. 1998. Review of Community-Based Research:
Assessing Partnership Approaches to Improve Public Health. Annual Review of Public Health
19:173-202.
Israel, B.A., Checkoway, B., Schulz, A., Zimmerman, M. 1994. Health Education and Commu-
nity Empowerment: Conceptualizing and Measuring Perceptions of Individual, Organizational,
and Community Control. Health Education Quarterly 21(2): 149-170.
James, R. 2001. Practical Guidelines for the Monitoring and Evaluation of Capacity Building:
Experiences from Africa. London: INTRAC.
Kaul Shah, M., Degnan Kambou, S., Monahan, B. 1999. Embracing Participation in Develop-
ment: Wisdom from the Field. Atlanta: CARE Health and Population Unit.
Knight, R.J., Tsui, A.O. 1997. Family Planning Sustainability at the Outcome and Program Lev-
els: Constructing Indicators for USAID Strategic Planning. Chapel Hill: The EVALUATION
Project.
Kotellos, K.A., Amon, J.J., Githens Benazerga, W.M. 1998. Field Experiences: Measuring Ca-
pacity Building Efforts in HIV/AIDS Prevention Programs. AIDSCAP, Family Health Interna-
tional. AIDS 12 (suppl. 2): S109-S117.
LaFond, A., Brown, L. and Macintyre, K. 2002. Mapping Capacity in the Health Sector: A Con-
ceptual Framework. International Journal of Health Planning and Management. 17:3-22.
Lake, S., Daura, M., and Mabanddhala, M., et al. 2000. Analyzing the Process of Health
Financing Reform in South Africa and Zambia. Zambia Country Reports. Major Applied Re-
search Technical Paper 1. Bethesda: Partnerships for Health Reform Project.
Lande, R.E. 2002. Performance Improvement. Population Reports, Series J, No. 52, Baltimore:
The Johns Hopkins Bloomberg School of Public Health, Population Information Program.
Luoma, M. 2000. PRIME’s Technical Report 19: Dominican Republic Performance Improve-
ment Project Evaluation. Chapel Hill: INTRAH.
Lusthaus, C., Adrien, M., Andersen, G., and Carden, F. 1999. Enhancing Organizational Per-
formance: A Toolbox for Self-Assessment. Ottawa: International Development Research Centre.
Mackay, R. and Horton, D. 2002. Capacity Development in Planning, Monitoring, and Evalua-
tion: Results of an Evaluation. Briefing Paper No. 51. ISNAR.
Management Sciences for Health. 1996. Planning for Sustainability: Assessing the Management
Capabilities of Your Organization. The Family Planning Manager. FPMD.
McCaffrey, J., Luoma, M., Newman, C., Rudy, S., Fort, A., Rosensweig, F. 1999. PI Stages,
Steps and Tools, Chapel Hill: INTRAH.
MEASURE Evaluation. 1998. The Needs Assessment Validation Study and 1998 Institutional
Capacity Assessment, PASCA Project. Chapel Hill: MEASURE Evaluation Project.
MEASURE Evaluation. 2001. Mapping Capacity in the Health Sector: Application of the
MEASURE Conceptual Framework in Measuring Capacity Building for a Complex Non-
governmental Organization. Draft.
Moore, M., Brown, L., and Honan, J. 2001. Toward a Public Value Framework for Accountabil-
ity and Performance Management for International Non-Governmental Organizations. Presented
at the Hauser Center/Keio University Workshop on Accountability for International Non-
governmental Organizations, November 2-11, 2001.
Morgan, P. 1997. The Design and Use of Capacity Development Indicators. CIDA.
Murray, C.J.L., Frenk J. 1999. A WHO Framework for Health System Performance Assessment.
Geneva: World Health Organization.
Oakley, P. 2001. Evaluating Empowerment: Reviewing the Concept and Practice. INTRAC
NGO Management and Policy Series No.13. London: INTRAC.
Partnerships for Health Reform. 1997. Measuring Results of Health Sector Reform for System
Performance: A Handbook of Indicators. Bethesda: Partnerships for Health Reform.
Plummer, J. 1999. Municipalities & Community Participation: A Sourcebook for Capacity
Building. London and Sterling, VA: Earthscan Publications Ltd.
PRIME II. 2001. PRIME II Performance Monitoring Plan (PMP): General Guidelines for Moni-
toring and Reporting on the 10 Key PMP Indicators. Version 1.01. Chapel Hill: INTRAH.
Pyle, D. and LaFond, A. 2001. Case Example: Measuring Capacity Building in Training -
PRIME’s Evaluation, Documentation and Dissemination (EDD). Draft.
Ross, J.A. and Mauldin, W.P. 1996. Family Planning Programs: Efforts and Results, 1972-94.
Studies in Family Planning 27 (3):137-147.
Sarriot, E. 2002a. The Child Survival Sustainability Assessment (CSSA), For Shared
Sustainability Evaluation Methodology in Child Survival Interventions. Maryland: CORE –
CSTS.
Sarriot, Eric. 2002b. Sustaining Child Survival: Many Roads To Choose, but Do We
Have a Map? Maryland: ORC Macro.
Sullivan, T. and Bertrand, J. (eds). 2000. Monitoring Quality of Care in Family Planning by the
Quick Investigation of Quality (QIQ): Country Reports. Technical Report Series, No. 5. Chapel
Hill: MEASURE Evaluation.
UNICEF. 1999. M&E of Capacity Building: Guidance and Tools. Working Draft.
World Health Organization. 2000. World Health Report 2000: Health Systems: Improving Per-
formance. Geneva: World Health Organization.