Evaluations use specific evaluation questions to address each of the standard evaluation criteria. The five evaluation criteria, referenced in the CRS Global standard on evaluation, are relevance, effectiveness, efficiency, impact and sustainability. The Organisation for Economic Co-operation and Development (OECD) created these criteria in 1991 as part of its general evaluation principles.
Together, these five criteria are widely viewed as the cornerstones of high-quality evaluations of development programming, particularly for midterm and final evaluations. Additional information on these criteria is available in ProPack II.
Each of the criteria covers multiple concepts and ideas that the evaluation needs to
address. The evaluation team should develop project-specific evaluation questions
under each of the criteria to ensure that all of the important concepts are covered.
These evaluation questions are then used to design the evaluation methodology, draft the data collection tools and structure the analysis of the findings. Examples of these evaluation questions are included in Table 1. Note that these examples are generic and should be made more specific to fit the project's context.
Table 1. Evaluation criteria and examples of evaluation questions.

Relevance
- Did the initial needs assessment identify priority community needs? Did the assessment differentiate between needs for men and women and for more vulnerable and less vulnerable households? If so, how? If not, why not?
- Is the project design appropriate for meeting the community's priority needs? Consider the project's objectives, activities and timing. Why or why not?
- Did the targeting strategy allow the project to meet the greatest need in the community (i.e., the most vulnerable households or individuals)? Why or why not?
- Was community participation sufficient throughout the needs assessment, design, implementation, and monitoring and evaluation of the project? Why or why not? If not, how can participation be increased during the remainder of the project (for midterm evaluations) or in a future project (for final evaluations)?
- Has the project met the specific needs and priorities of women? Why or why not?

Effectiveness
- Did the project achieve its planned outputs (as per the detailed implementation plan) on the planned timeline? Why or why not?
- Did the M&E system provide the right information at the right time to allow for timely project management and decision-making? Why or why not?
- Has working in partnership increased the effectiveness and quality of the project? Why or why not?
- Has the project been effective in building partner capacity? If so, how has partner capacity been built? If not, why not, and how can this be improved for next time?

Efficiency
- Are the project's staffing and management structures efficient? Why or why not?
- Did the project staff have the right capacity to implement a high-quality project? Why or why not?
- What was the cost per project participant? Is this reasonable given project impact? Why or why not?

Impact
- Has the project achieved its planned impact (refer to ProFrame indicators to determine planned impact)? Why or why not?
- Did impact vary for different targeted areas, households or individuals (e.g., men and women)? If so, how and why?
- Was there any unintended impact from the project, either positive or negative? If so, what?
- What impact was most valuable to participating communities?

Sustainability
- What is the likelihood that the community will be able to sustain the impact of the project? How do you know?
- What has the project done to support community structures or groups to be able to continue to address community needs and sustain project impact? Is this sufficient?
How do you use the ProFrame in the evaluation?

The "impact" evaluation criterion asks that the project team measure progress against all of the SO-level indicators and IR-level indicators included in the ProFrame. In addition, under the "impact" criterion, the project team should determine if there has been any unanticipated impact from the project, either positive or negative.
Evaluation questions are important for midterm evaluations, final evaluations and real-time evaluations of emergency responses. In addition, develop questions for project reviews, although they would be called "review questions" in this context. For a midterm evaluation or review, the questions should include a focus on how to improve the particular activity or process for the remainder of the project.
For final evaluations, the questions should encourage project teams to think about
how to improve an activity or element for similar projects in the future.
Real-time evaluations of emergency responses
A real-time evaluation is a light evaluation conducted early—approximately six to
eight weeks after a response begins. The purpose of this evaluation is to reflect on
the progress and quality of the response and to produce a set of actionable
recommendations to improve the ongoing response. Due to its nature and timing,
slightly different criteria are used in a real-time evaluation. The standard criteria are
relevance, effectiveness, coordination, coverage and sustainability/connectedness.
Additionally, real-time evaluations may look at the early impact of the response. For
more information, refer to the CRS guidance on conducting real-time evaluations.
The final evaluation of an emergency response would use the standard evaluation criteria.

Without tailored evaluation questions that reflect the context and focus of the program, the evaluation is likely to produce generic results, void of relevant lessons learned and useful recommendations for future programs. Tips for developing high-quality evaluation questions include:
- Engage the project field team and partner staff in developing evaluation questions that reflect the project context.
- Review the monitoring data collected to see if the findings raise any additional questions to be answered by the evaluation.
- Refer to the ProFrame or M&E plan to make sure that all of the SO-level and IR-level indicators will be covered by the evaluation. In addition, ensure that the evaluation addresses any crosscutting themes included in the M&E plan.
- Refer to donor guidance to ensure that the evaluation meets donor-required indicators and general information needs.
- Draw upon the project's analysis plan, if available, to develop the questions. The analysis plan should include draft evaluation questions.
- Review other evaluation reports for similar projects for ideas about how to phrase questions. However, it is not advisable to simply copy questions from other evaluations, as they will rarely be a good fit "as is" for your project.
Remember that evaluation questions are generally too complex to use directly in data collection tools. Instead, use your evaluation questions to outline your tools and determine which specific question or set of questions will be appropriate to generate the data you will need for analysis.
Annex A provides evaluation planning tables and presents eight steps for good evaluation planning. Step 1 is to create specific evaluation questions for your program under each of the standard evaluation criteria. These tables provide guidance on how to use questions to structure the evaluation methodology and data collection tools, and should be the basis for evaluation planning.
During midterm and final evaluations, it is often appropriate to consult community members who did not participate in the project, to solicit their input on the appropriateness of targeting and on the overall impact, positive and negative, of the project. Consider which evaluation questions should take input from these community members into account, and include them as respondents where needed in the evaluation planning tables (Annex A).