Eight steps for evaluation planning
Step 1: Create specific evaluation questions for your program under each of the five standard evaluation criteria: relevance,
effectiveness, efficiency, impact and sustainability.
Enter these questions in the first column of Table 1 below.
Make the questions specific, including the "who," "what," "where" and "when," as applicable.
Under impact, include questions about whether the project achieved the impact stated in the indicators in your M&E plan.
If your project has an analysis plan, include any evaluation questions from the plan. Note: questions in the analysis plan are
generally considered to be drafts and can be revised as needed.
Key concepts and issues associated with each evaluation criterion are presented after each of the criteria in Table 1. More
information on the evaluation criteria is available in ProPack II.
There is no set required number of evaluation questions. Generally, programs have three to five questions under each of the
criteria. Add or delete rows based on the number of questions needed.
Step 2: Identify the appropriate tools and respondents for each evaluation question. Include these in Table 1.
Determine which tools will give the most reliable data or information for the question. Common evaluation tools include
household surveys, key informant interviews with community or government stakeholders, focus group discussions with
participating and nonparticipating community members, observations, staff interviews with CRS and partner staff and
review of project records or meeting notes.
For household surveys, focus group discussions, key informant interviews and staff interviews, specify who the respondent
group will be (e.g., project participants, nonparticipating community members, CRS staff or partner staff). This will help in
outlining the tools in Table 2.
There is no fixed number of tools or respondents required. Consider when it is appropriate to triangulate information from
different methods or different perspectives with the same method for a given evaluation question. Add and delete columns
for tools as needed.
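The question-to-tool mapping in Table 1 can be sketched as a simple data structure. The criteria, questions, tools and respondent groups below are illustrative assumptions, not drawn from an actual M&E plan:

```python
# Minimal sketch of Table 1: evaluation questions mapped to the tools
# and respondent groups that will answer them. All questions, tools and
# respondents here are hypothetical examples.
table1 = [
    {
        "criterion": "effectiveness",
        "question": "Did participating households adopt the promoted practices by end of project?",
        "tools": {
            "household survey": "project participants",
            "focus group discussion": "nonparticipating community members",
        },
    },
    {
        "criterion": "sustainability",
        "question": "Which project activities will continue after close-out, and who will lead them?",
        "tools": {
            "key informant interview": "government stakeholders",
            "staff interview": "CRS and partner staff",
        },
    },
]

# Print the matrix one question at a time, with its tools indented
for row in table1:
    print(f"[{row['criterion']}] {row['question']}")
    for tool, respondents in row["tools"].items():
        print(f"  - {tool} ({respondents})")
```

Listing each question with more than one tool, as in the first entry, makes the triangulation decision in Step 2 explicit.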
Step 3: Create an outline for the tools in Table 2. Enter each tool in Table 1 in the first column of Table 2. Copy the evaluation
questions that the tool will be used to answer in the next column.
List separately the tools to be used with different respondents (e.g., focus group discussions with community members who
participated in the project and with those who did not).
Refer to the M&E plan. Make sure the list of tools reflects all of the methods included in the M&E plan. Include any missing
tools and list the indicators that each tool will answer in the second column.
Step 4: Specify any comparison groups needed for each tool in Table 2.
Determine whether there are any relevant comparison groups needed for surveys, focus groups, key informant or
semi-structured interviews or observation tools. Refer to your M&E plan and analysis plan. You may need comparison
groups where the context is very different within the project area or where different groups have had different experiences
or perspectives during the project. Include triangulation as appropriate.
Step 5: Determine the sampling strategy and selection methodology for each tool. Enter this in Table 2.
Use random sampling for quantitative tools and purposive sampling for qualitative tools. Refer to Random Sampling and
Purposeful Sampling. Include all information relevant for the sample—clustering, stratification, level of error and number
needed for random sample, and perspectives and number needed for purposive sample. Note: The number needed will be the
number of respondents for random sampling. The number needed for purposive sampling will be the number of groups or interviews.
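For the random-sample number needed, the standard proportion-based sample-size formula, n = z²p(1−p)/e², adjusted by a design effect for clustering and inflated for non-response, is a common starting point. The sketch below uses illustrative default values (95% confidence, ±5% error, design effect of 2, 5% non-response); these are assumptions to adapt, not CRS requirements:

```python
import math

def random_sample_size(margin_of_error=0.05, confidence_z=1.96,
                       expected_proportion=0.5, design_effect=2.0,
                       nonresponse_rate=0.05):
    """Estimate the number of households needed for a random-sample survey.

    Base formula: n = z^2 * p * (1 - p) / e^2, then multiplied by a
    design effect for cluster sampling and inflated for expected
    non-response. Default values are illustrative assumptions.
    """
    n = (confidence_z ** 2) * expected_proportion * (1 - expected_proportion) / margin_of_error ** 2
    n *= design_effect                 # adjust for clustering
    n /= (1 - nonresponse_rate)        # allow for non-response
    return math.ceil(n)

print(random_sample_size())  # 809 households with the defaults above
```

Using p = 0.5 gives the most conservative (largest) sample when the true proportion is unknown; replace the defaults with the clustering, stratification and error levels specified in your own M&E plan.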
Step 6: Create draft tools from the outline of information needs included in Table 2.
Refer to Developing Quantitative Tools and Developing Qualitative Tools to develop tools to answer your evaluation
questions. Note that the evaluation questions themselves are generally too complex to include in the data collection tools.
Allow enough time for feedback on the tools from M&E and project team members. Revise the tools during training or field testing if needed.
Step 7: Determine staff needs for data collection.
Determine the number of staff needed for data collection. Make sure that female staff are adequately represented on the
team to collect data from female community members.
Step 8: Develop a timeline for the evaluation.
Make the timeline as specific as possible. Include finalizing the data collection tools, training the data collectors, field-testing
the tools, data collection, analysis, a staff reflection workshop and report writing.
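The activities in Step 8 can be laid out as a simple back-to-back schedule. The durations and start date below are illustrative assumptions; replace them with your own estimates:

```python
from datetime import date, timedelta

# Illustrative durations (in days) for each evaluation activity listed
# in Step 8; all values are hypothetical and should be adjusted.
activities = [
    ("Finalize data collection tools", 5),
    ("Train data collectors", 3),
    ("Field-test the tools", 2),
    ("Data collection", 10),
    ("Analysis", 7),
    ("Staff reflection workshop", 1),
    ("Report writing", 10),
]

start = date(2024, 3, 4)  # assumed evaluation start date
for name, days in activities:
    end = start + timedelta(days=days - 1)
    print(f"{start} to {end}: {name}")
    start = end + timedelta(days=1)  # next activity begins the following day
```

Scheduling activities back to back like this makes it easy to see how a delay in one activity (e.g., field-testing revealing tool problems) shifts everything downstream, which is why the timeline should build in buffer time.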