Step 5: Plan the Evaluation


Although Step 5 focuses on planning the evaluation, this work actually begins much earlier in the strategic planning process. Evaluation involves collecting, analyzing, and using data to determine the following:

  • If your activities are effective
  • If your activities are efficient
  • If your activities are being delivered as intended
  • If activity participants are satisfied with the activities
  • How your activities might be refined or improved

This is not an exhaustive list of all the benefits evaluation can provide. Evaluation can also show the value of your activities to funders, decision makers, and other stakeholders.

For a more detailed description of Step 5, visit SPRC’s online course, A Strategic Planning Approach to Suicide Prevention.

What’s Involved in Planning an Evaluation?

Evaluations can measure the outcomes of your activities. Because evaluation can involve tracking how your activities are implemented, it can also be an important part of implementing them successfully. Evaluation planning begins early. This ensures that you are asking questions that are important to your stakeholders, and that the activities you have selected are well aligned with the outcomes you expect. Planning also involves creating a feasible evaluation strategy, one that fits within your budget and is clearly linked to the questions you want the evaluation to answer about your prevention activities.

There are four main tasks when planning an evaluation:1

Engage Stakeholders

Stakeholders include funders, program participants, invested community members, and anyone else interested in your evaluation results. They can help you determine the specific questions you would like your evaluation to answer. As you design your evaluation, be sure to communicate with various stakeholders to let them know what will, and will not, be addressed by your program and evaluation.

Describe Your Activities

Consider how you will demonstrate the connection between your activities and the outcomes you hope to achieve. Use your logic model (developed in Step 4) to communicate these connections. Make sure your outcomes are clear and measurable.

Focus the Evaluation Design

Consider details like what type of data to collect, when to collect it, where the data will come from, who will participate, and whether a comparison group will be needed. The answers to these questions will inform your evaluation design. There are many different options for evaluation design. These include:

  • Experimental Designs – These designs randomly assign participants to either a group that participates in the activity or a comparison group (where participants do not participate in the activity at all, or participate in the activity after a delay). The purpose of an experimental design is to show a causal relationship between the activity and the outcomes.
  • Quasi-experimental Designs – Often, it is not possible to randomly assign participants to an experimental or control group. In quasi-experimental designs, participants either self-select into your activity, or they are assigned to groups in a non-random way. Quasi-experimental designs are often more feasible to carry out than experimental designs.
  • Non-experimental Designs – When it is not possible to have a comparison group, use a non-experimental design. This design can include case studies where the experience of a single person, group, or community is described. Non-experimental studies might also measure outcomes after an activity (or both before and after an activity) to assess for changes. In non-experimental designs, it is not possible to definitively attribute outcomes to the activity because there is no comparison group.

When deciding on an evaluation design, also consider factors like the cost of the evaluation, the capacity of your evaluation team to implement the design, and the level of scientific rigor expected by your stakeholders.

Identify Your Measures

The measures you select and how you will collect them will depend on your evaluation design and a host of other decisions. To help identify which measures to use, consider the following questions:

  • What type(s) of data are you looking for? Are you looking for qualitative data (rich contextual descriptions in words) or quantitative data (numbers)?
  • What medium do you need to use to collect your measures? For example, will you use paper and pencil? Online data collection? Video?
  • What types of data sources will you have available? Will you be able to ask participants questions directly through a survey or interview? Will you use observations? Will you use existing data sources?
  • Who will provide you with your data? Will this be your team? Those who administered the program? The participants? An outside observer?

Why Use an Evaluation Specialist?

Consider bringing an evaluator onto your team as early as possible, particularly if your existing team does not have the capacity to conduct the evaluation design that you’ve selected, or if you think you might need help with a specific component of your evaluation, like data collection or analysis. A trained evaluator can save you time and money by making sure that your evaluation is headed in the right direction with an appropriate design and high-quality measures. 

If you do not have an evaluator on staff and cannot afford to hire one, you may be able to collaborate with a faculty member or graduate student from a local college or university at no charge or for a reduced fee.

Resources:

  1. Centers for Disease Control and Prevention. (n.d.). A framework for program evaluation. Retrieved from https://www.cdc.gov/eval/framework/index.htm

Recommended Resources

RAND Suicide Prevention Program Evaluation Toolkit

This toolkit is designed to help program staff overcome common challenges to evaluating and planning improvements to their programs.