STAGE 6. Defining the evaluation plan

What steps should be completed?

Step 1: Select the type of evaluation

DESIGNING THE OUTCOME EVALUATION

The minimum requirements for evaluating intervention outcomes are that the intervention is implemented and that at least one outcome measurement is carried out after it.1 The design of the evaluation determines the degree of certainty with which the outcomes can be attributed to the intervention. The fundamental differences between the designs lie in their characteristics and conditions of application, and in their explanatory power.2 The following table shows the characteristics of the basic evaluation designs and their ability to explain the efficacy of an intervention. The three most traditional designs are presented, but they are not the only ones, and the characteristics shown are not exclusive to them:

 

NON-EXPERIMENTAL

Application features and requirements:
  • There is no control group.
  • It requires a measurement before and after the intervention.
  • The effect of the change is measured by comparing the outcomes obtained after the intervention with those observed in the initial situation.
  • It requires little time and few resources.
  • Its power to explain the change increases the shorter the time between the "before" and "after" measurements, as this reduces the influence of confounding variables.
  • It requires statistical analysis appropriate to the type of variable evaluated: continuous, categorical, etc.

Explanatory power: Low. It allows changes in the target population to be measured before and after the intervention, but offers no guarantee that these changes are caused by the intervention.

QUASI-EXPERIMENTAL

Application features and requirements:
  • There is a control group.
  • It requires "before" and "after" measurements in both the control group and the intervention (or experimental) group.
  • Individuals are assigned to the control group or the intervention group by convenience, not at random, so it is difficult to ensure that the two groups are equivalent and comparable.
  • The effect of the change is estimated as the difference between post-intervention outcomes in the intervention group and in the control group.
  • It requires more experience, dedication and financial and technical resources than the previous design.
  • It requires analysis with more complex statistical models that control for confounding variables.

Explanatory power: Medium. It enables the changes occurring in the target group to be attributed to the intervention, provided both groups (control and intervention) are equivalent (which this design does not guarantee).

EXPERIMENTAL

Application features and requirements:
  • There is a control group.
  • It requires "before" and "after" measurements in both the intervention and control groups.
  • Individuals are assigned to the control group or the intervention group at random, so the groups are assumed to be equivalent. The sample must also meet certain requirements, such as being large enough for confounding variables to be distributed randomly between the two groups as far as possible.
  • The effect of the intervention is obtained by calculating the difference between the "before"/"after" change in the intervention group and that in the control group (see the sketch after this table).
  • It requires more experience, dedication and financial and technical resources than the previous designs.
  • It requires analysis with more complex statistical models that control for confounding variables.
  • The random allocation of individuals to the intervention or control groups may raise ethical questions.

Explanatory power: High. It allows the changes that have occurred in the group receiving the intervention to be attributed to the intervention.
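
To illustrate how each design estimates the effect, the following sketch computes the three estimates on simulated data. This is a minimal, hypothetical example (the scores, group sizes and effect sizes are invented for illustration only) written in Python with the numpy and scipy libraries:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated attitude scores (0-100): the intervention lowers scores by
# about 5 points, and a secular trend (a confounding factor) lowers
# everyone's scores by about 2 points between the two measurements.
ig_before = rng.normal(60, 10, 100)                  # intervention group
ig_after = ig_before - 5 - 2 + rng.normal(0, 4, 100)
cg_before = rng.normal(60, 10, 100)                  # control group
cg_after = cg_before - 2 + rng.normal(0, 4, 100)

# 1) Non-experimental: before/after difference in the intervention group.
#    For a continuous variable, a paired t-test is one suitable analysis.
pre_post = (ig_after - ig_before).mean()
p_value = stats.ttest_rel(ig_after, ig_before).pvalue
print(f"Before/after difference: {pre_post:.1f} (p = {p_value:.3f})")

# 2) Quasi-experimental: difference between post-intervention outcomes
#    in the intervention group and the control group.
post_only = ig_after.mean() - cg_after.mean()
print(f"Post-intervention group difference: {post_only:.1f}")

# 3) Experimental: difference-in-differences, i.e. the before/after
#    change in the intervention group minus that in the control group.
did = (ig_after - ig_before).mean() - (cg_after - cg_before).mean()
print(f"Difference-in-differences: {did:.1f}")

In this simulation, the before/after difference (about -7 points) absorbs the secular trend, while the difference-in-differences (about -5 points) removes it: a numerical illustration of why the designs differ in explanatory power.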

The key to a good outcome evaluation lies in the design used, which should minimise, as far as possible, alternative explanations other than the causal attribution of the outcomes to the intervention. When a design does this, we say that it has good internal validity. The mechanisms that provide internal validity are:3 a) that, in addition to the population group receiving the intervention (IG), there is a control group (CG) that does not receive it but serves as a comparison; and b) that the assignment of individuals to the IG or the CG is random (illustrated in the sketch below). When an evaluation has internal validity, it provides assurances about the efficacy of the intervention. Many interventions are intended to achieve their objectives in the whole population with characteristics similar to those of the population studied, which requires working with representative samples. However, interventions in community contexts often face difficulties in obtaining quality outcome assessments, owing to their cost and/or the difficulty of finding an equivalent control group. These obstacles can be overcome using other types of designs.
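
As a minimal illustration of mechanism b), random allocation can be as simple as shuffling the list of participants and splitting it in two. The sketch below uses hypothetical participant identifiers; in practice, allocation procedures are usually more elaborate (stratification, blinding, etc.):

import random

participants = [f"P{i:03d}" for i in range(1, 201)]  # 200 hypothetical IDs

random.seed(7)               # fixed seed so the allocation can be reproduced
random.shuffle(participants)

midpoint = len(participants) // 2
intervention_group = participants[:midpoint]   # IG
control_group = participants[midpoint:]        # CG

# With a sufficiently large sample, shuffling tends to distribute
# confounding variables evenly between the two groups.
print(len(intervention_group), len(control_group))  # 100 100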

It should be noted that interventions are sometimes assessed using only measurements of outcome indicators taken after the intervention. Imagine, for example, that after implementing an intervention to change favourable attitudes towards drug use, a questionnaire is given to participants to try to determine whether the intervention has altered their attitudes. This assessment uses outcome indicators (self-reported and indirect) that provide valuable information about the participants' perception of the changes they have made as a result of the intervention, but the information is of low quality in terms of outcome evaluation, because it does not allow the efficacy of the intervention to be established. Collecting information on outcome indicators does not, therefore, in itself amount to evaluating the outcomes of the intervention. To do that, an outcome evaluation design must be used, within which different types of indicators can be employed to evaluate the outcomes.

 

References:

1 Nebot M, López MJ, Ariza C, et al. (2011). Evaluación de la efectividad en salud pública: fundamentos conceptuales y metodológicos [Evaluation of effectiveness in public health: basic concepts and methodologies]. Gaceta Sanitaria, 25(Supl. 1), 3-8.

2 López MJ, Marí-Dell’Olmo M, Pérez-Giménez A & Nebot M. (2011). Diseños evaluativos en salud pública: aspectos metodológicos [Evaluation designs in public health: methodological aspects]. Gaceta Sanitaria, 25(Supl. 1), 9-16.

3 Alvira F. (2000). Manual para la elaboración y evaluación de programas de prevención del abuso de drogas [Guide to designing and evaluating drug abuse prevention programmes]. Madrid: Agencia Antidroga de la Comunidad de Madrid.