STAGE 6. Defining the evaluation plan
What steps should be completed?
Step 1: Select the type of evaluation
The minimum requirements for evaluating the outcomes of an intervention are that the intervention is implemented and that at least one outcome measurement is carried out after it.1 The design of the evaluation determines the degree of certainty with which the outcomes can be attributed to the intervention. The fundamental differences between the designs lie in their characteristics and conditions of application, and in their explanatory power.2 The following table shows the characteristics of the basic evaluation designs and their ability to explain the efficacy of an intervention. The three most traditional designs are presented, but they are not the only ones, and the characteristics shown are not exclusive to them:
DESIGN TYPE | APPLICATION FEATURES AND REQUIREMENTS | EXPLANATORY POWER
NON-EXPERIMENTAL | | Low. Allows changes in the target population to be measured before and after the intervention, but does not offer guarantees that these are caused by the intervention.
QUASI-EXPERIMENTAL | | Medium. Enables the changes occurring in the target group to be attributed to the intervention, provided both groups (control and intervention) are equivalent (which this design does not guarantee).
EXPERIMENTAL | | High. Allows the changes that have occurred in the group receiving the intervention to be attributed to the intervention.
The key to a good outcome evaluation lies in the design used: it should minimise, as far as possible, explanations for the outcomes other than their causal attribution to the intervention. When a design does this, we say that it has good internal validity. The mechanisms that provide internal validity are3: a) that, in addition to the population group receiving the intervention (IG), there is a control group (CG) that does not receive it but serves as a comparison; and b) that individuals are assigned to the IG or the CG at random. When an evaluation has internal validity, it provides assurances about the efficacy of the intervention. Many interventions are intended to achieve their objectives in the whole population with characteristics similar to those of the population being studied, which requires working with representative samples. However, interventions in community contexts often have difficulty obtaining quality outcome assessments, owing to their cost and/or the difficulty of finding an equivalent control group. These obstacles can be overcome using other types of designs.
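The two mechanisms above (a control group plus random assignment) can be illustrated with a minimal sketch. All names, group sizes, and scores below are illustrative assumptions, not part of any real evaluation; the sketch only shows the logic of splitting participants at random into an IG and a CG and comparing each group's average pre/post change.

```python
import random

def assign_groups(participants, seed=42):
    """Randomly split participants into two equal-sized groups (IG, CG).

    Random assignment is what makes the groups comparable on average,
    which is the basis of an experimental design's internal validity.
    """
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean_change(pre_scores, post_scores):
    """Average pre/post change for one group (e.g. on an attitude scale)."""
    changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(changes) / len(changes)

# Hypothetical participant labels, purely for illustration.
participants = [f"P{i:02d}" for i in range(1, 21)]
ig, cg = assign_groups(participants)
print(len(ig), len(cg))  # two groups of 10
```

In a real evaluation the comparison of interest would be the difference between the IG's mean change and the CG's mean change; because assignment was random, that difference can be attributed to the intervention rather than to pre-existing differences between the groups.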
It should be noted that interventions are sometimes assessed using only post-intervention measurements of outcome indicators. Imagine, for example, that after implementing an intervention to change favourable attitudes towards drug use, a questionnaire is given to participants to try to determine whether the intervention has altered their attitudes. This evaluation uses outcome indicators (self-declared and indirect) that provide valuable information about the participants' perception of the changes they have made as a result of the intervention, but it is low-quality information for outcomes evaluation because it does not allow the efficacy of the intervention to be assessed. Collecting information about outcome indicators therefore does not, in itself, mean that the outcomes of the intervention are being evaluated. To do this, an outcomes evaluation design must be used, and different types of indicators can then be used to evaluate the outcomes.
References:
1 Nebot M, López MJ, Ariza C, et al. (2011). Evaluación de la efectividad en salud pública: fundamentos conceptuales y metodológicos [Evaluation of effectiveness in public health: basic concepts and methodologies]. Gaceta Sanitaria, 25(Supl. 1), 3-8.
2 López MJ, Marí-Dell'Olmo M, Pérez-Giménez A & Nebot M. (2011). Diseños evaluativos en salud pública: aspectos metodológicos [Evaluation designs in public health: methodological aspects]. Gaceta Sanitaria, 25(Supl. 1), 9-16.
3 Alvira F. (2000). Manual para la elaboración y evaluación de programas de prevención del abuso de drogas [Guide to designing and evaluating drug abuse prevention programmes]. Madrid: Agencia Antidroga de la Comunidad de Madrid.
© COPOLAD. Cooperation Programme between Latin America, the Caribbean and the European Union on Drugs Policies.