STAGE 6. Defining the evaluation plan

What steps should be completed?

Step 1: Select the type of evaluation

OUTCOME EVALUATION INDICATORS

Outcome indicators provide information on the changes that the intervention has produced in the target population. They must be consistent with the objectives, since the objectives determine what information has to be collected to reveal whether the intervention has worked. The answers to the following questions serve as a guide to selecting, defining and managing these indicators:

What? | Drug demand reduction interventions are diverse, as are the outcome indicators used to measure their effects. Many standardised outcome indicators exist, and it is usually advisable to use these rather than to generate new indicators for each intervention. This makes it possible to draw on the knowledge and procedures already available for obtaining, analysing and interpreting these indicators, and to compare the outcome of the intervention with other studies that have used similar indicators. A good source of outcome indicators is the Evaluation Instruments Bank of the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA).

When selecting outcome indicators, it is useful to consider the information that each one can provide, the procedures and instruments required to collect data on them, and the tools needed to analyse and interpret that information. If new indicators do have to be generated, they must be consistent with the objectives and should also be specific, measurable, achievable, realistic and time-bound (i.e., they must allow changes to be observed within the timeframes defined by the evaluation).

It is common to use several types of indicator, as a single indicator rarely has the full capacity to evaluate the objectives of an intervention. For example, to measure the efficacy of a treatment aimed at stopping alcohol consumption, information from different indicators can be analysed, such as the rate of cessation of consumption after treatment, the treatment retention rate and the maintenance of the effects a year after finishing the treatment. It is also advisable to use both drug use indicators and indirect indicators in the outcome evaluation. Indicators such as the prevalence or incidence of use of certain substances, the age of initiation of use, intention to use, etc., can be used, as can others such as the prevalence of mental or organic disorders that may be related to drug use (e.g., liver cirrhosis).
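
For illustration only, the following sketch shows how indicators like those just mentioned could be computed from individual follow-up records; the field names and figures are invented and do not correspond to any standard instrument.

```python
# Illustrative sketch: computing three hypothetical outcome indicators
# (treatment retention, cessation at end of treatment and maintenance at
# 12 months) from invented follow-up records of an alcohol treatment.

records = [  # one dictionary per (fictitious) participant
    {"completed_treatment": True,  "abstinent_at_end": True,  "abstinent_at_12m": True},
    {"completed_treatment": True,  "abstinent_at_end": True,  "abstinent_at_12m": False},
    {"completed_treatment": False, "abstinent_at_end": False, "abstinent_at_12m": False},
    {"completed_treatment": True,  "abstinent_at_end": False, "abstinent_at_12m": False},
]

n = len(records)
retention_rate = sum(r["completed_treatment"] for r in records) / n
cessation_rate = sum(r["abstinent_at_end"] for r in records) / n
maintenance_rate = sum(r["abstinent_at_12m"] for r in records) / n

print(f"Retention in treatment:        {retention_rate:.0%}")
print(f"Cessation at end of treatment: {cessation_rate:.0%}")
print(f"Maintenance at 12 months:      {maintenance_rate:.0%}")
```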

How? | As in the process evaluation, mixed data-collection methods that combine quantitative and qualitative approaches provide a more complete view of the situation. In outcome evaluation, however, quantitative methods become more relevant, and it is advisable to use standardised instruments with demonstrated validity and reliability.
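
As one purely illustrative example of a reliability check, the sketch below computes Cronbach's alpha on invented item scores from a hypothetical questionnaire; in practice, a validated standardised instrument and dedicated statistical software would normally be used.

```python
# Minimal sketch of a common reliability coefficient (Cronbach's alpha)
# computed on invented item scores from a fictitious questionnaire.
from statistics import variance

# rows = respondents, columns = items of the (fictitious) scale
scores = [
    [3, 4, 3, 5],
    [2, 2, 3, 3],
    [4, 5, 4, 5],
    [1, 2, 2, 2],
    [3, 3, 4, 4],
]

k = len(scores[0])                                   # number of items
item_vars = [variance(col) for col in zip(*scores)]  # variance of each item
total_var = variance([sum(row) for row in scores])   # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Values around 0.7-0.8 or above are usually taken as acceptable reliability.
print(f"Cronbach's alpha = {alpha:.2f}")
```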

When? | In the outcome evaluation, at least one post-intervention measurement is required. However, as already mentioned, a single post-intervention measurement provides no guarantee regarding the efficacy of the intervention, and it is much more sensible to take one measurement before the intervention and at least one afterwards. Depending on the intervention and its objectives, outcomes can be measured immediately after the intervention or some time after it has finished. When an intervention is designed, its effects are implicitly expected to last as long as possible, so, in addition to measuring the effects at the end of the intervention, some outcome evaluations repeat the measurements later (e.g., 6 months, 12 months or longer after the intervention ends). This reveals the duration of the effects in the short, medium and long term.
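
A minimal sketch of such a pre/post design with one follow-up measurement is shown below; the 0-10 scale and the scores are invented for illustration only.

```python
# Illustrative sketch: comparing a pre-intervention measurement with a
# post-intervention measurement and a later follow-up. The scores and the
# 0-10 "risk perception" scale are invented purely for the example.
from statistics import mean

pre       = [2, 3, 1, 4, 2, 3]   # baseline scores, one per participant
post      = [5, 6, 4, 6, 5, 6]   # immediately after the intervention
follow_12 = [4, 5, 4, 5, 4, 5]   # 12 months later

print(f"Mean change at end of intervention: {mean(post) - mean(pre):+.1f}")
print(f"Mean change at 12-month follow-up:  {mean(follow_12) - mean(pre):+.1f}")
# A pre/post design shows whether an observed change persists over time,
# although without a comparison group it cannot rule out other explanations.
```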

Where? | Data to evaluate outcomes are usually collected in the places where the target population is located (intervention settings, households, places of work, study or leisure, etc.).

How much? | Establishing references for the changes an intervention is expected to produce helps to estimate the level of achievement of those changes. It is important that this estimate is realistic. These reference "standards" can often be found in the evidence on the efficacy of the intervention. If none exist for a specific type of intervention, similar references can be sought in social or health areas other than drugs (e.g., traffic accidents, sexually transmitted diseases); if none are known, it is better to set conservative references rather than exaggerated ones. For example, the changes sought in the population in preventive interventions can be in behaviours (e.g., drug use) or in intermediary variables that influence these behaviours, such as beliefs, attitudes, abilities and motivations. In general, changes in these psychosocial variables are observed earlier and are larger than changes in behavioural variables, as changing behavioural habits usually requires interventions sustained over time and evaluations in the medium and long term.
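
By way of illustration, the sketch below compares an observed change with a pre-defined reference standard; both the prevalence figures and the 5-percentage-point reference are invented, and real references should always come from the evidence base.

```python
# Sketch of checking an observed change against a pre-defined reference
# standard. All figures below are invented for the example.
baseline_prevalence = 0.30   # proportion reporting past-month use before the intervention
followup_prevalence = 0.26   # the same indicator after the intervention
reference_reduction = 0.05   # conservative standard: at least 5 percentage points

observed_reduction = baseline_prevalence - followup_prevalence
print(f"Observed reduction: {observed_reduction * 100:.0f} percentage points")
print(f"Level of achievement vs reference: {observed_reduction / reference_reduction:.0%}")
```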

Who? | The teams that collect, analyse and interpret information in outcome evaluations must be made up of professionals with knowledge and experience in study design, instrument use and the application of research methods, as well as training in statistical and/or qualitative analysis methods. To obtain this expertise, collaboration can be sought with other sectors of the community that have it, such as universities. Those responsible for the outcome evaluation may or may not be involved in the design and/or implementation of the intervention. If they are, it is known as an internal evaluation; if not, it is called an external evaluation. The latter is often considered to lend greater credibility to the results of the evaluation: because it is carried out by a team different from the one that implemented the intervention, there is no conflict of interest if the outcomes are less positive than hoped (or clearly negative). The trade-off is that an external evaluation usually adds complexity to the evaluation process and involves higher costs than an internal one. It is advisable to decide which kind of evaluation to adopt before the evaluation process begins.