
Chapter 20 Evaluation in health promotion





Overview


Evaluation is an integral aspect of all planned activities, enabling an assessment of the value or worth of an intervention. Evaluation also performs several other roles. For practitioners, evaluation helps develop their skills and competences. For funders, it demonstrates where resources can be most usefully channelled. For lay people, it provides an opportunity to have their voice heard. There are additional reasons why evaluating health promotion is a key aspect of practice. As a relatively new discipline, health promotion is under great pressure to prove its worth through evaluation of its activities. In addition, the drive in the National Health Service (NHS) to ensure that all practice is evidence-based affects health promotion as well as more clinical activities. And in a situation where resources will always be limited, demonstrating the cost-effectiveness of interventions is important. There are thus many factors leading to a demand for evaluation of health promotion practice.


Evaluating health promotion is not a straightforward task. Health promotion interventions often involve different kinds of activities, a long timescale, and several partners who may each have their own objectives. Health promotion is still seen as belonging within the health services, where the dominant evaluation model is quantitative research centred on experimental trials, with randomized controlled trials (RCTs) as the preferred evaluation tool. Health promotion has had to argue its case for a more holistic evaluation strategy encompassing qualitative methodologies and taking into account contextual features.


The focus of this chapter is on evaluating health promotion interventions. Evaluation of research studies is also part of the health promoter’s role and remit, and readers are referred to Chapters 2 and 3 in our companion volume (Naidoo & Wills 2005) for a detailed discussion of this topic. This chapter considers what is meant by evaluation, its rationale, the range of research methodologies used in evaluation studies, how evaluation is carried out, and the role of evaluation in building the evidence base for health promotion.



Defining evaluation


Evaluation is a complex concept with many definitions that vary according to purpose, disciplinary boundaries and values. A comprehensive definition of evaluation is: ‘the systematic examination and assessment of features of a programme or other intervention in order to produce knowledge that different stakeholders can use for a variety of purposes’ (Rootman et al 2001, p. 26). This definition is useful because it flags up the importance of the purpose of evaluation, and the fact that there can be many different reasons to evaluate. Evaluation can provide information on the extent to which an intervention met its aims and goals, the manner in which it was carried out, and its cost-effectiveness. It is important to be clear at the outset about the purpose of evaluation, as this will determine what information is gathered and how it is obtained. The value-driven purpose of evaluation distinguishes it from research (Springett 2001). Evaluation uses resources which might otherwise be used for programme planning and implementation, so a clear purpose is also necessary in order to legitimate and protect this use of resources.



From a practitioner’s perspective, evaluation is needed to assess results, determine whether objectives have been met and find out if the methods used were appropriate and efficient. These findings can then be fed back into the planning process in order to progress practice. Evaluations of interventions are used to build an evidence base of what works, enabling other practitioners to focus their inputs where they will have most effect. From a lay perspective, evaluation helps to clarify expectations and assess the extent to which these have been met. Evaluation may also help determine what strategies had most impact, and why. Without evaluation, it is very difficult to make a reasoned case for more resources or expansion of an intervention. Even when a programme is rolling out an established and effective intervention, specific local features may have an unanticipated impact that will only become apparent in an evaluation. There are sound reasons for evaluating all interventions, although more innovative projects will require more substantial and costly evaluation.




Evaluation covers many different activities undertaken with varying degrees of rigour or reflectiveness. At its simplest level, evaluation describes what any competent practitioner does as a matter of course, that is, the process of appraising and assessing work activities. This includes informal feedback as well as more systematic review of health promotion interventions. In the well-man clinic example, noting how the sessions have been received by the men, or soliciting their comments or those of peers and colleagues, is part of the evaluation process. Evaluation is often used to refer to a more formal or systematic activity, where assessment is linked to original intentions and is fed back into the planning process. For the well-man clinic example, this might involve monitoring vital statistics and conducting a before-and-after study of lifestyle behaviours. Health promotion evaluation should integrate core health promotion principles such as equity and participation into the evaluation process. In the well-man clinic example this might be achieved through asking the men what they wanted from participating in the programme and whether they achieved their goals. Comparing the socioeconomic status of participants and non-participants would help determine whether the programme was reinforcing or challenging social and health inequalities.
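The arithmetic behind a simple before-and-after study can be sketched in a few lines of code. The following Python fragment is purely illustrative, and all the figures in it are invented: it computes a paired t-statistic for self-reported exercise sessions per week, measured for the same men before and after attending the clinic.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t-statistic for before-and-after measurements taken on the
    same individuals; returns the statistic and its degrees of freedom."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Invented data: exercise sessions per week for ten clinic attenders.
before = [0, 1, 0, 2, 1, 0, 3, 1, 0, 2]
after = [1, 2, 1, 2, 3, 1, 3, 2, 1, 3]
t, df = paired_t(before, after)
print(f"t = {t:.2f} with {df} degrees of freedom")
```

A design of this kind shows whether the men changed, but not why: without a comparison group, even a significant change cannot safely be attributed to the clinic rather than to, say, seasonal effects or media coverage. That limitation is what the experimental designs discussed in the next section are intended to address.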



Evaluation research methodologies




Evaluation is often conducted more formally as research, using a variety of different methods. The classic scientific method of proof, the experiment, relies on controlling all factors apart from the one being studied, and is best achieved under laboratory conditions. Such control is, however, impossible and unethical where people’s health is concerned. The RCT is the next most rigorous scientific method of proof. The RCT involves randomly allocating people to an intervention or a control group. Random allocation means that the two groups should be matched in terms of factors such as age, gender and social class, which are all known to affect health. Any changes detected in the intervention group are then compared with those found amongst the control group. Those changes which occur in the intervention group but not the control group can then be attributed to the health promotion programme.


In the well-man clinic example in Activity 20.3 an RCT study would involve randomly allocating all men in the target group to either the intervention group (invited for screening) or the control group (not invited). The two groups would then be compared after the intervention had taken place. If the intervention group showed statistically significant improvements in health status or health-related behaviour over and above those recorded for the control group the intervention would be deemed to be effective.
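The allocation-and-comparison logic of an RCT can also be sketched in code. The Python fragment below is a minimal illustration, not a description of any actual trial: the participant numbers and the physical-activity outcome are invented. It randomly allocates a sample into intervention and control arms, then applies a two-proportion z-test to ask whether the difference between the arms is larger than chance alone would be expected to produce.

```python
import math
import random

def allocate(participants, seed=42):
    """Simple randomization into two equal arms. Real trials often use
    block or stratified randomization to keep the arms balanced."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # intervention, control

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: is the gap between two observed proportions
    bigger than random variation alone would produce?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

intervention, control = allocate(range(400))

# Invented follow-up data: 62 of 200 intervention men and 41 of 200
# controls report meeting a physical-activity target a year later.
z = two_proportion_z(62, len(intervention), 41, len(control))
print(f"z = {z:.2f}")  # |z| > 1.96: significant at the 5% level (two-tailed)
```

In a real trial the outcome measures and the analysis would be specified in advance, and the analysis would typically adjust for baseline characteristics and loss to follow-up; the sketch shows only the core comparison that the text describes.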


The degree of scientific rigour necessary to conduct an RCT is hard to achieve in real-life situations. Most health promotion programmes have spin-off effects, and indeed are designed to do so. It is impossible to isolate different groups of people or to ensure that programmes do not ‘leak’ beyond their set boundaries. Where an RCT can be conducted, however, its design does mean that changes detected in the intervention group may be ascribed to the health promotion programme with a greater degree of confidence.


Evaluation research may also use qualitative methods to focus on understanding the processes involved in change. This kind of evaluation provides detail on what is happening within an intervention and which of its features have been effective. The case study is one example of this approach: the health promotion intervention is the ‘case’ that is intensively studied using a variety of methods, enabling the evaluator to build a detailed picture of how the intervention has affected the people involved. Case studies are typically small-scale, and findings are expressed in descriptive rather than numerical terms. Each case study is unique, and findings cannot be generalized to other situations. The method’s strength is that there is a high degree of confidence that identified effects are real and result from the programme.


In the well-man clinic example in Activity 20.3, a qualitative case-study approach might involve in-depth interviews with the practice nurse and with a sample of men who took up the screening opportunity. The interviews would aim to explore what motivated the men to accept the invitation to attend the clinic, how they found the experience and how (if at all) it has affected them.


Both the RCT and the case study are valid methods which can be used to isolate the effects of health promotion interventions. There are also many other methods that lie between these two extremes, e.g. surveys which aim to identify significant trends. In practice methods often overlap or are combined. The RCT fits into a scientific, quantitative medical model of proof, has higher status and is generally regarded as more respectable and credible than the case study.



Why evaluate?


Evaluation uses resources that could otherwise be used to provide services. Given that services are always in demand, there needs to be a strong rationale for devoting resources to evaluation rather than service provision. New or pilot interventions warrant a rigorous evaluation because, without evidence of their effectiveness or efficiency, it is difficult to argue that they should become established work practices. Other criteria that can be used to determine if evaluation is worth the effort relate to how well it can be done. If it will be impossible to obtain cooperation from the different groups involved in the activity, it is probably not worthwhile trying to evaluate. If evaluation has not been considered at the outset but is tacked on as an afterthought, the chances are that it will be so partial and biased as not to be worth the effort.


Evaluation is only worthwhile if it will make a difference. This means that the results of the evaluation need to be interpreted and fed back to the relevant audiences in an accessible form. All too often, evaluations are buried in inappropriate formats. Work reports may go no further than the manager, or academic studies full of jargon may be published in little-known journals.



Results of evaluation studies will be relevant to many different groups and it may be necessary to reproduce findings in different ways in order to reach all these groups.




What to evaluate?


Health promotion objectives may be about individual changes, service use or changes in the environment. Example 20.7 shows the range of possible objectives associated with smoking reduction interventions, each of which would need evaluation.



Although all these factors relate to health, they are quite separate, and there is no necessary connection between, say, increased knowledge and behaviour change. It is therefore inappropriate to evaluate a given objective (e.g. increased physical activity) by measuring other aspects of an intervention (e.g. number of leaflets taken at a health fair or number of people reporting that they would like to exercise more). It is important to choose appropriate indicators for the stated objectives. This issue is discussed further in Chapter 19, where the log-frame model and the use of logic to select appropriate indicators are considered.



Process, impact and outcome evaluation


Evaluation is always incomplete: it is not possible to assess every element of an intervention. Instead, decisions are taken about which evaluation criteria to prioritize, and sometimes about which objectives are to be assessed. A distinction is often made between process, impact and outcome evaluation. Process evaluation (also called formative or illuminative evaluation) is concerned with assessing the process of programme implementation. Effects can be immediate (impacts), intermediate or long-term (outcomes); impact and outcome evaluation are both concerned with assessing the effects of interventions.



The following criteria have been proposed to guide evaluation in public health (Phillips et al 1994, cited in Douglas et al 2007):






