
Chapter 7
Evaluating Practice


Karen A. Luker


The University of Manchester, Manchester, UK


Gretl A. McHugh


The University of Leeds, Leeds, UK


Introduction


Evaluation is an important but often neglected component of health care planning and is examined here in the general context of health visiting and public health. Taking into account the developments in evidence-based practice and the interest in evaluation as part of quality improvement (Health Foundation, 2015), health care can be described as the application of best current knowledge in the context of the condition and values of an individual patient or client, family, or population – sometimes referred to as evidence-based practice or policy making (Muir Gray, 2004). Health care and health maintenance can be seen to be the concern of the individual, especially in a society where self-care is advocated for the management of a number of long-term conditions (DH, 2005a, 2009a, 2010a; DHSSPS, 2015). One of the main challenges facing the health care system is in caring for people with two or more long-term conditions, referred to as ‘multimorbidity’ (DH, 2014a). Some long-term conditions, such as obesity and type 2 diabetes, are preventable, and a number of professional groups, including health visitors, have a responsibility to provide health advice and health care services to individuals, families, and communities to support the prevention of these conditions.


Skill mix is a permanent feature of health visiting teams. Teams vary, but comprise experienced and recently qualified health visitors, as well as health visitors with specialist interests (e.g. in child protection). In addition, other practitioners – including community staff nurses, nursery nurses, and health care assistants – may undertake aspects of the work that was once considered the traditional role of health visitors, such as running baby clinics or home visiting. Whilst acknowledging this heterogeneity within health visiting teams, for simplicity we have tended to use the term ‘health visitor’ in this chapter to refer to all these different team members.


The greater emphasis now placed on the evaluation of health care services stems from the escalating public demand for quality services and lower costs, brought about by increased expectations, technological advances, and service complexity, plus the pressure on public finances (DH, 2005b, 2010a, 2012a; NHS England, 2013). The increased complexity of the service has, in part, been a response to the political pressure to improve the efficiency and quality of the National Health Service (NHS). There has been an increasing trend towards specialisation, initiated by medicine and mirrored in nursing, which has resulted in greater competition for limited resources. The uncertainty generated by the fact that demand exceeds supply, in terms of finance, has meant that doctors and nurses alike are forced to look for verifiable facts to assist them in establishing a convincing case worthy of continued or additional financial support. Historically, most policy decisions in the health care field have simply followed a logical appraisal of the options by the people involved in the decision making and may not have involved an analysis of available data. In order to justify society’s continued support and commitment to health care, it is necessary to demonstrate effectiveness. Over the past several years, there has been an increase in funding for effectiveness research and for research that seeks to identify whether data on effectiveness exist (Cowley et al., 2013).


In this chapter, we present the key sources of evidence available to health visitors in evaluating their practice, including the different types of evaluation and suggested ways to approach evaluation, such as the ‘care planning process’ and target setting using the ‘SMART’ approach. In addition, we provide examples of evaluation of practice and optional activities.


Sources of evidence for practice


The National Institute for Health and Care Excellence (NICE) is the expert organisation which evaluates new and existing medicines, treatments, and procedures in the UK and recommends (or does not recommend) their adoption by the NHS in England and, via the devolved administrations, in Wales, Scotland, and Northern Ireland. It also provides evidence-based guidance on the treatment and care of people with specific diseases and conditions (see www.nice.org.uk). Clinical guidelines of relevance to health visiting practice concerning the postnatal care of women and their babies were developed by the National Collaborating Centre for Primary Care (NCCPC) on behalf of NICE (NCCPC, 2006), with an addendum regarding sudden infant death syndrome in 2014 (NICE, 2014a). There has also been a considerable amount of guidance from NICE around public health, with several of its published guidelines of relevance to health visitors, such as Community Engagement (NICE, 2008a), Maternal and Child Nutrition (NICE, 2008b), Reducing the Differences in the Uptake of Immunisations (NICE, 2009a), and Maintaining a Healthy Weight and Preventing Excess Weight Gain among Adults and Children (NICE, 2015). Joint publications from NICE and the Social Care Institute for Excellence (SCIE; see www.scie.org.uk), such as Looked-After Children and Young People (NICE & SCIE, 2010a) and Strategies to Prevent Unintentional Injuries among Children and Young People Aged Under 15 (NICE & SCIE, 2010b), also provide relevant guidance to health visitors in their practice. Another guideline, which extends into the domain of health visiting, relates to helping people to change their behaviour at the population, community, and individual levels (NICE, 2014b). NICE also offers health professionals guidance on how to implement changes in practice, which is often difficult. In addition, some of the guidelines have ‘audit guidance’ on achieving improved standards; for example, improving immunisation targets (NICE, 2009b). New topics in the area of public health and social care are emerging on a regular basis. NICE has also produced guidance specifically about health visiting as a local government briefing for local authorities and other agencies (NICE, 2014c). It is clearly important for health and social care practitioners to regularly check the NICE and SCIE websites in order to keep up to date with evidence-based practice guidance. It might be useful to locate some of the existing guidance of relevance to health visitors; this can be explored by completing Activity 7.1.


Since 2009, NHS Evidence (see www.evidence.nhs.uk) has provided individuals working in health and social care with access to information, guidelines, and NHS policy with the aim of improving the quality of care for patients and service users. The NHS Improving Quality initiative, established in 2013 (www.nhsiq.nhs.uk), is driving the agenda on quality. In Scotland, the NHS quality-improvement hub (www.qihub.scot.nhs.uk) includes a knowledge network (www.knowledge.scot.nhs.uk) which provides evidence and community tools to staff working in health and social care, along with a portal that enables practitioners to search for the evidence for use in practice (www.evidenceintopractice.scot.nhs.uk). This is all part of Scotland’s quality-improvement strategy for health care.


Another good source of evidence is Effectiveness Matters, produced by the Centre for Reviews and Dissemination (CRD), which provides information on the development and promotion of evidence-based care, including health promotion interventions, in terms of what was and was not effective in the past (see http://www.york.ac.uk/crd/publications/effectiveness-matters/). There are also evidence briefings produced for commissioners of services, such as guidance on supporting women with postnatal depression through psychological therapies (http://www.york.ac.uk/crd/publications/evidence-briefings/). CRD also used to publish the Effective Health Care Bulletin, which included summaries of systematic reviews and syntheses of the research evidence on health care interventions. This ceased publication in 2004, but the archives are still available online (http://www.york.ac.uk/crd/publications/archive/). The CRD continues to publish high-quality systematic reviews that evaluate the effects of health and social care interventions (see http://www.york.ac.uk/crd/), including for the Health Technology Assessment (HTA) programme, which provides completed and ongoing health technology assessments (see http://www.crd.york.ac.uk/crdweb/). HTA reports are also available from the National Institute for Health Research (NIHR) Journals Library (http://www.journalslibrary.nihr.ac.uk/hta). The Database of Abstracts of Reviews of Effects (DARE), which included abstracts of systematic reviews, quality assessments of reviews, and details of Cochrane reviews and protocols, and the NHS Economic Evaluation Database (EED) were previously produced by the CRD; these ceased in 2015 but can still be accessed online. An NIHR Dissemination Centre has now been established, which focuses on the dissemination of evidence in health and social care (www.dc.nihr.ac.uk). These are all important sources of evidence for health visitors, and they can be explored further using Activity 7.2.


There is very little evidence concerning the effectiveness of health visiting interventions (discussed later and in Chapter 3) when compared to interventions provided by other health professionals. For some activities, such as health promotion interventions, a national scoping exercise of the contribution of nurses, midwives, and health visitors to child health and child health services found very little outcome data demonstrating that these interventions had sustained effects on negative health behaviours or other health outcomes (While et al., 2005). A more recent narrative review examined the literature on key health visitor interventions for children and families (Cowley et al., 2013) and found some evidence of beneficial outcomes from health visiting practice (in particular, structured home visiting and early interventions), but the changes were small. In addition, a number of systematic reviews of health visiting practice have found conflicting evidence concerning the interventions delivered by health visitors, public health nurses, and other community health workers (Elkan et al., 2000; Ciliska et al., 2001; Shaw et al., 2006).


The only systematic review of the effectiveness of health visiting was conducted over 15 years ago (Elkan et al., 2000). At that time, there was a limited amount of evidence supporting some of the interventions which health visitors undertook. However, there was positive evidence for home visiting to support parents in improving breastfeeding and immunisation rates. Other, later systematic reviews focused more on home visiting programmes, with one reporting that home visits by public health nurses in Canada to pre- and postnatal clients could produce significant benefits, especially with high-intensity interventions and with clients considered at ‘risk’ (Ciliska et al., 2001). An analysis of nine systematic reviews on home visiting programmes found some evidence to support home visiting in the antenatal and postnatal periods, namely in reducing rates of childhood injury and in the identification and management of postnatal depression (Bull et al., 2004). In a more recent meta-ethnographic systematic review of four studies examining programmes for parents of children with behavioural difficulties, two of the studies involved health visitors delivering the programme (Kane et al., 2007). The recently published early evaluation of the effectiveness of the Family Nurse Partnership (FNP) programme (discussed in Chapter 4) is an example of a pragmatic randomised controlled trial (RCT), with women randomly assigned to receive either FNP or usual care, which enabled an evaluation of whether the provision of this programme should be continued (Robling et al., 2015).


Practice varies between employing authorities, even within England, despite the NHS Core Service Specification (NHS England, 2014a) delineating what health visitors do, who they visit, and the content of interactions. However, home visiting is only one aspect of their work, and there is a need to examine further the role and function of health visitors in the UK.


For over 25 years – first highlighted in the National Health Service and Community Care Act 1990, now superseded by the Health and Social Care Act 2012, which came into operation on 1 April 2013 – evaluation has been firmly on every professional’s agenda, but unfortunately it has been a very neglected area of study, especially in community nursing services and health visiting. According to the Queen’s Nursing Institute (QNI, 2008), there are a number of reasons why community nurses and health visitors need to be skilled in evaluating their services or changes to their services, namely:



As registered professionals, they have a duty to provide the best possible care, and to do this they need to know the impact of their work on those it’s intended to benefit. The increase in competition to provide services means that professionals need to be able to show how and why their service, or new way of working is effective.


(QNI, 2008: 1)


The Nursing and Midwifery Council (NMC) has standards of proficiency for specialist community public health nurses (SCPHNs), several of which focus on the need for health visitors to be involved in evaluation (NMC, 2004); these standards have not been superseded. They are based on 10 key public health principles (Skills for Health, 2008) and the four domains of health visiting (CETHV, 1977; Twinn & Cowley, 1992). Three of the public health principles focus on evaluation within health visiting practice, covering two of the domains of health visiting (see Table 7.1). In Wales, the nursing and midwifery strategy has a focus on research and development, with one specific aim of reviewing practice through audit and service evaluation (Public Health Wales NHS Trust, 2014). NICE’s (2014c) guidance to those commissioning health visiting services focuses on what health visitors can achieve: (i) building resilience and reducing costs in later life; (ii) identifying families with additional needs and providing support; (iii) improving wider factors that affect health and well being; (iv) reducing the numbers of children dying prematurely and living with preventable harm and ill health; and (v) supporting people to live healthy lifestyles and make healthy choices.


Table 7.1 NMC standards of proficiency for entry on to the register


Domain: Influence on policies affecting health

Principle: Developing health programmes and services and reducing inequalities
  • Work with others to plan, implement, and evaluate programmes and projects to improve health and well being
  • Identify and evaluate service provision and support networks for individuals, families, and groups in the local area or setting

Principle: Research and development to improve health and well being
  • Develop, implement, evaluate, and improve practice on the basis of research, evidence, and evaluation

Domain: Facilitation of health-enhancing activities

Principle: Strategic leadership for health and well being
  • Apply leadership skills and manage projects to improve health and well being
  • Plan, deliver, and evaluate programmes to improve the health and well being of individuals and groups

Source: NMC (2004: 11–12)


The mere recognition of evaluation as a neglected area of practice will not be sufficient to promote the activity, however. A better understanding of what evaluation might entail is discussed in the next section; this may encourage health visitors to focus on this element of their work.


Evaluation – the problem of definition


The word ‘evaluation’ is widely used, and for the most part its meaning is taken for granted. Few attempts have been made to formulate a conceptually rigorous definition of ‘evaluation’ or to analyse the meanings behind its use. The lack of a clear definition has meant that the word ‘evaluation’ is used interchangeably with other terms, such as ‘assessment’, ‘appraisal’, and sometimes ‘audit’. We talk of ‘assessment’ or ‘evaluation’ in the context of client or community needs, and indeed assessment is said to be the first stage in care planning and evaluation the last. We hear nurse managers talk of staff appraisal, performance and development reviews, and evaluation, and this relates to how well individual practitioners are functioning in their particular role. It is evident, then, that confusion may arise if we use the word ‘evaluation’ in a casual way. In addition, there may be confusion about the difference between ‘evaluation’, ‘research’, and ‘audit’ (QNI, 2008). Educational toolkits are available to assist practitioners in understanding the differences between research, audit, and service evaluations (Brain et al., 2011).


Taking into account the common usage of the term ‘evaluation’, there is a distinction to be made between the evaluation of everyday practice by a practitioner and evaluative research, such as the evaluation of pilot schemes and programmes, including:



  • a pilot evaluation of the text4baby mobile health program (Evans et al., 2012);
  • evaluation of a public health nurse visiting programme for pregnant and parenting teens (Schaffer et al., 2012);
  • a mixed-methods study evaluating health visitor assessment of mother–infant interactions (Appleton et al., 2013);
  • an RCT of a guided Internet behavioural activation treatment: Netmums Helping with Depression (O’Mahen et al., 2013, 2014).

‘Evaluation’, when used in a general way, tends to refer to the everyday occurrence of making judgments of worth. Although this interpretation implies some form of logical or rational thought, it does not presuppose any systematic procedures for presenting objective evidence to support the judgment. When used in this way, ‘evaluation’ refers only to the process of assessment or appraisal of worth. The Health Foundation (2015) describes evaluation as capturing insight that might otherwise be lost and generating new knowledge so that lessons can be learned.


Evaluation has been defined by St Leger et al. as:



The critical assessment, on as objective a basis as possible, of the degree to which entire services or their component parts (e.g. diagnostic tests, treatments, caring procedures) fulfil stated goals.


(St Leger et al., 1992: 1)


Two important elements of this definition are highlighted by the authors: first, the reference to goals, which explicitly requires a comparison with some standard, and, second, the importance of objectivity, which ensures the findings of the evaluative process are independent of judgments or prejudices on the part of those undertaking and commissioning the evaluation.


Evaluative research, on the other hand, implies the utilisation of scientific methods and techniques for the purpose of making an evaluation. Inherent in the term ‘evaluative research’ is an emphasis on the measurement of change and the generation of new knowledge. The distinction made between ‘evaluation’ and ‘evaluative research’ may seem irrelevant or daunting to some practitioners who consider that they will never become involved in research as a primary activity, but, nevertheless, practitioners may be involved in generating research questions or become engaged in data collection for others. Many research questions are developed from an observation in practice. However, in their everyday work, health visitors are constantly involved in making judgments of worth. For example, if we simply say that we believe that visits to families with a disabled child are a good idea, this is an unsubstantiated judgment. If, on the other hand, we say that visits to families with a disabled child are good because they reduce the incidence of loneliness and depression in the mother, this may be a substantiated judgment (i.e. based on evidence of the impact of visits on loneliness and depression). If we have some insight into the criteria used as the basis of the statement, this does not make it research, but it does imply that we have some evidence to support our position, and may suggest possible outcome measures to be used in measuring the impact of home visits and in research.


In the past 10 years, a great deal has been written about the importance of being able to measure the outcomes of care. This has largely been driven by the evidence-based practice movement and the international financial situation. There is a desire to measure the effectiveness or adequacy of care (DH, 2008a). The Darzi Report (DH, 2008a) emphasised the need to improve the effectiveness of care throughout the patient journey, with personal care, quality, and safety high on the agenda. The Commissioning for Quality and Innovation (CQUIN) payment framework is in place to improve services for people, and nurses play a part in ensuring services and targets are met (NHS England, 2014b; RCN, 2012). In addition, there have been developments in the NHS in the use of patient-reported outcome measures (PROMs) to measure the effectiveness of care (DH, 2008b). Policy has highlighted the need to focus on improving health care outcomes, with a more widespread use of PROMs where available, and on learning from patient/client experience surveys and real-time feedback (DH, 2010a). There has been more focus on experiential elements within the NHS patient experience framework, which includes such essential components of the patient experience as integration and coordination of care (DH, 2012b). The important insights provided by patient/client experiences of health care, and information collected on what is good and what could be improved with regard to services, ultimately assist in forming judgments about performance and accountability. Before reading on, take some time to consider your own practice and begin to complete Activity 7.3.


Conceptualising evaluation


There are different types of evaluation. The Health Foundation (2015) provides an overview of four main ones:



  • Summative: usually carried out at the end of an intervention, when data are available to determine whether the goals or objectives have been met, what improvements have resulted, and how the benefits compared to the costs.
  • Formative: carried out while an intervention is evolving, enabling us to shape it and to take into account any changes that need to be made before full implementation.
  • Rapid cycle: a form of formative evaluation which helps to determine whether an intervention is effective; goals are often fixed, and the evaluation enables improvements to the intervention to be made. It is often used for large-scale changes, for example a new service.
  • Developmental: a form of formative evaluation in which goals can often be changed; it provides real-time feedback to the intervention team and often involves close working with those undertaking the evaluation.

(Health Foundation, 2015)


In conceptualising the various approaches to evaluation, the goal-attainment model stands out as being particularly appealing to those involved in the evaluation of health care. The notion of goal attainment is embodied in the target-setting approach adopted by many Trusts; two good examples are the immunisation and cervical screening targets set as goals for general practices and the targets set in the health visitor service specification. Target setting is also used in identifying service priorities. Targets should be specific, measurable, achievable, realistic, and time-bound, summarised in the mnemonic ‘SMART’. A SMART target might be, for example, to increase the proportion of children receiving their first dose of the MMR vaccine by age 2 from 85% to 90% within 12 months. This framework was developed by Doran (1981) as a management tool and has been used widely in health care and in health care education (Bovend’Eerdt et al., 2009; Sidhu et al., 2015).


The starting point for any evaluation is to be clear about the aims of the service; that is, the benefits expected to be accrued by the service’s recipients. There is general agreement amongst those concerned with evaluation that the most important – and yet most difficult – phase is the clarification of goals and objectives. The emphasis on goals and objectives stems from a conceptualisation of evaluation as a measurement of the success or failure of an activity insofar as it reaches its predetermined objectives. One way to avoid this difficulty is to evaluate the service from the perspective of the service user or client. This is a more open-ended approach to evaluation, which seeks to capture the experience of the service user in terms of the perceived benefits and disbenefits of the service. An example of this approach is Russell & Drennan’s (2007) Web-based study of 4665 mothers’ views of the health visiting service in the UK. Using Netmums (www.netmums.com), an online parenting organisation that currently has 1.7 million users (mostly mothers), Russell & Drennan conducted a survey prompted by concern that the health visiting service was becoming increasingly difficult to access. They found that mothers valued the health visiting service, particularly health visitors’ knowledge and expertise concerning child development and parenting, but felt that some recent changes to the service, such as a focus on those families most in need, were making health visitors less accessible to them (Russell & Drennan, 2007). Another example is a questionnaire study examining parents’ perceptions and experiences of the health visiting service (McHugh & Luker, 2002). Although the service was valued by those who received it, and health visitors provided much needed support and advice, the study highlighted high client expectations and the need for information based on research and evidence to allow the service to deliver higher-quality interventions in the community. Such use of evidence to inform practice and safe health visiting interventions resulted in Tower Hamlets developing an evidence-based toolkit, whose priorities – infant stimulation and early speech and language, obesity prevention, and a focus on stressed and unsupported families – were identified using a modified Delphi approach (Bryar et al., 2013; Barts Health NHS Trust, 2014). The provision of support and guidance in these areas enabled health visitors to focus their practice on these particular health needs of the local population.


A narrative review of service users’ views on health visiting found that a trusting relationship between client and health visitor was key to providing valued support; qualitative data highlighted the positive experiences of health visiting services, but showed that those with negative experiences tended to disengage from the service (Donetto et al., 2013). Organisational features of the health visiting service which caused disruption to the care experience were also seen to be an issue (Donetto et al., 2013). The Department of Health’s (DH, 2011) service vision for health visiting has been to increase the number of health visitors, in order to provide increased support and an evidence-based service to all families, rather than just to those considered at risk. There is a focus on investing in services for children and families around the following areas, which provide for the development of standards that can be used to evaluate health visitors’ impact:



  • Improving access to services
  • Improving the experience of children & families
  • Improving health and well being outcomes for under fives
  • Reducing health inequalities.

(NHS England, 2014a)


The process of evaluating can be complex and may be open to subjective influences. Different people want to know different things. Evaluation involves a combination of the basic assumptions underlying the activity to be evaluated and the personal values of those engaged in the activities being evaluated. Hence, the process of evaluation always starts with the recognition of values, which may be either explicit or implicit, before moving on to setting goals/objectives and deciding how goal attainment will be measured, putting the goal activity into operation (programme operation), and finally assessing the effect of the goal operation (programme evaluation) – a sequence first described by Suchman (1967).


Example: tackling childhood obesity


With reference to a goal/objective attainment evaluation process, we will explore possible ways of evaluating a health programme. Let us suppose that we are health visitors who wish to tackle the growing problem of obesity in the under 5s. We have observed that there appear to be a number of overweight children attending the child health clinic for their 2-year check. Our observations will reflect our values. We believe that to be overweight in childhood is not good; we may hold different views on why this is undesirable. Some of us may believe that overweight children experience more upper respiratory tract infections, may be more prone to obesity later in life, and are at an increased risk of developing type 2 diabetes. Others may be of the opinion that children with obesity do not look as attractive as those of average or light build, and that this may lead to an internalisation of a negative self-image or bullying by other children. In part, these value judgments reflect the beliefs and values of the society in which we live and work. However, our views may also be based on literature on childhood obesity we have read or on our use of weight-monitoring charts. In any case, in response to this perceived need, we decide to set up a healthy eating club for parents of under 5s. The underlying rationale for this action is that we believe that it would be beneficial for families to learn about healthy eating, and furthermore we believe it would benefit society if the next generation were fit and healthy.


The objectives of the club will probably be wider than just getting parents to understand about healthy eating in childhood, although this will, of course, be the major focus. First, we may have to convince the parents of the importance of healthy eating to a child’s healthy weight and that having an overweight child is undesirable and a potential threat to future health and well being. Second, as health visitors, we see ourselves as health promoters and facilitators, and it is likely that we will use the healthy eating club to teach the parents of children under 5 about the nutritional value of food and the value of a balanced diet. We may even go so far as to teach family members how to cook nutritious food. There are plenty of resources on this, such as those produced by First Steps Nutrition Trust (www.firststepsnutrition.org), which can help provide the informational basis of sessions on eating well, fussy eating, portion sizes, or healthy cooking, depending on the needs of the group. There is also scope for individual practitioners to think of additional objectives, which may be more or less important than those already stated.


After identifying our objectives or goals, the next stage of the evaluation process is to clarify how we will determine when a goal has been achieved. Looking back at our objectives, we can identify a number of possible criteria on which we can base our judgment concerning the success or failure of the programme. First, the number of parents who attend the programme, and the frequency of their attendance. This may give us some indication as to whether the programme was seen as useful by the parents. However, we have to be careful in how we interpret these data, because whether parents attend the programme or not depends to some extent on structural factors – we will return to this topic when thinking about programme planning. Second, any increase in the activity levels of children in the programme, or any slowing of the children’s weight gain. In addition, once the club has ended, whether healthy eating is sustained or the child’s target weight is achieved. Unlike most areas of health visiting, we are fortunate in having truly objective criteria in this context. Third, the difference between what parents knew before they attended the healthy eating club and what they know following exposure to our learning materials and teaching sessions on the nutritional value of food. This information may best be obtained by giving the parents a questionnaire to fill in on the first visit and then a follow-up questionnaire after the sessions on nutrition have concluded. This will give us some indication of their knowledge level and some feedback as to whether or not our teaching sessions have been successful.


In order to achieve our goals, the evaluation process suggests that we devise a strategy for setting up our healthy eating club. It is important to work with parents in the planning and implementation of this initiative and to learn from the successes of other, similar initiatives. First, it would be helpful to know how many parents would be interested in coming to the club. We can find out this information by putting a poster in clinics or by informing other health visitors that this initiative is being developed, so that they can inform parents during home visits. The next step is to decide how the programme will be run. Will there, for example, be a formal session every week? Will other health professionals be asked to take part, such as a dietician or a cookery expert? Will there be a session on increasing activity in young children? It is also important to look at the evidence on the effectiveness of weight management interventions, including studies which take into account the child’s perspective on obesity and weight management, such as the systematic reviews by Whitlock et al. (2008, 2010), Rees et al. (2009), Waters et al. (2011), and Bleich et al. (2013). There is also guidance and a framework to enhance a practitioner’s effectiveness in tackling obesity specifically through the Healthy Child Programme (HCP) (Hunt & Rudolf, 2008; Rudolf, 2010), and NICE has developed guidelines for healthy weight management in children and adults (NICE, 2015). Training for health professionals is also available via the charity HENRY (Health, Exercise & Nutrition for the Really Young; www.henry.org.uk), which equips practitioners to support families to develop healthier lifestyles. HENRY’s evidence-based intervention focuses on three elements: information about food and activity, parenting skills, and behaviour change.


Once the programme has been devised, we then have to meet the challenge of putting it into action. We may find in the early stages that we need to make modifications. For example, we may find that it is better to put the formal talk first and the demonstrations afterwards. Perhaps it will be necessary to enlist the support of some colleagues and do more group work. There are always teething problems when launching any new programme, and time is well spent in the early stages making adjustments and modifications. Evaluation of the programme is best thought about at the development stage, in order to show its benefits and that it has achieved its specific goals/objectives. It is important to remember that it is usual to evaluate a programme in terms of its goals, and the goal-measuring criteria are the key to the evaluation. In addition, we may wish to collect supplementary information which might help us with future planning. We could, for example, ask those attending the sessions to suggest ways in which we could improve the healthy eating club. Our evaluation – that is, our judgment about whether or not the programme was a success or failure – may feed back into future programmes. Activity 7.4 provides you with an opportunity to start thinking about designing a new programme to be implemented within health visiting practice.


Evaluation and evaluative research


Although the evaluation component of the care planning process is able to substantiate its claims, there are two reasons why it cannot be said to constitute evaluative research. First, in evaluative research, the main thrust of the activity is directed towards measuring how far an intervention has achieved or not achieved its goals, whereas in the care planning process the measurement of the effectiveness of care is subsidiary to the primary goal of giving care. Owing to the secondary purpose of evaluation in the care planning process, goal statements may not be recorded; however, health visitors and other practitioners are involved in making judgments of worth about the care which they give, whether they record it or not. Many of these value judgments will be made on the basis of systematically collected data and experience, and the evaluation criteria may vary between individuals. Second, in evaluative research, data are collected on a predetermined target population, and therefore findings may be related to more than one individual, whereas in the evaluative component of the care planning process the health visitor evaluates the care she gives to each individual and is not in a position to select her clients in the way that a researcher selects a sample.


All in all, evaluation may best be viewed as a continuum. The evaluation component of the care planning process can be placed almost anywhere along this continuum, depending upon the way in which the data are collected (systematically or otherwise) and recorded (see Figure 7.1). Data which have been systematically collected and recorded during the execution of care planning may be used retrospectively by health practitioners for research purposes. The retrospective use of the material may be referred to as ‘evaluative research’, because evaluation and not care giving has become the main thrust of the activity and the researcher is able to determine the population to be studied.


Figure 7.1 Continuum of evaluation.


Evaluation of health care


Concern for the measurement of the quality of health care provided to patients/clients and attempts at evaluating care are not new. It may be possible to get agreement on the importance of evaluation. There is less agreement, however, about what will be evaluated and how the evaluation will take place. The US Agency for Healthcare Research & Quality (AHRQ) provides guidance on improving quality to those involved in designing or evaluating new interventions aimed at improving care coordination. It discusses five key elements: (i) assessment of needs for coordination; (ii) identification of options for improving coordination; (iii) selection and implementation of one of the alternatives; (iv) evaluation to determine effects on care coordination and outcomes of care; and (v) amendments, if required (McDonald et al., 2007).


The North American literature on evaluation dates back to Derryberry (1939), who, in a report concerning the accomplishments of nursing, stated:



In the past, evaluations of nursing services have been based upon volume and intensity of service…Evidence of the more elusive quality of service as expressed by the changing state of the patient has been sought in the present analysis.


(Derryberry, 1939: 2035)


It is interesting that more than 75 years have passed since Derryberry made this plea to move away from evaluations based on volume and intensity of service in favour of focusing on patient/client outcomes. Within the context of health visiting, we have unfortunately failed to move very far forward. It is only with policy recommendations on improving and measuring the effectiveness of care, and the drive towards quality in health care, that there has been a push towards using defined outcome measures or PROMs. This has been taken up by the NHS Health & Social Care Information Centre (HSCIC) in measuring quality from the patient’s perspective, largely focusing on clinical procedures and using PROMs to measure health gain after surgical treatment. This will be expanded to more areas of health care in the future (see www.hscic.gov.uk/proms).


Data of relevance to health visiting work are limited. There is often little feedback to health visitors on the activity data they record for client contact. The data most commonly used to provide insight into the work of health visitors at both a national and a local level are the statistical data which relate to the frequency of health visitor visits as part of the service specification. With the increase in the number of health visitors between 2010 and 2015, NHS England developed indicators for monitoring service delivery. These are referred to as ‘health visitor service delivery metrics’; the most recent data available are provided in Table 7.2.
