Chapter 1
Managing Knowledge in Health Visiting
Kate Robinson
University of Bedfordshire, Luton, UK
Introduction
The mantra of evidence-based practice (EBP) is now heard everywhere in healthcare. This chapter will explore what it might mean, both theoretically and in the context of everyday health visiting practice. Is it a way of enhancing the effectiveness of practice or yet another part of the new managerialism of guidelines, targets, and effectiveness? Why might EBP be an important ideal? When a practitioner intervenes in a client’s life, the outcome should be that the client is significantly advantaged. In health visiting, that advantage can take many forms: the client can have more and better knowledge, they might feel more capable of managing their affairs, they might better understand and be able to cope with difficult thoughts, feelings, and actions – the list is extensive. Later chapters will detail the ways in which health visiting can lead to better outcomes for clients and communities. However, the proposition that there should be an advantage derived from the practitioner’s intervention is particularly important in the context of a state-financed (i.e. taxpayer-funded) healthcare system. If an individual wishes to spend their money on treatments or therapies of dubious or unexplored value offered by unregulated practitioners, then that is entirely a matter for them, provided that they have not been misled or mis-sold! However, when the state decides to invest its resources in the provision of a particular service and associated interventions, then arguably there has to be some level of evidence or collective informed agreement which gives confidence that the choice is justified. In addition, of course, every health visitor must be able to account to the Nursing and Midwifery Council (NMC), if required, for what she does and doesn’t do.
Chapter 7 explores how health visiting might be assessed, measured, and evaluated. The emphasis in this chapter is on how we choose, individually or collectively, to develop particular services and perform particular actions which we know with some degree of certainty should lead to better outcomes for the client. But how do we know things with any certainty? What sort of knowledge do we need to make good choices? Although there are very many different ways of categorising or describing forms of knowledge, for our purpose here it will be sufficient to make some simple distinctions. We might categorise knowledge by type. For example, Carper’s (1978) categorisation of knowledge as empirical (largely derived from science), aesthetic (or artistic), ethical, or personal is well known and is used in nursing. Or we might categorise it by source, and ask where it comes from (books, journals, other people, personal experience, etc.). Or we might use the simple but important distinction between ‘knowing that’ and ‘knowing how’ (McKenna et al., 1999). For example, I can know that swimming pools are places people go to engage in swimming and other water sports without ever having been to a swimming pool, but I can only say I know how to swim if I can do so. In the former case, I can probably explain how I came by the knowledge, but in the latter, I may not be able to explain how I know how to swim or what I am doing when swimming; the knowledge statement ‘I know how to swim’ is dispositional: its truth is determined by my ability to swim. Such ‘knowing how’ knowledge is sometimes called ‘tacit knowledge’, in contrast to ‘explicit knowledge’ or ‘knowing that’. Our concern here is less about how theoretically we might define knowledge than about the question of what sort of knowledge health visitors could and should be using – and who says so – and what sort of knowledge they are using. There is substantial controversy here, as various factions argue that their type or source of knowledge is the most important. And the outcome of what might be argued to be a fight to define the ‘proper’ knowledge basis for practice is important as it has the potential to impinge directly on the health and safety of the client and on the degree to which health visiting can be said to ‘add value’ to clients.
In later sections of this chapter, we will look more closely at EBP, which is currently the dominant knowledge protocol in the National Health Service (NHS), and try to establish what forms of knowledge it valorises – and what forms it discounts – and why. The chapter will also look at reflective practice (an alternative protocol for generating and managing knowledge about practice that is supported by many institutions and individuals within nursing) and at the idea of knowledge being generated and managed within communities of practice (CoPs) (an idea that is popular in education and some other public sector areas); each of these can be viewed as a social movement, with enthusiastic advocates trying to ‘capture’ the support of key health organisations and institutions, as well as the hearts and minds of individual practitioners. We will also look at what is known about the types and sources of knowledge that healthcare practitioners actually use in practice – which prove to be somewhat different from any of the ‘ideals’ promoted by these social movements.
But before examining any of these ‘ideal’ types of knowledge management, it will be useful to remind ourselves about the practice of health visiting. For evidence-based health visiting or reflective health visiting or any other imported concept to be a reality, it must be integrated into the taken-for-granted, existing ways in which health visitors go about their business. But defining or describing health visiting is not simple. If we start by looking at what the government thinks it is, then we must recognise that, in the UK, health visiting is practised in four nations (involving two assemblies and two parliaments), each of which has a different idea of what health visitors should do, and to what ends. We then have the view of the profession as a whole, which is expressed through various collective means. But when we try and look at the actual practice of health visiting, we find that there is a lack of shared knowledge about what goes on in the very many interactions which lie outside of the public domain of hospitals and clinics. Despite these difficulties, the next two sections will look briefly at the contexts in which health visitors manage knowledge.
Defining health visiting practice
The Department of Health commissioned a review of health visiting, Facing the Future (DH, 2007), aimed at highlighting key areas of health visiting practice and skills. This is not a wholly research-based document – and makes no claims to be – although there are some references to research. Rather, ‘this review is informed by evidence, government policy and the views of many stakeholders’ (DH, 2007: §1). Decisions about what health visiting should be about are therefore largely presented as decisions for the community of stakeholders in the context of stated government priorities. Key elements of the decision-making process can be seen as pragmatic and commonsensical – in the best sense. For example, the review argues that the health visiting service should be one which someone will commission (i.e. pay for), one that is supported by families and communities (i.e. acceptable to the users of the service), and one that is attractive enough to secure a succession of new entrants (i.e. it has a workforce of sufficient size and ability).
In terms of the future skills of health visitors, the review is clear that they will be expected to be able to translate evidence into practice – although it is less specific about what sort of evidence will count and how the process will be managed. However, at the national level, it recommends that the relevant research findings to support a 21st-century child and family health service be assembled. There is also some indication that future practice will be guided by clear protocols: ‘Inconsistent service provision with individual interpretation’ will be replaced by ‘Planned, systematic and/or licensed programmes’ (DH, 2007: recommendation 8). As we shall see, the reduction in variations in practice is one of the key aims of the EBP movement. In terms of evidence underpinning practice, the document also draws specific attention to the expanding knowledge base in mental health promotion, the neurological development of young children, and the effectiveness of early intervention, parenting programmes, and health visiting. Clearly, this is a very broad base of evidence, derived from a range of academic and practice disciplines.
So, while the review is not specifically about the evidence or knowledge base of health visiting and how it might be used, many of the relevant themes in debates about EBP begin to emerge. For example:
- What is the role of the practitioner in assembling and assessing evidence?
- How can evidence be translated into practice?
- What counts as evidence?
- How can other bodies support the practitioner by generating and assembling evidence?
- How can any practitioner be conversant with developing knowledge bases in a wide variety of other disciplines?
- What will be the role of protocols, guidelines, and ‘recipes’ for practice?
These questions all remain relevant, and health visiting commissioners, managers, and practitioners attempt to answer and reconcile them at all levels of practice. However, at the highest level of government, where the health visiting service is created and defined, significant changes in the knowledge base have been used to refocus the purpose and practice of health visiting. The new knowledge largely stems from the neurosciences and developmental psychology, and not from within health visiting itself, and is concerned with how and when brain development occurs. It underpins the premise that early intervention in every child’s life – starting from conception – to optimise brain development is a key plank in strategies aimed at improving educational attainment, reducing crime and antisocial behaviour, reducing obesity, and improving health. Perhaps the most robust expression of what might be called the ‘early intervention movement’ is the first of two government reports by the Labour MP Graham Allen: Early Intervention: The Next Steps (Allen, 2011a). The context of Allen’s report is the UK fiscal deficit and the Conservative/Liberal Democrat coalition government’s agenda of addressing this deficit by making substantial reductions in public spending. Indeed, Allen’s second report (Allen, 2011b) is entitled Early Intervention: Smart Investment, Massive Savings. In order to emphasise the need for early intervention, his first report starts with two images of a child’s brain: one from a ‘normal’ child and one from a child who has suffered significant deprivation in early childhood. The differences in neurological development are obvious and striking, even to a lay reader, but the important conclusion from the evidence is that such damage is caused by poor parenting, is largely permanent, and is the cause of significant problems in the child’s behaviour, which both impede the wellbeing of the child and damage society. These are claims which stem from research that is not easily accessible to health practitioners or their clients. It is also research that is ongoing, with claims being contested and disputed: work by Noble et al. (2015) identifies family income and parental education as the prime correlates of neurological development, for example.
While the claims about neurological development in Allen’s first report remain deeply contested, they have been accepted at the highest levels of government, so the questions of what to do and who will do it become acute. In terms of what to do, policy makers look to evidence-based, precisely defined packages of action that have been robustly evaluated to provide the most secure way forward. Allen (2011a: ch. 6) identifies 19 programmes (e.g. the Family Nurse Partnership (FNP)) which he believes should form the basis of early intervention because of their targets and proven efficacy. Such intervention packages have been developed in many countries, often by private agencies, and need either to be incorporated into ‘traditional’ ways of working – or to replace them (see Chapter 5). So we now have complex bodies of evidence about both a perceived problem and a systematic solution used to prescribe practice. Such packages do not just say what must be done but also define how it must be done, and we will look at some of the issues raised by them in a later section. In practice, the responses to the early intervention imperative have varied between the UK’s nations, each having to answer such questions as: To what degree should intervention be targeted, and at whom? Who should carry out the intervention, and do they have the capacity and capability? And how should we ensure that the work is carried out consistently and effectively?
The Department of Health in England responded with the creation of a revitalised and expanded health visiting profession. The Health Visitor Implementation Plan 2011–15 (DH, 2011) proposed that health visitors provide a four-level service, with services allocated according to the needs of the child and family. There was also to be increased recruitment and training, including an emphasis on leadership development. While this was welcomed by the profession, there were dissenting voices. The Lancet, for example, posted a commentary by a public health doctor who supported the emphasis on early years intervention but argued that ‘This policy takes a narrow approach, concentrating investment in expanding professional capacity in a service which can only provide part of the solution’ (Buttivant, 2011). And the Department of Health itself commissioned a major literature review (Cowley et al., 2013, 2015) to try and identify evidence to support the policy. In the other nations of the UK, different approaches were taken. In Scotland, for example, health visiting as such remained comparatively marginalised until in 2013 the Chief Nursing Officer required that ‘the current Public Health Nursing (PHN) role…should be refocused and the titles of Health Visitor and School Nurse be reintroduced’ (Moore, 2013: summary). The health visitor was to work with children aged 0–5 using ‘targeted’ interventions. Part of the rationale for the change was evidence that the public understood and preferred these ‘traditional’ titles. In Wales and Northern Ireland, too, there was increased focus on early years, although local policies reflected local traditions and ambitions. So, while no-one was disputing the knowledge base of an early intervention strategy, there have been considerable differences in the way this translates into policy for practice. High-level policy makers have their own ideological commitments and knowledge of local history, which mediate between knowledge and policy (for practice). Research rarely dictates policy, but it does inform it.
In England, the three key policies of early intervention, evidence-based pathways, and health visitor leadership remained throughout the defined years of the Health Visitor Implementation Plan 2011–15. The National Health Visiting Service Specification 2014/15 (NHS England, 2014) continued to make explicit reference to the evidence base of the Allen (2011a) report: ‘Research studies in neuroscience and developmental psychology have shown that interactions and experiences with caregivers in the first months of a child’s life determine whether the child’s developing brain structure will provide a strong or a weak foundation for their future health, wellbeing, psychological and social development’ (NHS England, 2014: 1.1.3 p. 5). The four levels of intervention remained in place and explicit reference was made to care pathways. Additional specifications for practice come in the form of required assessment protocols. This national specification is reflected in local practice handbooks. As an example, a practice handbook for health visiting team members published in 2012 by the Shropshire County Primary Care Trust (PCT) (Langford, 2012) ran to 43 pages of prescription concerning when visits were to be made and what should be done in each. The document is rich in references – ‘Evidence/Rational’ (sic) – but these are largely not to original research but to recipes for action; for example, it specifies 10 assessment tools. So, by 2015, the idea that health visiting was an innovation technology rather than an individualistic practice was well established, at least in England and some other parts of the UK. Health visiting practice was conceived of as something which could be prescribed to solve defined national problems. Such policy prescriptions are not confined to health problems but can also be found elsewhere in social care, education, and the justice system – usually in areas where governments are particularly concerned to achieve particular outcomes. For example, a similar approach was taken in the case of another perceived threat to society – (Islamist) terrorism – where schools were encouraged to use packaged interventions designed to prevent the radicalisation of children.
From the point of view of the profession as a whole, the resurgence of health visiting was seen as an opportunity to raise its profile and consolidate its gains. A new body, the Institute of Health Visiting (iHV), was founded in 2012 with the avowed core purpose of raising ‘professional standards in health visiting practice…By promoting and supporting a strong evidence base for health visiting and offering CPD [continuing professional development] and professional training’ (iHV, 2012). In other words, it sought to improve practice not by telling health visitors what to do but by improving their knowledge and skills. A central part of the work of the iHV is therefore the development of various ‘tools’ to help practitioners enhance their practice and guide them through an increasingly complex world of guidelines, pathways, programmes, and protocols and an expanding research base involving many disciplines. These tools are not ‘prescriptions’ of good practice; rather, they provide access to learning opportunities, case studies, publications, and Web-facilitated channels for practitioner–practitioner and practitioner–expert interaction. You could characterise this as a ‘bottom-up’ process of using evidence to improve practice, in contrast to the ‘top-down’ process of prescription based on policy, but as we shall see, both models remain part of the EBP ideology. Within EBP, there is also a substantial body of work exploring how knowledge management fits into the everyday realities of practice. So, what do we know of actual practice in health visiting – its opportunities and constraints?
What do health visitors do – and where do they do it?
Against the background of the government seeking to prescribe health visiting practice as a remedy for society’s ills, it is important to review what is known about the actual practice of health visiting; that is, what health visitors do on a day-to-day basis. Unfortunately, relatively little is known – other than tacitly by those who do it – about the realities of everyday health visiting. Everyday practice is rarely seen as a valid subject either for scientific research or for practice narratives; the same is true of a similar practice, social work. In the case of social work, however, we find an interesting research programme conducted by Harry Ferguson (2008, 2010), which aims to bring to light the essential nature of its practice. Ferguson argues that current research is focused on systems and interprofessional communication, which ‘leaves largely unaddressed practitioners’ experiences of the work they have to do that goes on beyond the office, on the street and in doing the home visit’ (Ferguson, 2010: 1100). Ferguson is trying to refocus on actual practice; he further argues:
Reclaiming this lost experience of movement, adventure, atmosphere and emotion is an important step in developing better understandings of what social workers can do, the risks and limits to their achievements, and provides for deeper learning about the skilled performances and successes that routinely go on.
(Ferguson, 2010: 1102)
Of course, this is just as true for health visiting, where a significant part of the practice is leaving the office, driving to the client, thinking about how the visit will work, knocking on the door, and so on. Ferguson’s account of the excitement and fear of walking through disadvantaged neighbourhoods and of negotiating home visits with disobliging clients is focused on social workers working in child protection, but it must resonate with all practising health visitors. The way in which he conceptualises the home visit is of particular interest: ‘All homes and the relationships within them have atmospheres and how professionals manage stepping into and negotiating them is at the core of performing social work and child protection and managing risk effectively’ (Ferguson, 2010: 1109).
So how would the ever-useful Martian sociologist describe health visiting practice? They would be bound to notice that it is largely about doing things with words. Note the emphasis on doing; talk isn’t just something which surrounds the doing, it is the doing – praising, blaming, asking, advising, persuading: every utterance is an action produced for a purpose, although the speaker is rarely consciously aware of this. The skills involved in talking are so deep that, just as with walking, they are not normally subject to constant ongoing analysis. Most of us do not consciously think about how to walk – we just do it. But talk is the health visitor’s key performative skill, and because doing things with talk is a primary skill, health visitors need a more profound understanding of how it works – just as a ballet dancer would need a more profound understanding of how her body works than would the person taking the dog for a walk. Of course, as well as talking, health visitors also make notes and write reports, but this is still doing things with language in order to interact with others.
In the 1980s, there was considerable interest within sociology in researching how interactions, largely based on talk, could constitute various forms of institutional practice. This idea was rather neatly defined in an edited volume of studies called Talk at Work. The editors argue:
that talk-in-interaction is the principal means through which lay persons pursue various practical goals and the central medium through which the daily working activities of many professionals and organisational representatives are conducted.
(Drew & Heritage, 1992: 3)
Health visiting is one such profession, and a number of studies have been conducted within that sociological tradition (see, for example, Dingwall & Robinson, 1990; Heritage & Sefi, 1992). The focus is on making available what happens in the ‘private’ world of the home visit. Cowley et al. (2013), in their extensive review of health visiting literature, reinforce the centrality of the home visit in health visiting, arguing that it is one of the three key components of practice (the other two being the health visitor–client relationship and health visitor needs assessment).
Health visitors also work in clinics, general practitioner (GP) surgeries, children’s centres, church halls, social services departments, and so on. So a further defining characteristic of health visiting is that it does not have a fixed locality or place of work. There is an interesting literature on the issue of place in healthcare (see, for example, Angus et al., 2005; Poland et al., 2005), and of course it relates to the issue of mobility which is central to Ferguson’s (2008, 2010) work. Poland et al. (2005) argue that, while practitioners are sensitive to issues of place, this has largely been ignored in debates about best practice and EBP. They further assert that:
Interventions wither or thrive based on complex interactions between key personalities, circumstances and coincidences…A detailed analysis of the setting…can help practitioners skilfully anticipate and navigate potentially murky waters filled with hidden obstacles.
(Poland et al., 2005: 171)
By ‘place’, Poland et al. (2005) mean a great deal more than mere geography. The concept includes a range of issues, notably the way power relationships are constructed and the way in which technologies operate in and on various places. Alaszewski (2006) draws our attention to the risk involved in practising outside ‘the institution’. While there are ways in which physical institutions mitigate the risks from their clientele:
The institutional structure of classification, surveillance and control is significantly changed in the community. Much of the activity takes place within spaces that are not designed or controlled by professionals, for example the service user’s own home.
(Alaszewski, 2006: 4)
The discussion in this section draws on concepts and evidence from a number of sources, which can be used as vehicles for thinking about health visiting. But, as Peckover (2013) points out, we do not have a coherent body of research on the reality of health visiting practice. Cowley et al. (2015: 473), in their review of the literature, acknowledge that their work has revealed the concepts and theories underlying health visiting but not ‘the forms of practice that exist in reality.’ We know what health visitors aim to do but not what they actually do. Peckover argues that this lack of a ‘meta-narrative’ for health visiting is both a weakness and a strength: a weakness because it struggles to explain itself to policy makers and to establish a strong base in higher education, but a strength in that it seems to be able to adapt to changing demands. Given the complexity of health visiting, we need to look at the top-down prescriptions for practice and ask, first, how we can reconcile the practice prescriptions of the policy makers and managers with what we know about what Ferguson calls ‘the fluid, squelchy nature of practices…’ (Ferguson, 2008: 576), and, second, how we can source evidence to support the parts of practice which do not, or do not yet, fall within the realm of defined practice. Can the concepts and practices of EBP and knowledge management help?
Evidence-based practice
In order to understand the importance of the EBP movement, you need to take yourself back in time about 30 years. Back then, doctors and nurses did what they had been taught to do; experienced practitioners became teachers and passed on what they had learned in their years of practice. There was almost no reference to research findings, but lots of reference to both ‘facts’ and ‘proper ways of doing things’. That is not to say that there was no innovation: new drugs became available and there were surgeons trialling procedures we now take for granted, such as joint replacements. But the idea that the way to do things in healthcare was passed on from previous practitioners was prevalent. So the idea of EBP was really revolutionary – and there was considerable opposition to it.
What has come to be known as EBP had its foundations in the evidence-based medicine (EBM) movement, which started in the UK in the early 1990s. The NHS was interested in funding and promoting research, and there was a research infrastructure. However, there was increasing dissatisfaction among some key individuals in the medical profession – notably Dr (now Sir) Muir Gray, who was an NHS Regional Director of Research and Development – over the fact that, within medicine, treatments which had been proven to be effective were not being used, while treatments which had been shown to have no or little beneficial effect continued to be used. This was despite considerable efforts to change practice; for example, the Getting Research into Practice and Purchasing (GRIPP) project, developed in the Oxford NHS region, looked at four treatments:
- the use of corticosteroids in preterm delivery;
- the management of services for stroke patients;
- the use of dilation and curettage (D&C) for dysfunctional uterine bleeding;
- insertion of grommets for children with glue ear.
Good research evidence was available to underpin decisions in all these areas of practice, and health authorities within the Oxford region sought to ensure that practice adhered to the research-based recommendations. However, variations in practice proved difficult to eradicate, and it was felt that more needed to be done. Did the practitioners not understand the research? Did they need motivation to change from their traditional ways of practice? Perhaps a more widespread and coordinated effort to base practice on research needed to be developed.
The fundamental proposition of the subsequent EBM movement was that practice should take account of the latest and best research-generated evidence to underpin both individual clinical decision making and collective policy making. At the heart of EBM is the idea that it provides a vehicle by which the practitioner can continually examine and improve their individual practice by testing it against scientifically validated external evidence and importing proven treatments. Activity 1.1 will help you to explore the evidence around interventions delivered by health visitors.
Sackett et al. (1997) define EBM as consisting of five sequential steps:
1. Identifying the need for information and formulating a question.
2. Tracking down the best possible source of evidence to answer that question.
3. Evaluating the evidence for validity and clinical applicability.
4. Applying the evidence in practice.
5. Evaluating the outcomes.
So, for example, a doctor faced with a patient with a severe infection might ask, ‘Which antibiotic will best cure this infection?’ and look to the literature on drug trials for an answer. Thereafter, they would evaluate the validity of the trial and its relevance to their patient, administer the drug (or not), and see what happened. Or, to use one of the examples from the GRIPP project, a doctor treating a child with ‘glue ear’ might ask, ‘Will surgery to insert grommets make a difference in the long term compared with conservative treatment?’ A search of the literature would indicate that surgery to insert grommets is not necessarily cost-effective in the long run in terms of outcome. But this example illustrates a complexity that the rational model of EBM does not necessarily deal with. At the point that the doctor opts for conservative treatment, what message is conveyed to the parent with a child who has suddenly gone deaf and who is losing both speech and friends? The research evidence on cost-effectiveness may not fully acknowledge the social issues surrounding the clinical problem. EBM is essentially a linear model for change which assumes that clinicians should make rational choices based on the scientific evidence available to them. It does not necessarily take into account the choices that clients would make, which might be equally rational for them. Activity 1.2 will be helpful in gaining some experience in the practice of EBM.
EBM defines the best source of evidence as the randomised controlled trial (RCT), or better still a group of RCTs, which can be systematically reviewed and analysed. Early on in EBM, the idea was that clinicians would get involved in all stages of the process, including the search for and evaluation of the evidence, and there were – and are – various manuals and training programmes to help them do that. This can be defined as the simple linear model of practitioner-based EBP, which is still espoused by some. But, in practice, a cadre of specialist and largely university-based ‘experts’ has grown up to manage the search for and evaluation of the scientific evidence and to produce specifications for practice, which are then disseminated through various fora. These specifications are known by a number of names, including ‘clinical guidelines’ and ‘care pathways’, and their use will be explored later in the chapter. The degree to which any specification will constitute a suggestion or an instruction to practitioners largely depends on the importance of the topic and the costs of that area of practice. The contrast between two propositions found in EBM – that individual practitioners should evaluate the evidence and change their practice accordingly and that evaluating evidence is an expert skill requiring considerable resources – remains important. Research evaluation is a key component of many healthcare curricula, but the degree to which it might or should be a key component of practice remains contested.
So, the EBM movement has been, and continues to be, subject to considerable debate and criticism. However, there is a danger that it is criticised for ideas which it does not wholly espouse.
First, its initial proponents did not suppose that the use of research evidence would entirely override clinical judgment, but rather that it would work in conjunction with it:
External clinical evidence can inform, but can never replace, individual clinical expertise and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision.
(Sackett et al., 1997: 4)
Second, while it is true that a hierarchy of evidence was proposed, which placed that derived from RCTs at the top as the ‘gold standard’, it did not assert that other forms of evidence were not of some value, and neither did it entirely ignore evidence derived from qualitative research (Glasziou et al., 2004).
Early EBM was an enthusiasts’ movement, but a whole industry has since grown up around it, and it is now central to government health policy and is spreading into other occupations. So, who is supporting the development of EBM and its promotion in new disciplines such as nursing, social work, and education – and why?
First, there is a lobby from researchers. After all, if no-one uses their work then why should government continue to fund it? Healthcare research is now a substantial industry, forming a significant part of many university budgets. New journals have sprung up to explore the issues, and, of course, publication is the lifeblood of academics. Gerrish (2003), citing Estabrooks (1998), argues that EBM has generated a shift in power and prestige in healthcare from experienced expert clinicians to researchers.
Second, there is the government, which is increasingly committed to the development of evidence-based policy making in many spheres, certainly including health. A range of organisations have been established to support EBM and fund research designed to feed directly into practice, including the Cochrane Collaboration (which exists to produce systematic reviews), the National Institute for Health and Care Excellence (NICE), and a number of university-based units, such as the University of York Centre for Reviews and Dissemination. Within government-funded research programmes, there has been an increased emphasis on ‘impact’, in addition to validity, reliability, and so on. Activity 1.3 will help you to explore elements of effective health visiting practice.
Third, there are the nurses, social workers, and teachers themselves. Although there was (and is) some concern within medicine that EBM would erode the importance of clinical judgment, in these professions the idea of developing a strong formal and recognised evidence base was seductive. A few decades ago, the theory that a profession needed to have certain characteristics became popular in occupations such as nursing, social work, and teaching. While the theory itself was deeply flawed, as it largely ignored issues of power and prestige based on class and gender, it did inspire a section of nursing to fight for an independent regulatory body – now the NMC – and for graduate entry to the occupation, which has now been realised with the 2010 change in NMC regulations. This professionalising agenda has extended to a belief that a ‘proper’ profession will have – and use – an extensive evidence base gleaned from research; that is, it should aspire to be an ‘evidence-based’ profession. Consequently, some nursing constituencies have vigorously championed the development of nursing research and the inclusion of nursing in multidisciplinary research – and indeed there has been a very rapid expansion of nursing research, although much of it remains small-scale (Cowley et al., 2013, 2015).
Fourth, there is the consumer, who increasingly wants the ‘best’ treatment available and is intolerant of variations in practice – or ‘postcode lotteries’. This may in part be fuelled by media reports of research ‘breakthroughs’. However, the consumer’s attitudes are at best ambivalent – the extensive and growing use of ‘alternative’ therapies, many of which have a research evidence base which is slight at best, shows that the consumer also wants to decide for themselves what works. Activity 1.4 will help you to explore this further.
So, we can conclude that powerful forces have fuelled the development of the EBM movement and have vested interests in its success. More fundamentally, like any social movement, it had to be in the right place at the right time. A number of factors seem to have been crucial. Importantly, the oil crisis of the mid 1970s forced Western industrial societies into financial panic. Muir Gray acknowledges the importance of this economic crisis in the development of EBM (cited in Traynor, 2002). Never again would the price of something not matter, and state-funded healthcare represents a massive part of government expenditure. When doctors undertook operations for glue ear with no proven benefit, that was no longer just their decision. And partly as a result of the economic crisis, society was also changing. Traynor (2002) defines key products of this new emphasis on fiscal control to be the rise of managerialism, the increased use of audit, and an increased emphasis on research and development (R&D). In addition, society was increasingly conscious of risk but wary of the power and authority of both science and professions to provide solutions. How did EBM fit into this landscape? In theory, having sufficient research evidence to specify ‘best practice’ allowed managers greater control over individual practitioners, and audit systems ensured that this control was maintained. Although EBM is based on a science embedded in experimental work, it was not a scientific ‘grand narrative’; rather, it provided ‘recipes’ for best practice, which would, in theory, reduce variations in practice and control risk. A further key element in the success of EBM – and in making it a worldwide phenomenon – is the exponential growth in information technology. Without the ability to search digital databases worldwide, EBM would be a much reduced enterprise.
The concepts behind EBM have spread to other healthcare occupations, and subsequently beyond healthcare into management, education, and social work; it is commonplace now to describe the movement as EBP. In 2008, NICE was given a remit for work in public health, including disease prevention and health promotion. Changes have thus had to be made to the way in which EBP operates even within the heartland of medicine. Kelly et al. (2010) offer an ‘insider’s’ perspective on some of these challenges as they work within NICE on the public health agenda – which of course goes beyond healthcare into education, social welfare, and so on, and depends on disciplines such as psychology, sociology, and anthropology. In moving into new areas, institutions such as NICE have had to travel beyond biomedicine, with its relatively simple causal models, and engage with very different academic and practice disciplines with their own distinct ways of generating and validating knowledge. A fundamental problem is that the EBM methodology for generating evidence, which gives superiority to RCTs, is not going to work. Few such trials are conducted outside of biomedicine, and much of the knowledge in social science disciplines is generated by the use of theories and models, which are not amenable to the sort of meta-analysis to which trials can be subjected:
Theories and models require a different way of encapsulating their form and content, their provenance, their ideological dispositions and so on. They are not facts in the sense that someone’s occupation or systolic blood pressure are facts. Theories are ways of organising ideas, usually designed to make observable facts clearer or more coherent, or to offer some kind of explanation for the particular way the facts are, or appear to be.
(Kelly et al., 2010: 1059)
If these differences in the way in which knowledge is generated and validated cannot be acknowledged then much of the knowledge of these disciplines will be disregarded as being of lower status or as including bias.
A further problem is that in many public health issues there is a long causal pathway between an intervention and the change it is designed to create, and this creates conceptual complexity not encountered when testing drug A against drug B. Kelly et al. (2010) outline some of the ways in which they are engaging with these issues, which include both creating new methodologies (e.g. developing logic models to manage methodological pluralism) and trying to use experts in the field to generate consensus.
Issues underlying the use of primary research were highlighted in 2015 when an extensive study by leading academics within psychology published data showing that ‘many psychological papers fail the replication test’ (Open Science Collaboration, 2015). The ability to repeat an experiment and get the same results is a cornerstone of the scientific method, so clearly this called into question the original results. This study is part of an ongoing debate about how we conduct not just psychology but all science and whether issues such as a pressure to publish positive results skew the literature. A key contribution to the debate (Ioannidis, 2005) is entitled ‘Why most published research findings are false’ and a commentary by Richard Horton (2015) in The Lancet argues that the situation is deteriorating. Horton suggests that many aspects of research culture are contributing to ‘bad science’: ‘The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world’ (Horton, 2015: 1380).
So, despite its success in embedding itself into national structures and in spreading into new fields, EBP remains a highly contested concept and an evolving practice. Even within EBM, there were many concerns, which were articulated early on in a useful summary document titled Acting on the Evidence (Appleby et al., 1995), produced by the University of York. This document summarises EBM as ‘the movement away from basing healthcare on opinion or past practice and towards grounding healthcare in science and evidence’ (Appleby et al., 1995: 4). It raises a number of issues. First, it argues that insufficient account is taken by EBM of the uncertainty of clinical practice. Second, it says that it is impossible to generate information on everything – a key issue for health visiting, which exists in a highly complex epistemological and social context. Third, it notes that information about clinical effectiveness generated by RCTs is about populations, whereas clinicians deal with individuals:
How rigid do we expect the doctor to be in reconciling the scientifically derived probabilities of clinical effectiveness with the situation of the individual patient?
(Appleby et al., 1995: 30)
The current landscape of EBP
As we look across the new occupations engaging in EBP, we can see three interesting responses to the original concept, each of which will be explored more fully in the following sections. First, there are those who make theoretical objections to EBM, and particularly to its export into other areas. This is probably best exemplified in a published ‘dialogue’ between Iain Chalmers, a key figure in the EBM movement, and Martyn Hammersley, a leading figure in the sociology of education and research methods, which will be described shortly. Second, there are those who are quite enthusiastic about EBP but dismayed that it just doesn’t seem to change practice. This has produced what might be called the ‘barriers’ literature, which attempts to identify and eradicate the reasons why EBP doesn’t work and has developed into an industry devoted to what has become known as ‘implementation science’. Third, there are those who embrace the concept of EBP but who want to redefine the notion of what counts as evidence – largely because it doesn’t seem to resonate with the realities of their practice. Nursing in particular has criticised the technological model of knowledge used in EBM, and has argued that the linear model of research evidence utilisation may not be wholly appropriate to nursing practice.
From within the discipline of education, Martyn Hammersley has produced one of the most accessible critiques, engaging directly with the arguments of major supporters of EBP – notably Iain Chalmers, who wrote an article in support of EBM entitled ‘Trying to do more good than harm in policy and practice: the role of rigorous, transparent, up-to-date evaluations’ (Chalmers, 2003). Hammersley’s response is direct: ‘Is the evidence-based practice movement doing more good than harm? Reflections on Iain Chalmers’ case for research-based policy making and practice’ (Hammersley, 2005). He seeks first to establish common ground, suggesting that there should be broad agreement about the following propositions:
- Practitioners occasionally do harm in their professional work.
- Research can help provide practitioners and policy makers with useful information.
- Not everything presented as research is either reliable or, indeed, research.
Further, Hammersley agrees with Chalmers that research needs to be mediated before it can be used by individual practitioners:
the results of research should be presented to lay audiences through reviews of the available literature, rather than the findings of individual studies being offered as reliable information.
(Hammersley, 2005: 87)
However, he goes on to argue, first, that the methodologies favoured in EBP – the RCT and the systematic review – are themselves subject to methodological critique and should not be assumed to produce bias-free evidence: ‘research findings must always be interpreted and are never free from potential error’ (Hammersley, 2005: 88). This is not an argument about quantitative and qualitative methods, but rather an argument that all forms of research are socially constructed and that all research is generated and read within a particular context of experience and judgment.
Second, he argues that Chalmers, and by extension other EBP proponents, believes that research can arbitrate in areas where there are debates about what counts as good practice. By implication, he suggests that Chalmers has gone beyond the originally proposed ‘partnership’ between external research evidence and clinical judgment to valorise the external evidence. He rejects the idea that RCTs should have a privileged status above other kinds of knowledge and be used to resolve disputes.
Third, he argues that judgment is fundamental to good practice because ‘practice is necessarily a matter of judgment, in which information from various sources (not just research) must be combined’ (Hammersley, 2005: 88). He asserts that the role of professional judgment may differ between different forms and arenas of practice and argues that downplaying the importance of professional judgment in favour of research evidence could, in some contexts, reduce the quality of practice rather than enhance it.
The dialogue continues with Chalmers’ (2005) response, ‘If evidence-informed policy works in practice, does it matter if it doesn’t work in theory?’, which claims that Hammersley misrepresents his views. Interestingly, Chalmers cites a specific example, familiar to health visitors, of research findings changing the previous ‘commonsense’ recommendations about the way a baby should sleep – on its front or back – as one of the key pieces of evidence supporting the importance and impact of EBP:
These and countless other examples should leave little doubt that it is irresponsible to interfere in the lives of other people on the basis of theories unsupported by reliable empirical evidence.
(Chalmers, 2005: 229)
Hammersley is not the only critical commentator of the EBM movement. For example, Kerridge et al. (1998), writing from a basis in health ethics, argue that EBM has serious ethical flaws. First, while EBM is concerned with outcomes, there are many aspects of outcomes which cannot be properly measured; they cite as examples pain, justice, and quality of life. Second, it is difficult in EBM to decide between the competing claims of different stakeholders. While EBM potentially downgrades the power and authority of individual doctors, who should replace them in the power position? Is it managers? Is it patients? And if the latter, how can that be managed? Third, EBM interventions may transgress common morality because they are concerned only with evidence of efficacy. Kerridge et al. raise issues about the ethical status of trials: on the one hand, there are now strict criteria which might be seen as ‘good’, but on the other, these criteria shift over time. They also argue that RCTs in themselves are subject to ethical questions about ‘the selection of subjects, consent, randomisation, the manner in which trials are stopped, and the continuing care of subjects once the trials are complete’ (Kerridge et al., 1998: 1152).
The literature on EBM and practice is full of such claims and counterclaims. But while such debates may be exciting and energising for those involved in them, they can be somewhat bewildering or even daunting to lay (i.e. non-research) practitioners. Yet they are important in terms of practice. Kerridge et al. cite Dr Michael Wooldridge, then Australian health minister, who said that ‘[we will] pay only for those operations, drugs and treatments that according to available evidence are proved to work’ (Kerridge et al., 1998: 1153). By implication, governments will only support those activities which can be shown to have an effect – and an effect which the government wants.
From a purely practical point of view, what is the evidence that research findings, even when expertly mediated through the Cochrane Collaboration, NICE, or other guideline systems, are – or, indeed, can be – directly applied to practice in the linear model implied by evidence-based practitioners? There is considerable evidence that they are not being applied directly as anticipated, which suggests that we need to think of the relationship between research and practice in more complex terms. In order to examine and explain these problems, a literature developed exploring what were known as the ‘barriers’ to utilising research. If we could just identify and remove those barriers, the argument went, all would be well. Grimshaw and Thomson argued that, ‘Despite the considerable resources devoted to biomedical science, a consistent finding from the literature is that the transfer of research findings into practice is a slow and haphazard process’ (Grimshaw & Thomson, 1998: 20). Grol and Wensing found the same thing:
One of the most consistent findings in health services research is the gap between best practice (as determined by scientific evidence), on the one hand, and actual clinical care, on the other.
(Grol & Wensing, 2004: S57)
These authors studied barriers to change and proposed that they occur at different levels: the nature of the innovation itself, the individual, the social context, the patient, the wider context – really, just about anything. In the UK, Gerrish (2003) explored some of the barriers to introducing research into nursing based on a study within a large acute hospital; she groups them into factors relating to the organisation, the way research is communicated, the quality of the research, and the nurse. Again, it seems difficult to identify anything which might not constitute a barrier. Clearly, some of these factors may include barriers to introducing any kind of change; healthcare organisations are very large and complex, and the healthcare sector is highly regulated and risk-averse. Others are specific to research-based knowledge, and Gerrish argues that the way in which research is conducted and the type of knowledge it generates may be important. The traditional model of EBP, as we have seen, assumes the superiority of acontextual, disembodied technological knowledge and a linear model of utilisation. Gerrish argues that other research models, such as the enlightenment model or action research, might have substantial value. However, the practitioners of implementation science have pursued the idea that barriers to implementation must be overcome and have generated a whole research domain dedicated to exploring not what ought to be done but rather how to ensure that it is done in practice. This work has become another ‘industry’ supporting healthcare, generating its own journals, conferences, and research units. The aim of these practitioners is to create an effective implementation infrastructure. This represents a substantial step beyond the work of the EBM pioneers, who used the language of promoting and disseminating research, assuming that all right-minded practitioners could and would alter their practice in response. Implementation science acknowledges the complex world in which practice takes place and seeks to investigate how programmes can be designed and presented such that they can be implemented in practice. Activity 1.5 explores barriers to implementing research evidence in health visiting.
There is a substantial constituency in nursing which has embraced the concept of EBP, and a supportive base of journals, professional bodies, and university units has been established. This might seem surprising in an occupation which has fought to defend the importance of qualitative research and does not have a substantial tradition of conducting RCTs or systematic reviews (important exceptions in the context of health visiting include Elkan et al. (2000), who systematically reviewed the evidence on the effectiveness of domiciliary health visiting, and Cowley et al. (2015)). Judith Parker, former director of the Victoria Centre for Evidence Based Nursing in Melbourne, provides an interesting perspective on why nursing should embrace EBP in an editorial in Nursing Inquiry (Parker, 2002), in which she feels she has to defend her personal support for EBP, not least because she has a reputation for engaging in research in a different epistemological tradition, which focuses on experience and narrative. She argues that its time has come as the result of a range of economic, political, and market imperatives. She draws attention to the way in which it helps society manage risk, reduce costs, and provide accountability. In addition, she argues that:
It provides investigative and justificatory tools to manoeuvre the morass of uncertainty in situations where decisions must be made without knowing the consequences and where many of the comforting routines of the past have fallen away.
(Parker, 2002: 140)
But other researchers have taken a somewhat different path in reconciling engagement with EBP with their value base. Rycroft-Malone et al. (2004), in an interesting study titled ‘What counts as evidence in evidence-based practice?’, suggest that nurses can reconceptualise EBP by greatly broadening the kinds of evidence which are embraced by the movement in order to make it both more acceptable and more useful. They explore the potential for using four types of evidence: that derived from research; clinical experience; the knowledge of patients, clients, and carers; and the local context and environment. The last is something of a ‘catch-all’ term and includes information from audit and performance, as well as patient narratives, organisational knowledge, local policies, and so on. The authors pose two challenges. First, whatever the source, for knowledge to count as evidence it needs to be examined and tested in some way. So, for example, ‘in order for an individual practitioner’s experience and knowledge to be considered credible as a source of evidence, it needs to be explicated, analysed and critiqued’ (Rycroft-Malone et al., 2004: 84). Second, they argue that we need to develop our collective understanding of how these various evidences are integrated to generate effective practice. It is important to note that this reconceptualising of acceptable evidence goes far beyond the work to expand the evidence base outlined by Kelly et al. (2010). While Kelly and colleagues are looking to see how other ‘sciences’ can be incorporated, Rycroft-Malone et al. (2004) are developing the concept of useful evidence as coming from outside traditional science.
In the next section, these themes are further explored through case studies of practice, showing real instances of how knowledge is generated and used by practitioners at all levels. However, before we move on, it may be helpful to note an important paper which defines the sources of knowledge currently used by nurses and illustrates some of the themes raised in the last two sections. Estabrooks et al. (2005) explored the sources of nurses’ knowledge through two major ethnographic studies in hospitals in Canada, finding that nurses categorise them into four broad groupings: social interactions, experiential knowledge, documentary sources, and a priori knowledge. Importantly, the category of social interactions dominates their findings. They report that when nurses have immediate and practical concerns, they will turn first to their peers, who can give both information and reassurance, as illustrated by one of their respondents: ‘If one of my colleagues says you know what, D, I have seen that happen time and time again…don’t worry about it, I will be reassured by that’ (Estabrooks et al., 2005: 464). The nurses have a hierarchy of knowledge, but it is not consistent with EBP:
The high regard for experience also caused nurses occasionally to reject advice from clinical nurse specialists, educators, and physicians when they believed that the advice was inconsistent with their own experiential knowledge. Also nurses sometimes rejected evidence-based patient care protocols in favour of those practices they consider effective based on experience.
(Estabrooks et al., 2005: 468)
Hopefully, this sets the scene for a discussion of how knowledge is managed in particular instances.
Managing knowledge and evidence in practice
Much of the debate in both EBM and EBP utilises an ‘ideal’ model of the linear movement of research findings into practice. But how is knowledge actually managed in practice? In this section, we examine four ‘case studies’ which look at how evidence is used for decision making in practical situations (although not all of them are defined as such by their authors). The first is at the national policy level, the second describes the development of local guidelines by GPs, the third looks at the use of protocols by nurses in a diabetic clinic and a cardiac medical unit, and the fourth looks at the practice level within primary care, mainly focusing on GPs and practice nurses.