

IMPLEMENTING EVIDENCE-BASED PRACTICE


LISA J. HOPP


INTRODUCTION


The use of evidence-based practice (EBP) has become the expected standard in healthcare; yet, in spite of decades of high-quality health care research and a growing evidence base, its impact at the point of care remains inconsistent. Fragmented care, health inequities, and regional variability in healthcare practices and outcomes, coupled with a recognition of the highest cost of care and mediocre outcomes, have created a demand for using evidence to improve health and healthcare (Dzau et al., 2017; see also Institute of Medicine [IOM], 2008). In the United States, health and healthcare disparities persist while spending and estimated waste both increase. In 2015, total spending on healthcare was estimated at $3.2 trillion, with 30% of it (roughly $1 trillion) committed to waste or excess, including unnecessary treatment, inefficiencies, missed prevention, and even fraud (Dzau et al., 2017). Clearly, this overall health picture persists despite the availability of high-quality research. In 2006, Graham et al. reported that in both the United States and the Netherlands, 30% to 40% of patients do not receive evidence-based care, and 20% to 25% of patients receive unneeded or potentially harmful care. Similarly, Sanchez et al. (2020) cited a near doubling in the use of statins in one area of Spain despite their limited value for primary prevention of cardiovascular disease in low-risk patients. Foy et al. (2015) emphasized that the lack of uptake of what we know from research produces research waste and an “invisible ceiling on the potential for research to enhance population outcomes” (p. 2).


In an effort to improve patient care, government bodies and individual organizations have focused time, attention, and resources on compiling and evaluating research findings, as shown by the increase in published systematic reviews. The Cochrane Database of Systematic Reviews (CDSR) alone has published 7,500 reviews with plain language summaries in 14 languages, and the organization has 11,000 members and 68,000 supporters from more than 130 countries (Cochrane, 2020). Other organizations, such as the Joanna Briggs Institute (JBI), contribute to the growing volume of systematic reviews through their own libraries and evolving review methodologies. The findings from these reviews on topics relevant to preventive, acute, and chronic healthcare have been used to develop behavioral interventions, evidence-based healthcare programs, and evidence-based guidelines and protocols. However, despite these efforts, use of EBPs at the point of care remains inconsistent (Campbell et al., 2015; Foy et al., 2015).


There is a need for focused research to identify effective strategies to increase the use of evidence-based programs, guidelines, and protocols and to determine how to incorporate the evidence into health systems with the goal of better outcomes and patient experiences at a better value (Whicher et al., 2018). Some questions that point to how to improve the way we use evidence for better health and healthcare are: What strategies are effective in increasing the use of EBP? Are these strategies effective in all settings (e.g., acute care, long-term care, school health, and primary care)? Who should provide the interventions, and through what channels? Are they effective with all end users (e.g., nurses, physicians, pharmacists, housekeeping staff)? Are they effective with different evidence-based healthcare practices (prescribing drugs, handwashing, fall prevention)? What strategies work to de-implement or undo ineffective practices? In summary, when implementing EBPs in an organization, we need to know which strategies to use, in which settings, by and with whom, and for which topics.


This chapter describes the field of implementation science, a relatively new area of study that is addressing these questions through research. Included are emerging definitions, an overview of promising models, and the current state of the science; the chapter concludes with suggestions for future research needed to move the field forward.


DEFINITION OF TERMS


Based on the National Institutes of Health program announcement, implementation is “the process of putting to use or integrating evidence-based interventions within a setting” (Rabin et al., 2008, p. 118). According to the editors of the periodical Implementation Science, implementation research is:


the scientific study of methods to promote the systematic uptake of evidence-based interventions into practice and policy and hence to improve health. In this context, it includes the study of influences on professional, patient and organizational behavior in healthcare, community or population contexts. (Foy et al., 2015, p. 2)


In the “about” section of the journal pages, the editors include “de-implementation,” the study of how to remove interventions of low or no benefit from routine practice, as part of implementation science (Implementation Science, 2020; Norton & Chambers, 2020). The field encompasses research that helps us understand how to move knowledge of evidence-based healthcare practices into routine use (Brownson et al., 2012), including research to (a) understand context variables that influence adoption of EBPs and (b) test the effectiveness of interventions to promote and sustain use of evidence-based healthcare practices. Implementation science then refers both to the systematic investigation of methods, interventions, and variables that influence adoption of evidence-based healthcare practices and to the organized body of knowledge gained through such research (Foy et al., 2015).


Implementation research is a young science, so standardized definitions of commonly used terms are still emerging (Canadian Institutes of Health Research [CIHR], 2020; Graham et al., 2006; McKibbon et al., 2010). This is evidenced by differing definitions and the interchanging of terms that, in fact, may represent different concepts to different people. Adding to the confusion, terminology may vary depending on the country in which the research was conducted. Graham et al. (2006) reported identifying 29 terms in nine countries that refer to some aspect of translating research findings into practice. For example, researchers in Canada may use the terms research utilization, knowledge-to-action, integrated knowledge translation, knowledge transfer, or knowledge translation interchangeably, whereas researchers in the United States, the United Kingdom, and Europe may be more likely to use the terms dissemination research, implementation, or research translation to express similar concepts (Canadian Institutes of Health Research, 2020; Colquhoun et al., 2014; U.S. Department of Health and Human Services, 2018). McKibbon et al. (2010) collected terms across a broad but purposeful set of literature sources during 1 year (2006). They found more than 100 terms that they regarded as equivalent or closely related to knowledge translation. They further analyzed the terms to try to find words that more reliably differentiated knowledge translation literature from nonknowledge translation literature. Table 16.1 provides examples of currently used definitions of common terms describing concepts related to implementation science. Although these definitions provide an explanation of terms used in articles about implementation science, terms such as implementation, dissemination, research translation, and knowledge transfer may be used interchangeably, and the reader must determine the exact meaning from the content of the article. As implementation science progresses, there is increased emphasis on the dynamic nature of knowledge translation and on the recognition that it involves reciprocal, bidirectional interaction among knowledge creators, translators, and users.

TABLE 16.1 Definitions Associated With Implementing Evidence in Practice

Diffusion: “The process by which an innovation is communicated through certain channels over time among members of a social system” (Rogers, 2003).

Dissemination: “An active approach of spreading evidence-based interventions to the target audience via determined channels using planned strategies” (Rabin et al., 2008).

Dissemination research: “The scientific study of targeted distribution of information and intervention materials to a specific public health or clinical practice audience. The intent is to understand how best to spread and sustain knowledge and the associated evidence-based interventions” (U.S. Department of Health and Human Services, 2018, background).

Implementation research: “The scientific study of methods to promote the systematic uptake of clinical research findings and other evidence-based practices into routine practice” in order to improve the quality and effectiveness of health care (Graham et al., 2006).

Implementation science: “The investigation of methods, interventions, and variables that influence adoption of evidence-based health care practices by individuals and organizations to improve clinical and operational decision making and includes testing the effectiveness of interventions to promote and sustain use of evidence-based healthcare practices” (Titler, Everett, & Adams, 2007); also defined as “all aspects of research relevant to the scientific study of methods to promote the uptake of research findings into routine health care in both clinical and policy contexts” (Graham et al., 2006).

Integrated knowledge translation: “A way of doing research that involves decision makers/knowledge-users—usually as members of the research team—in all stages of the research process” (Canadian Institutes of Health Research, 2020).

Knowledge broker: “An intermediary who facilitates and fosters the interactive process between producers (i.e., researchers) and users (i.e., practitioners, policymakers) of knowledge through a broad range of activities” (Rabin & Brownson, 2012).

Knowledge to action: A broad concept that encompasses both the transfer of knowledge and the use of knowledge by practitioners, policy makers, patients, and the public, including use of knowledge in practice and/or the decision-making process; often used interchangeably with knowledge transfer or knowledge translation (Straus et al., 2013).

Knowledge transfer: “Knowledge transfer is used to mean the process of getting knowledge used by stakeholders”; “the traditional view of ‘knowledge transfer’ is a unidirectional flow of knowledge from researchers to users” (Graham et al., 2006).

Knowledge translation: “Knowledge translation is the exchange, synthesis and ethically sound application of knowledge within a complex system of interactions among researchers and users to accelerate the capture of the benefits of research for Canadians through improved health, more effective services and products, and a strengthened health care system” (Canadian Institutes of Health Research, 2016).

Knowledge user: “An individual who is likely to be able to use research results to make informed decisions about health policies, programs and/or practices. A knowledge user’s level of engagement in the research process may vary in intensity and complexity depending on the nature of the research and on his/her information needs. A knowledge user can be, but is not limited to, a practitioner, a policy maker, an educator, a decision maker, a health care administrator, a community leader or an individual in a health charity, patient group, private sector organization or media outlet” (Canadian Institutes of Health Research, 2020).

Research utilization: “That process by which specific research-based knowledge (science) is implemented in practice” (Estabrooks et al., 2003).

Translation research: The process of turning observations in the laboratory, clinic, and community into interventions that improve the health of individuals and the public—from diagnostics and therapeutics to medical procedures and behavioral changes (National Center for Advancing Translational Sciences, 2018).


The interchange of terms leads to confusion about how implementation research fits into the broader picture of the conduct and use of research. One way to understand this relationship is to compare implementation research with the commonly used scientific terms for the steps of scientific discovery: basic research, methods development, efficacy trials, effectiveness trials, and dissemination trials (Sussman et al., 2006). For example, the term translation denotes the idea of moving something from one form to another. The National Institutes of Health (NIH) used this term to describe the process of moving basic research knowledge that may be directly or indirectly relevant to health behavior changes into a form that eventually has impact on patient outcomes (National Center for Advancing Translational Sciences [NCATS], 2018). The NIH built a “roadmap” that identified two gaps in translation: the first between the discovery of biochemical and physiological mechanisms at the bench and testing in humans, and the second between that clinical research and the development of better diagnostic and treatment solutions in practice (Zerhouni, 2003). Westfall et al. (2007) suggested that the gap between bench and human trials be identified as “T1” and that the research designs that best fit discovery would be phase 1 and phase 2 clinical trials as well as case series. They named the gap between human clinical research and clinical practice “T2” and recommended that this type of translational research go beyond traditional effectiveness trials (phase 3) to more pragmatic approaches or “practice-based research.” They further identified T3 as the next gap, requiring effective dissemination and uptake through synthesis, guideline development, and implementation research in order to achieve the goal of moving knowledge into routine practices, systems, and policies. Finally, T4 research has emerged to evaluate the impact of implementing discovery in populations (Brownson et al., 2012). Since this ground-breaking work, the movement and translation of knowledge from research has come to be understood as highly dynamic, with multiple opportunities for data gathering and analysis, dissemination, and interactions among phases (IOM, 2013; NCATS, 2018). As we learn more about the complexities of bridging the “know–do” gap, undoubtedly the science and its underpinning language and theory will grow. Rabin and Brownson (2012) provided a summary of terms used in what they call “dissemination and implementation” activities as one reference guide to emerging terminology. Building a common taxonomy of terms in implementation science is of primary importance to this field and must involve input from a variety of stakeholders and researchers from various disciplines (e.g., healthcare, organizational science, psychology, and health services research; IOM, 2007b).


IMPLEMENTATION MODELS


EBP models began to emerge in the nursing literature as the focus shifted from research utilization to broader concepts. These earlier models guided the overall process of EBP (Rosswurm & Larrabee, 1999; Stetler, 2001; Titler et al., 2001). They include implementation as a concept, and the process they describe reflects a problem-solving approach much like the nursing process: identifying clinical problems, collecting and analyzing evidence, making the decision to use the evidence to change practice, and evaluating the change after implementation. But little detail or guidance was provided regarding the actual process, the “how,” of implementation. Users of these models were told to simply “implement,” a directive that fails to take into account the complexity of the process of implementation. Implementing and sustaining change is a complex and multifaceted process, requiring attention to both individual and organizational factors (Grimshaw et al., 2012).


Many experts believe that using a model specifically focused on implementation provides a framework or mental map for identifying factors that may be pertinent in different settings or circumstances, and allows for testing and comparing strategies tailored to individual settings. The hope is that this will allow for some generalization of results (Esmail et al., 2020; Kirk et al., 2016). Although no single model may apply to all situations, an effective model must be sufficiently specific to guide both implementation research and implementation at the point of care but general enough to cross various populations. Birken et al. (2017) surveyed implementation scientists to determine what factors influenced their choice of theory to guide their implementation studies. Two hundred twenty-three implementation scientists from 12 countries responded to the survey. The investigators found that this group used 100 different theories. The factors that most influenced their choice were: analytic level (i.e., individual, organization, system); logical consistency and plausibility (e.g., relationships that reflected face validity); empirical support; and description of a change process. They concluded that implementation scientists vary greatly in how they select a theoretical approach and that investigators often use theories superficially, not at all, or even misuse them. Therefore, they recommended that implementation scientists report their reasons for selecting a theory more transparently. In 2018, Birken et al. developed a tool called T-CaST (Implementation Theory Comparison and Selection Tool) to guide selection using 16 specific criteria in four domains: applicability, usability, testability, and acceptability (online access is available at impsci.tracs.unc.edu/tcast).


Several attempts have been made to search for and organize the multitude of theories, models, and frameworks that are used to promote changes in practice or behavior. Just as terms for implementation science are often used interchangeably even though they technically differ, the terms conceptual framework/model and theoretical framework/model are often interchanged, although they differ in their level of abstraction (Nilsen, 2015; Titler, 2018). In this discussion, the term model will be used as a general term, unless the model is specifically identified as a theory or conceptual framework by its creator. A model, then, for our purposes, is a set of general concepts and propositions that are integrated into a meaningful configuration to represent how a particular theorist views the phenomena of interest, in this case the implementation of evidence into practice (Fawcett, 2005).


Although an extensive review of all models suggested for possible use in implementation science is beyond the scope of this chapter, several promising models are discussed in some detail. For a summary of additional models, see the review by Grol et al. (2007), who examined a wide variety of models relevant to quality improvement and the implementation of change in healthcare. Bucknall and Rycroft-Malone’s (2010) monograph focused on models highly relevant to nursing practice. Tabak et al. (2012) reviewed 61 different models, and most recently, Nilsen (2015) analyzed and categorized existing implementation theories, models, and frameworks according to their purposes and aims and proposed a taxonomy to allow analysis and differentiation of the models, frameworks, and theories relevant to implementation science.


In this chapter, one theory and two frameworks will be reviewed. The first, Rogers’s Diffusion of Innovations Theory, is ubiquitous across disciplines and incorporated in many other models and frameworks. The second, the Promoting Action on Research Implementation in Health Services (PARIHS) framework and its successor, the Integrated PARIHS (i-PARIHS), was selected because of its usability and ease of adaptation to nursing practice. Finally, the Consolidated Framework for Implementation Research (CFIR) was chosen because it is a meta-synthesis of many models and structures the what, where, who, and how of implementation while pointing its users to specific implementation strategies. All three of these approaches recognize that implementation is complex, nonlinear, and ultimately messy.


Diffusion of Innovations


Probably the most well-known and frequently used theory for guiding change in practice is Everett Rogers’s (2003) Diffusion of Innovations Theory. In the theory, he proposed that the rate of adoption of an innovation is influenced by the nature of the innovation, the manner in which the innovation is communicated, and the characteristics of the users and the social system into which the innovation is introduced. Rogers’s theory has undergone empirical testing in a variety of disciplines (Barta, 1995; Feldman & McDonald, 2004; Greenhalgh et al., 2004; Lia-Hoagberg et al., 1999; Michel & Sneed, 1995; Rogers, 2003; Wiecha et al., 2004).


According to Rogers (2003), the term innovation describes any idea or practice that is perceived as new by an individual or organization; evidence-based healthcare practices are considered innovations according to this theory. Rogers acknowledges the complex, nonlinear interrelationships among organizational and individual factors as people move through five stages when adopting an innovation: knowledge/awareness, persuasion, decision, implementation, and evaluation. The theory also focuses on characteristics of the innovation that influence the probability of change: relative advantage, compatibility, complexity, trialability, and observability. These elements are incorporated in many other models and frameworks, including i-PARIHS and the CFIR, among many others.


The PARIHS/i-PARIHS Framework


The PARIHS framework is a promising model proposed to help practitioners understand and guide the implementation process. The originating authors began to develop the framework inductively in 1998 as a result of work with clinicians to improve practice. Subsequently, the framework has undergone concept analysis and has been used as a guide for structuring research and implementation projects at the point of care (Ellis et al., 2005; Wallin et al., 2005, 2006), for evaluation (Kitson et al., 2008), and for further development of the role of facilitation (Harvey & Kitson, 2015). The framework proposes that implementation is a function of the relationship among the nature and strength of the evidence, the contextual factors of the setting, and the method of facilitation used to introduce the change. Kitson and colleagues suggested that the model may be best used as part of a two-stage process: first, the practitioner uses the model to perform a preliminary evaluation of the elements of the evidence and the context, and, second, the practitioner uses the data from that analysis to determine the best method to facilitate change. In 2008, Kitson et al. evaluated the progress of the framework’s development and recognized that while evidence supported its conceptual integrity and its face and content validity, further work remained. Subsequently, Helfrich et al. (2010) synthesized 24 published papers; they identified that investigators most commonly used the framework as a heuristic guide and recommended more prospective implementation studies that would clarify the relationships among the major subconcepts and implementation strategies. Since their synthesis, 40 papers on the model have been published, but few included prospective application.


Based on this evaluative work, Harvey and Kitson (2015) developed a guide for facilitation that revisits the model in light of the prior evaluative findings. They call the revised framework “i-PARIHS,” with a shift in focus to the innovation and its characteristics (see Rogers’s attributes), where available research evidence informs the innovation. They expanded the notion of context to include inner (the immediate setting, such as the unit or clinic) and outer (the wider health system, including the policies and regulatory systems) settings. The revision also refocuses on facilitation as the “active ingredient” rather than as just one of the elements of implementation. In addition, successful implementation is conceptualized more broadly to include achievement of the goals, the uptake, and the embedding of the innovation in practice; stakeholders are engaged and motivated to own the innovation, and variation across settings is minimized (Harvey & Kitson, 2015). Thus, they revised the formula to SI = Fac^n(I + R + C), where SI is successful implementation, Fac^n is facilitation, I is innovation, R is recipients, and C is context. They include Rogers’s characteristics of the innovation as part of what needs to be considered in implementation. They added the construct of recipients, the targets of the implementation, after feedback from the model’s users and reviewers, and defined recipient characteristics such as motivation, values and beliefs, skills, and knowledge. Finally, the active ingredient, facilitation, includes both role and strategies. See Harvey and Kitson (2015) for a complete explanation of the framework as well as application cases. Since the updated framework’s publication in 2015, there have been 68 citations in the database Scopus identifying the use of PARIHS or i-PARIHS, but most referred to the original model.
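For readers who want the relationship in display form, the following is a minimal LaTeX rendering of the formula exactly as defined above; nothing is added beyond notation, and the superscript n follows Harvey and Kitson's published form, signaling facilitation's role as the active ingredient:

% i-PARIHS (Harvey & Kitson, 2015): successful implementation (SI) as
% facilitation (Fac^n) acting on innovation (I), recipients (R), and context (C)
\[
  SI = \mathrm{Fac}^{n}\,(I + R + C)
\]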


Strengths of the model include an explicit recognition that implementation is nonlinear and complex; the framework provides a pragmatic map to make sense of the inherent complexity and the design of tools and methods to diagnose and design interventions. In addition, users can flex the facilitation process to the level of evidence and the level of context support available, making it adaptable to various situations.


Consolidated Framework for Implementation Research (CFIR)


In 2009, Damschroder et al. recognized that the plethora of models, frameworks, and theories was potentially hindering the development of a coherent approach to conducting implementation research and using the resulting knowledge. They conducted a broad review of theories used to guide implementation. The outcome was a meta-theoretical synthesis of 19 commonly cited implementation models that yielded a structure of constructs drawn from existing theories. While they organized these constructs into a schema, they did not attempt to depict interrelationships or generate hypotheses. Instead, they aimed to produce a common language and a comprehensive set of constructs that researchers can use as “building blocks” to generate testable models and hypotheses within this common structure.


The CFIR is composed of five domains that include the intervention, the inner and outer settings, the people involved, and the process of implementation (Damschroder et al., 2009). Taking some liberty to simplify, the framework is organized by the “what” (the intervention and its characteristics), “where” (the inner and outer settings, or context), “who” (the individuals involved and their characteristics), and the “how” (the process of implementation). Within these domains, they further defined constructs (Table 16.2). (The definitions of each construct can be found in the electronic supplements to Damschroder et al.’s original article.)


Seven years after the CFIR was published, Kirk et al. (2016) conducted a systematic review of studies that used the CFIR to determine how it has been used and its impact on implementation science. They found that investigators cited the framework in 429 publications, but they excluded nearly 83% of them because the researchers did not use the model in a meaningful manner (i.e., incorporating the framework into the methods and the reporting of findings). The remaining 26 studies used the framework to varying degrees of depth, with the majority using it to guide data analysis.
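To make concrete what “meaningful use” of the CFIR might look like in a study’s methods, the following sketch, in Python, shows one hypothetical way a team could encode the framework’s domains and constructs as a codebook and validate qualitative codes against it. The construct lists follow Table 16.2; the function name, excerpt, and tagging workflow are illustrative assumptions, not a published instrument.

# A minimal, hypothetical sketch of using the CFIR as a qualitative codebook.
# Domain and construct names follow Damschroder et al. (2009; see Table 16.2);
# the excerpt, function, and workflow are invented for illustration only.

CFIR_CODEBOOK: dict[str, list[str]] = {
    "intervention characteristics": [
        "relative advantage", "adaptability", "trialability", "complexity", "cost",
    ],
    "outer setting": [
        "patient needs and resources", "cosmopolitanism",
        "peer pressure", "external policy and incentives",
    ],
    "inner setting": [
        "structural characteristics", "networks and communication",
        "culture", "implementation climate", "readiness for implementation",
    ],
    "characteristics of the individual": [
        "knowledge and beliefs about the intervention",
        "self-efficacy", "individual stage of change",
    ],
    "process": ["planning", "engaging", "executing", "reflecting and evaluating"],
}

def tag_excerpt(excerpt: str, codes: list[tuple[str, str]]) -> dict:
    """Attach analyst-assigned (domain, construct) codes to an interview
    excerpt, validating each code against the CFIR codebook."""
    for domain, construct in codes:
        if construct not in CFIR_CODEBOOK.get(domain, []):
            raise ValueError(f"{construct!r} is not a construct of {domain!r}")
    return {"excerpt": excerpt, "codes": codes}

# Hypothetical usage: a staff nurse's comment coded to the inner setting.
record = tag_excerpt(
    "Leaders never mention the new fall-prevention protocol at unit huddles.",
    [("inner setting", "readiness for implementation")],
)
print(record["codes"])  # [('inner setting', 'readiness for implementation')]

Keeping the codebook as plain data makes it easy to audit which domains a dataset covers, consistent with the most common use Kirk et al. (2016) observed: guiding data analysis.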


IMPLEMENTATION STRATEGIES


When implementing EBP, as previously stated, it is helpful to use a model to provide structure and guidance in targeting interventions to increase the rate of adoption. Fundamentally, the questions that any implementation model or framework must address are: What is to be implemented? To and by whom should the knowledge be translated? Where should it be translated? How should it be translated? To what effect? (Damschroder et al., 2009; Grimshaw et al., 2012; Harvey & Kitson, 2015). As recently as 2011, the IOM (now the National Academy of Medicine) recommended using multifaceted strategies (IOM, 2011). However, Squires et al. (2014) conducted an overview of 25 systematic reviews that compared single-component with multifaceted (two or more components) strategies and failed to find that multifaceted strategies were superior to single strategies. Nonetheless, multiple factors (e.g., the context, the individuals, the intervention, and the types of barriers and facilitators) may necessitate selecting more than one strategy.

TABLE 16.2 Consolidated Framework for Implementation Research Elements

Intervention characteristics
• Intervention source, evidence strength, and quality
• Relative advantage
• Adaptability
• Trialability
• Complexity
• Design quality and packaging
• Cost

Outer setting
• Patient needs and resources
• Cosmopolitanism
• Peer pressure
• External policy and incentives

Inner setting
• Structural characteristics
• Networks and communication
• Culture
• Implementation climate
  ° Tension for change
  ° Compatibility
  ° Relative priority
  ° Organizational incentives and rewards
  ° Goals and feedback
  ° Learning climate
• Readiness for implementation
  ° Leadership engagement
  ° Available resources
  ° Access to knowledge and information

Characteristics of the individual
• Knowledge and beliefs about the intervention
• Self-efficacy
• Individual stage of change
• Individual identification with the organization
• Other personal attributes

Process
• Planning
• Engaging
  ° Opinion leaders
  ° Formally appointed internal implementation leaders
  ° Champions
  ° External change agents
• Executing
• Reflecting and evaluating

Source: Adapted from Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4, 50. https://doi.org/10.1186/1748-5908-4-50


Innovation: The “What”


If an innovation is to be “evidence-based,” it should be based on the best available evidence. “Available” implies that an exhaustive search has revealed all the evidence that exists, and “best” implies that someone has appraised the evidence and judiciously selected it as the most valid and reliable information. Grimshaw et al. (2012) wrote, “the basic unit of knowledge translation should be up-to-date systematic reviews or other syntheses of global evidence” (p. 3). They based their argument on the work of Ioannidis and colleagues, who found that studies of particular treatments or associations showed larger, sometimes extravagant, effect sizes in initial trials that diminished with replication (Grimshaw et al., 2012).


Even when ideal evidence exists, the nature or characteristics of the EBP influence the rate of adoption. However, the attributes of an evidence-based healthcare practice as perceived by users and stakeholders are not stable; they change depending on the interaction between the users and the context of practice (Damschroder et al., 2009; Harvey & Kitson, 2015). For example, an identical guideline for improving pain management may be viewed as pertinent and uncomplicated by users in one setting (e.g., labor and delivery) but as less of a priority and difficult to implement by staff in another unit (e.g., geropsychiatry). Although a positive perception of the EBP alone is not sufficient to ensure adoption, it is important that the information be perceived as pertinent and presented in a way that is credible and easy to understand (Greenhalgh et al., 2004, 2005; Harvey & Kitson, 2015). Some characteristics of the innovation known to influence the rate of adoption are the complexity or simplicity of the evidence-based healthcare practice, its credibility and pertinence to the user, and the ease or difficulty of assimilating the change into existing behavior (Harvey & Kitson, 2015; Rogers, 2003). However, although these characteristics are important, the adopter’s decision to use and sustain an EBP is complex. This is evident in the continued failure to achieve consistently high rates of adherence to handwashing recommendations in spite of the relative simplicity of the process and the knowledge of its pertinence to optimal patient outcomes.

EBP guidelines are one tested method of presenting information (Flodgren et al., 2016; Grimshaw et al., 2004; Guihan et al., 2004; IOM, 2011). EBP guidelines are designed to assimilate large, complex amounts of research information into a usable format (Grimshaw et al., 2004; Lia-Hoagberg et al., 1999) and are ideally based on a synthesis of systematic reviews (Grimshaw et al., 2012; IOM, 2008). Appropriately designed guidelines are adaptable and easy to assimilate into the local setting. Empirically tested methods of adapting guidelines and protocols to the local practice setting include practice prompts, quick reference guides, decision-making algorithms, computer-based decision support systems, and patient-mediated interventions (Eccles & Grimshaw, 2004; Feldman et al., 2005; Fønhus et al., 2018; Grimshaw, Eccles, Thomas, et al., 2006; Wensing et al., 2006). Flodgren et al. (2016) conducted a systematic review of the effect of tools developed and disseminated to accompany a practice guideline. While this science is young and the studies were too heterogeneous for meta-analysis, tools such as paper-based education targeting barriers to uptake, reminders, and order forms improved adherence to the guidelines.


Communication: To Whom and By Whom and How?


The method and the channels used to communicate with potential users about an innovation influence the speed and extent of adoption of the innovation (Manojlovich et al., 2015; Rogers, 2003). Communication channels include both formal methods of communication that are established in the hierarchical system and informal communication networks that occur spontaneously throughout the system. Communication networks are interconnected individuals who are linked by patterned flows of information (Rogers, 2003). Manojlovich et al. (2015) argue that, in the context of implementation, communication should be viewed as transformational and able to cause change rather than simply as transactional.


Mass media communication methods, including television, radio, newspapers, pamphlets, posters, and leaflets, are effective in raising awareness of public health issues, such as the need for immunizations and screenings, at the population or community level (Bala et al., 2017; Carson-Chahhoud et al., 2017; see also Grilli et al., 2002; Rogers, 2003).
