Fig. 18.1
EPIS framework illustrating outer and inner context, linkages, EBP fit, and intervention developer
Fig. 18.2
Exploration, Preparation, Implementation, Sustainment (EPIS) Framework illustrating the four implementation phases and outer context and inner context implementation considerations
In order to illuminate this complexity, we provide the following hypothetical example. In the exploration phase, a service system, an organization (e.g., hospital, clinic, community-based provider), or an individual considers what factors might be important in implementing a practice. For a new empirically supported and approved medication, these might include regulatory and reimbursement constraints (e.g., FDA approval, health plan formularies), training and support for physicians and pharmacists in appropriate prescribing, and potential drug interactions. In the preparation phase, changes to formularies would be made, and electronic health records would need to be amended to allow documentation of indications and prescribing. Plans would need to be made for physician/pharmacist training, including scheduling, procuring space, and follow-up coaching and support, if needed. In the implementation phase, training begins, along with assuring that the medication is now included in formularies and available for patients to obtain from pharmacies. In the sustainment phase, ongoing monitoring would involve oversight of quality of care, appropriateness of prescribing practices, patient adherence, and patient outcomes; this information (together with new studies or clinical experience) would be used to understand and increase the likelihood of positive outcomes. While this example is oversimplified, it illustrates that there are a number of issues to be considered in order to facilitate effective implementation of an EBP in each EPIS phase.
Implementation Outcomes
Another important consideration is that of “implementation outcomes,” which differ from clinical outcomes. Implementation outcomes are unique and distinct from either service system outcomes or clinical treatment outcomes and have been defined as the “…effects of deliberate and purposive actions to implement new treatments, practices, and services” [38]. Implementation outcomes have multiple functions: they serve as indicators of implementation success, represent implementation processes (e.g., mediators/moderators of change), and can act as intermediate outcomes in treatment effectiveness and quality-of-care research [38]. Implementation outcomes may include factors such as acceptability, feasibility, reach, fidelity, and the costs of implementation, including those above and beyond the cost of the clinical intervention [39]. The costs of implementation are often not considered, which can, in and of itself, limit implementation effectiveness [38, 40]. Figure 18.3 illustrates this distinction, noting implementation outcomes including constructs such as feasibility, organization or provider adoption, penetration (i.e., reach) to providers or patients, and costs. These are distinct from Institute of Medicine standards of care (e.g., safety, patient-centeredness) and from patient outcomes (e.g., functioning, symptom reduction). Because a given EBP is assumed to be less effective if it is not well implemented, implementation outcomes are important precursors to attaining changes in clinical practice. It is also critically important to distinguish implementation outcomes from other outcomes in hybrid design studies that examine implementation along with clinical effectiveness or efficacy within the same study [41].
Fig. 18.3
Implementation outcomes as distinct from service outcomes and client outcomes
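As a rough illustration of how some of these implementation outcomes might be quantified in routine monitoring, the sketch below computes adoption and penetration (reach) from hypothetical service records. The record fields, counts, and the simple proportions used here are illustrative assumptions rather than an established measurement protocol.

```python
from dataclasses import dataclass

@dataclass
class ClinicRecord:
    """Hypothetical per-clinic service data for one reporting period."""
    eligible_providers: int      # providers who could deliver the EBP
    trained_providers: int       # providers trained in the EBP
    delivering_providers: int    # providers who actually delivered it
    eligible_patients: int       # patients for whom the EBP is indicated
    patients_reached: int        # patients who actually received it

def adoption(record: ClinicRecord) -> float:
    """Proportion of eligible providers who have taken up the EBP."""
    return record.delivering_providers / record.eligible_providers

def penetration(record: ClinicRecord) -> float:
    """Reach of the EBP among patients for whom it is indicated."""
    return record.patients_reached / record.eligible_patients

# Illustrative values only.
clinic = ClinicRecord(eligible_providers=20, trained_providers=15,
                      delivering_providers=12,
                      eligible_patients=300, patients_reached=96)
print(f"Adoption:    {adoption(clinic):.0%}")    # 60%
print(f"Penetration: {penetration(clinic):.0%}") # 32%
```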
Consideration of Organizational Context in Implementation
There are a number of common organizational processes likely to be associated with successful implementation [28, 30]. There may be a tendency to focus on processes directly involved in healthcare, including the care recipients (e.g., patients, clients) and care providers (e.g., doctors, nurses, clinicians). However, it is important to consider that healthcare and allied health services (e.g., mental health, social care) are delivered to the public within the larger contexts of work groups, healthcare organizations, wider local or regional health economies, and public health systems of various sizes and scopes [42]. Organizational factors involving stakeholders at multiple levels impact successful organizational change, such as implementation [29, 43, 44]. In fact, it is becoming increasingly clear that organizational and cultural factors are likely to have more impact on successful implementation of EBP than individual factors (e.g., clinician age or degree) [45, 46]. Characteristics of implementation settings (e.g., systems, organizations) are critical for effective adoption and use of EBPs, and it is often the leaders of systems or organizations who are responsible for developing a context that supports a strategic initiative such as EBP implementation [47].
It follows that evaluating the context within which an EBP will be introduced and embedded is becoming increasingly important. Numerous current efforts focus on developing measures of implementation context to better inform, assess, and facilitate successful EBP implementation. For example, a new measure of implementation leadership identified four distinct leader attributes likely to be important in the implementation process [48]: the leader being knowledgeable about the new practice, supportive of team members in implementing the practice, proactive in solving implementation issues as they arise, and perseverant through the ups and downs of the implementation process [49]. Other measures capture the organizational climate that facilitates EBP implementation and sustainment; dimensions include providing educational supports and training for EBP, recognition and rewards for excellence in EBP delivery, and selecting team members who are adaptable and have experience with EBPs [50]. Another, more general measure of implementation climate assesses the degree to which use of the new practice is expected, supported, and rewarded by the organization [51]. Related to these efforts, there is also interest in, and measures for, assessing organizational readiness for change [52].
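Measures such as those described above are typically brief rating scales with subscales corresponding to the leader attributes or climate dimensions. The minimal sketch below shows one way such subscale and total scores might be computed; the item labels, item-to-subscale mapping, and 0–4 response range are assumptions for illustration and do not reproduce the scoring rules of any published instrument.

```python
from statistics import mean

# Hypothetical item-to-subscale mapping for a four-attribute
# implementation leadership measure (illustrative only).
SUBSCALES = {
    "knowledgeable": ["k1", "k2", "k3"],
    "supportive":    ["s1", "s2", "s3"],
    "proactive":     ["p1", "p2", "p3"],
    "perseverant":   ["v1", "v2", "v3"],
}

def score_leadership(responses: dict[str, int]) -> dict[str, float]:
    """Average 0-4 ratings within each subscale, then take the overall mean."""
    scores = {name: mean(responses[item] for item in items)
              for name, items in SUBSCALES.items()}
    scores["overall"] = mean(scores.values())
    return scores

# One respondent's (made-up) ratings of their team leader.
example = {"k1": 3, "k2": 4, "k3": 3, "s1": 2, "s2": 3, "s3": 3,
           "p1": 4, "p2": 3, "p3": 4, "v1": 3, "v2": 3, "v3": 2}
print(score_leadership(example))
```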
Implementation leadership. Connecting these issues, Aarons and colleagues identify how leaders may facilitate the development of organizational climates that support EBP implementation while enumerating important components of the implementation process [28, 30, 32]. An example that highlights the literature on organizational climate and implementation climate, and outlines approaches to leadership that can support the development of such climates, involves the implementation of minimally invasive approaches by cardiac surgery teams [53]. Amy Edmondson and colleagues conducted a study of organizational, leadership, and team processes among such teams in four different hospitals. They found that leaders who motivated their teams and minimized power differences created a climate of psychological safety that enabled effective implementation and sustainment of minimally invasive cardiac surgical procedures [54, 55]. This work is consistent with previous work in business settings demonstrating that both management support and organizational context are important in the implementation process [44]. Thus, consistent with the generalizability of findings in organizational research, such organizational and leadership approaches to implementation are likely to generalize across health and allied healthcare settings.
Given evidence from observational studies of leadership, novel research is being conducted in the development and testing of implementation strategies to improve leader knowledge, skills, and effectiveness for implementation and sustainment of new innovations. One such approach, the Leadership and Organizational Change for Implementation (LOCI) intervention, combines the training of team leaders in transformational leadership and implementation leadership, while also working with organizations to provide appropriate organizational supports to develop a positive organizational and team climate for implementation [56, 57].
One of the most well-known and heavily researched approaches to leadership is the full-range leadership model, most closely aligned with transformational leadership. This model captures leadership behaviors across the dimensions of individualized consideration (understanding the needs of individual team members), intellectual stimulation (engaging team members in problem solving and innovation), inspirational motivation (creating a compelling vision for others to follow), and idealized influence (serving as a role model) [58]. Research has demonstrated that transformational leadership is associated with increased job satisfaction [59, 60]; organizational commitment [61]; and performance for leaders [62, 63], teams [64, 65], and employees [66]. Of specific relevance to this chapter, transformational leadership has been shown to be particularly important for ameliorating the negative impact of organizational stress on work group climate during large-scale behavioral health reform [67] and for supporting positive attitudes toward EBP in statewide system change efforts [68]. Transformational leadership is also associated with successful implementation efforts [69, 70]. New work on implementation leadership has identified four leader attributes specific to implementation: knowledgeable leadership (having expertise about the new innovation to be implemented), supportive leadership (supporting staff in their implementation efforts), proactive leadership (anticipating and solving problems during the implementation process), and perseverant leadership (persevering through the ups and downs of the implementation process) [49]. For implementation to be successful, team leaders must be proactive and perseverant in communicating their knowledge of and support for EBP while managing resistance to change and communicating the importance of the change being implemented [49, 71–74].
Although much of the literature on leadership has focused on the organizational and work group levels, healthcare organizations can be strongly influenced by the decisions and policies made or instantiated by leaders at the system level. Decisions and policies at the system level can impact funding, the disbursement of resources at state and local levels, and policy making to support EBP implementation [75]. Leaders in the Veterans Health Administration (VHA) developed the Uniform Mental Health Services Handbook [76], which includes a number of mandates that help create the capacity for medical centers and outpatient clinics to deliver EBPs. The handbook specifies that each VA medical center have an EBP implementation coordinator responsible for educating providers and upper-level management about EBP, encouraging providers to attend EBP trainings, and working with leaders at the organization and work group levels, and with providers, to increase delivery of EBPs in clinical care. Consistent with the EPIS multilevel framework, this approach recognizes that leaders in the outer context (system) can develop policies that impact the inner context (e.g., hospitals, clinics, workgroups, providers).
Leaders at the organization level (e.g., CEOs, presidents, administrators) are often responsible for decisions regarding implementation of new practices and organizational strategies [72, 77]. This level of leadership is often involved in securing funding, which may be related to the decision to implement new practices, as funders are increasingly requiring the use of EBPs [8, 78–81]. However, congruence or alignment across levels is an important consideration. The challenge for executive leaders is to involve other levels of leadership and staff in order to facilitate congruence of mission and process. If this is not addressed, work group leaders (i.e., those who supervise direct service staff) may lack the buy-in, organizational support, or understanding of the rationale behind the decision to implement EBP that they need in order to communicate that rationale to their teams [44]. Furthermore, although strategic decisions about implementing EBPs are commonly made by upper-level leaders, the effectiveness of implementation efforts is driven by first-level leaders and the providers who deliver the actual services [82–84]. Consequently, the implementation process can be better facilitated if led by “first-level” or team leaders [85].
Although the majority of leadership research has focused on individual leaders, studies have demonstrated the importance of alignment across multiple levels of leadership [72, 86, 87]. Chreim and colleagues [82] examined system-level factors that influenced implementation processes during the transformation of healthcare service delivery to a new model within one Canadian province. They found that implementation was supported through agreement, participation, commitment, and congruence of support at all levels of leadership. At the work group level, the degree to which providers agree about the strategy or change being implemented predicts implementation success [88]. Similarly, the aggregate of multiple levels of leadership predicts organizational outcomes as a function of strategic implementation efforts [72]. This interplay between different leadership levels was identified as a key factor in the implementation of a multicenter clinical quality improvement intervention across multiple hospital medical wards in the UK [89]. The intervention consisted of team-based clinical safety briefings designed to embed proactive risk surveillance within routine, daily ward work. Over a 20-month implementation and evaluation period, the research team reported a shift in focus from frontline healthcare providers to middle- and higher-level organizational management structures, as these emerged as critical determinants of implementation effectiveness and, in turn, of the intervention's clinical effectiveness on care processes and patient outcomes. We propose that such congruence and alignment are important because they facilitate a positive implementation climate among stakeholders [47].
Implementation of Surgical Checklists
Many, if not all, of the elements of implementation research and practice outlined earlier are illustrated in the recent trajectory of checklists in hospital-based surgical care. The concept of avoiding or reducing postoperative complications is likely as old as surgery itself; see, for example, efforts by Codman [90] in the early twentieth century to systematically record and measure surgical outcomes. However, the political and policy drive to improve the safety and quality of surgical care via a range of evidence-based interventions flourished in the past two decades, as it did for all of medicine. Sparked by the influential Institute of Medicine report ‘To Err is Human’ [91], initial efforts to improve safety concentrated on establishing the epidemiology of errors, lapses, and patient safety incidents, as well as understanding their nature. We now know that, on average, 1 in 10 patients admitted to hospital will suffer at least one adverse event as a result of their care [92]. Although the majority of adverse events are minor, some lead to serious injury or death [93]. On average, approximately 60% of adverse events occur within surgical care [94]. The importance of teamwork in healthcare is firmly established, with recognition that many high-profile failures were due in large part to substandard teamwork, including in the highly complex operating room environment [95, 96]. In recent years, the focus has shifted from understanding to intervening and preventing, and this is when aviation-style checklists were first implemented in surgery.
Early Support for Implementation of Surgical Checklists
The current widespread prevalence and ongoing discussion of surgical checklists is due in large part to a large-scale international study, which evaluated the clinical efficacy of a 19-item checklist developed to address the Second Global Patient Safety Challenge, Safe Surgery Saves Lives, as part of a World Health Organization initiative [97]. The WHO Checklist consists of three parts: the first applied before the patient is anaesthetized (‘Sign-In’), the second immediately prior to surgical incision (‘Time-Out’), and the final one immediately prior to procedure completion (‘Sign-Out’). The subsequent evaluation of this checklist across eight countries worldwide, including both developed and developing economies, provided startling findings: across study hospitals, the WHO Checklist reduced mortality by almost 50%, while the overall complication rate decreased by over a third [98]. The WHO Checklist became an instant success story; within weeks of publication of the study results in the New England Journal of Medicine, the National Patient Safety Agency (NPSA) in England mandated use of a slightly modified version of this checklist across all surgical procedures [99]. Subsequent patient safety campaigns in England (e.g., the Patient Safety First campaign [100]) and internationally included this checklist almost by default, as a flagship intervention for improvement of surgical care. Widespread dissemination of surgical checklists was indeed intended: a checklist implementation manual was produced by the developer team [97], followed by video-based examples produced by the NPSA in England showing how to conduct (and how not to conduct) the checklist in the OR [101].
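To make the three-part structure concrete, the sketch below represents the checklist phases as a simple data structure with a completion log. The abbreviated item wording is an illustrative paraphrase, not the full WHO instrument, and the logging function is a hypothetical example of how compliance data might be captured.

```python
from datetime import datetime

# Abbreviated, illustrative items only; not the full WHO Checklist.
CHECKLIST = {
    "Sign-In":  ["patient identity confirmed", "site and procedure confirmed",
                 "anaesthesia safety check completed"],
    "Time-Out": ["team members introduced", "antibiotic prophylaxis reviewed",
                 "anticipated critical events discussed"],
    "Sign-Out": ["instrument and swab counts correct",
                 "specimens labelled", "recovery concerns reviewed"],
}

def record_phase(phase: str, completed_items: set[str]) -> dict:
    """Log which items of a checklist phase were completed, with a timestamp."""
    items = CHECKLIST[phase]
    return {
        "phase": phase,
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "completed": [i for i in items if i in completed_items],
        "missed": [i for i in items if i not in completed_items],
    }

log = record_phase("Time-Out", {"team members introduced",
                                "antibiotic prophylaxis reviewed"})
print(log["missed"])  # items skipped during this Time-Out
```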
Fading Evidence for Implementation of Surgical Checklists
A flurry of studies followed, including randomized trials [102], using this and other checklists in surgical pathways. But the findings were not as unequivocal; reductions in mortality, in particular, were not found [103]. Explanatory hypotheses proposing that checklists achieve their clinical efficacy via improved team and safety culture remain controversial, with some studies supporting these hypotheses [104] but others finding no evidence for such links [105]. However, the biggest ‘upset’ in the checklist evidence base to date is also the largest implementation evaluation, conducted across the province of Ontario, Canada. This remains the largest regional implementation of the WHO Checklist, a study of routine surgical care in over 215,000 patients, in which no reduction in mortality or morbidity indicators was found [106]. Surprise was expressed at these results, which were speculatively attributed to likely nonuse of the checklist in practice [107], a plausible explanation but one that does not address the barriers to changing culture and behaviors [108].
Incomplete Plan for Implementation of Surgical Checklists
What is the catch here? The answer lies, at least partly, in incomplete and ineffective methods for implementing checklists. As in many areas of medicine, efficacy evidence normally stems from research-funded studies, in which interventions under scientific scrutiny are given every chance of being efficacious: their implementation is careful, well thought out, and carried out by motivated staff with time dedicated to deploying them. Yet routine clinical practice typically does not replicate the resource-rich, highly motivated, expert research setting of a trial. Further, the initial success story of the WHO Checklist may have created a sense of simplicity and a hope that implementation of an evidently simple intervention such as a checklist is vastly cost-effective, because the costs are practically zero. Unfortunately for patients, this view is rather naïve, as it fails to take into account the vagaries of implementing what is, in many ways, a behavior change intervention within a highly complex sociotechnical environment (the OR), rife with professional identities, team dynamics, and often competing organizational pressures (for safety and for productivity) [109]. The signs of an overall naïve approach were there from the start. An early analysis of how the WHO Checklist had been implemented in England revealed significant variations between teams and ORs [110]. Use of the checklist in this study diminished when the research team withdrew from the clinical areas; further underutilization of the intervention was attributed to cultural, organizational, and practical barriers. Leadership was recognized as a key strategy for improved implementation, both at the organizational level and at the operational level, through checklist ‘champions.’ Although qualitative implementation analyses such as this one are hard to repeat longitudinally for direct comparison, more recent studies using standardized observational assessments in the OR while the checklist is being carried out have confirmed the same pattern [111, 112].
The problem may in fact have wider implications. Naïve portrayals of checklists in surgery present them as the ‘silver bullet’ that can cost-effectively improve the way a team communicates and shares information, and thus improve basic care processes (including timely administration of antibiotics, appropriate deep vein thrombosis prophylaxis, robust patient identity checks, and similar) and ultimately patient outcomes. This may indeed happen in some cases, but it likely will not happen when safety lapses and quality gaps are underpinned by deeper team and organizational problems [113, 114]. The narrative for both the effectiveness and the implementation of checklists in complex clinical environments has thus been oversimplified in a manner that is not conducive to enhancing our understanding of exactly how such interventions actually work when they do, and why they fail to bring about improvement when they do not [115]. The comparison of surgery with commercial aviation, to which some of the fascination with checklists in healthcare can be traced, has often been accordingly simplistic: aviation did not become safer just because pilots and crews started relying more on checklists in the past few decades. Other factors contributed to safety in a synchronized manner; these include technological improvements, improved skills training, structured error and incident reporting, and safety data sharing at the international level. In other words, safety in aviation progressed at a systemic, industry-wide level [115, 116]. Checklists can certainly enhance safety, but likely not as a single, isolated safety intervention [117]. With simplistic views of checklists rather prevalent, it is perhaps not surprising that detailed implementation analyses of checklists remain scarce. In the largest and most detailed such analysis to date that we are aware of, covering the national implementation of the WHO Checklist across English hospitals, a host of factors was identified [118]. These cover the full range of implementation strategies mentioned in earlier sections of this chapter and reveal interactions between them as well as significant contextual influences.
Summary, Challenges, and Future Directions for Implementation Science Research
Implementation science is playing a crucial role in reducing the research-to-policy and research-to-practice gaps, with the ultimate intention of advancing health outcomes. However, significant challenges arise in conducting implementation science research. Consistent with issues facing implementation science globally, the US National Institutes of Health Fogarty International Center (FIC) [119] has outlined challenges facing the field of implementation science research: (1) implementation science is a new, developing field; (2) effective implementation requires a multidisciplinary, collaborative approach; and (3) implementation strategies require rethinking scientific rigor and recognizing the importance of mixed methodologies. These three challenges are described below, followed by a discussion of future directions and global initiatives for the field of implementation science.
- 1.
New, developing field. The FIC recognizes the potential for implementation research to improve program quality and performance through the use of scientific methods. However, implementation science as a field is relatively new and still in development. There are many efforts to improve implementation of EBPs that utilize a variety of frameworks, a number of constructs hypothesized to affect successful implementation, and many measures of these constructs. With so many efforts to improve implementation of EBPs, there is little consensus on optimal scientific methodology for implementation science research [120–122]. In fact, there is debate regarding the “best” strategies for successful implementation of EBPs [36]. Recent implementation science research has begun addressing this debate. For example, Brown and colleagues [123] directly compared two strategies for implementing one EBP across two states. This study also benefited from use of the Stages of Implementation Completion (SIC) measure, which enabled measurement of the implementation process across multiple stages, multiple milestones, and multiple levels of participants. By using this measure, the authors could assess progress, or lack thereof, in EBP implementation. The authors also outline plans for future work to address this debate. Through a recently funded R01, Saldana will adapt the SIC to evaluate common/universal implementation activities that are utilized across EBP implementation strategies, and to examine whether these activities are equally important in achieving implementation success and whether stages of implementation are stable across EBPs despite differences in the activities defining SIC stages [124]. As Brown et al. [123] have illustrated, continued coordination and communication of efforts for broader dissemination of results, best practices, and lessons learned are suggested for future implementation science research.
- 2.
Interdisciplinary, multidisciplinary, and collaborative approach. The FIC highlights the importance of inter- and multidisciplinary collaboration for effective implementation. A number of approaches have been utilized, including community-based participatory research [125], community-partnered participatory research [126], and collaborative approaches such as the Institute for Healthcare Improvement (IHI) Breakthrough Series [127], though there are few established communication channels and forums for such collaboration. As discussed in this chapter, alignment across levels within and between organizations is crucial for establishing an organizational climate in support of EBP implementation. There is often a gap between the expectations of researchers who generate and report implementation science results and practitioners who implement them. Congruence between leaders at the organization level (e.g., CEOs, presidents, administrators), frontline providers in the trenches of delivering services, and implementation science researchers will facilitate successful implementation of EBPs.
- 3.
Rethinking scientific rigor. The FIC and the US Office of Behavioral and Social Sciences Research stress the importance of using mixed methodology (qualitative and quantitative methods) [128], and of drawing on methods from fields such as economics and business, to guide implementation strategies and evaluate the implementation of health interventions [129]. Scientific rigor has traditionally referred to random assignment in highly controlled settings. In real-world settings, random assignment is not always possible, and highly controlled laboratory settings do not provide the context targeted by implementation science research; alternative approaches are therefore needed that still balance and maximize scientific rigor. Mixed methodology provides an avenue for conducting rigorous implementation science research within the context of an RCT or other design. Other quasi-experimental and analytic approaches one may consider in the conduct of implementation science research include regression discontinuity designs, interrupted time series designs, multiple imputation techniques, and propensity score analyses; a minimal interrupted time series sketch follows below. Type I, Type II, and Type III hybrid implementation science research designs, which combine implementation and effectiveness questions and outcomes in the same study, are increasingly being used while maintaining scientific rigor [41].
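As one concrete example of the quasi-experimental approaches listed above, a segmented regression (interrupted time series) analysis estimates the change in level and trend of an outcome after an implementation effort begins. The sketch below fits such a model to simulated monthly data using ordinary least squares; the data, effect sizes, and variable names are purely illustrative assumptions.

```python
import numpy as np

# Simulated monthly outcome (e.g., % of eligible patients receiving an EBP)
# for 24 months, with the implementation effort starting at month 12.
rng = np.random.default_rng(0)
months = np.arange(24)
post = (months >= 12).astype(float)            # post-implementation indicator
time_since = np.where(post == 1, months - 12, 0)
true_rate = 30 + 0.2 * months + 8 * post + 0.9 * time_since
observed = true_rate + rng.normal(0, 2, size=24)

# Segmented regression: baseline level, baseline trend,
# level change at implementation, and trend change after implementation.
X = np.column_stack([np.ones(24), months, post, time_since])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)
intercept, trend, level_change, trend_change = coef
print(f"Immediate level change: {level_change:.1f} percentage points")
print(f"Change in monthly trend: {trend_change:.2f} points/month")
```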
Future Directions and Global Initiatives for the Field of Implementation Science
There are several future directions and global initiatives for the field of implementation science to consider, including (1) identifying and classifying implementation strategies, (2) mapping the similarities and differences between implementation science and quality improvement research, (3) creating a platform for implementation research via the Global Implementation Initiative, and (4) providing training for dissemination and implementation research. These are discussed in the following paragraphs.
- 1.
Implementation strategies. Recent work on identifying and classifying implementation strategies has helped both researchers and practitioners to consider multiple approaches to support EBP implementation [130, 131]. Beyond reviewing implementation strategies, Powell and colleagues have developed methods, and made recommendations, for identifying, selecting, and tailoring implementation strategies for use in various health and allied health settings [132, 133]. This approach, combined with the use of an appropriate implementation framework, can provide guidance as the implementation process progresses through the four implementation phases [37].