3


Implementation Science


Marita Titler and Clayton Shuman


OVERVIEW


Since the late 1990s, implementation science has gained widespread acceptance as a field of research. Over time, increased attention has been directed toward developing an evidence base that informs health care delivery and population health. The Clinical and Translational Science Awards (CTSA) Program of the National Institutes of Health (NIH), by definition, focuses on clinical and translational research, including translation of clinical trial results and other research findings into practices and communities (Institute of Medicine [IOM], 2013a). The CTSAs are expected to partner with communities, practices, and clinicians not only in setting strategic directions for research but also in translating findings from research into health and health care.


Findings from clinical trials and effectiveness studies provide evidence that can be summarized and packaged for use in policy and clinical decision making. Examples of resources developed over the past two decades and made available to policy makers, health care organizations, and clinicians include numerous evidence-based clinical practice guidelines and practice recommendations, systematic reviews, and evidence-summary reports.


Despite the availability of evidence-based recommendations for health policy and practice, the 2014 National Healthcare Quality and Disparities Report demonstrated that evidence-based care is delivered only 70% of the time, an improvement of just 4% since 2005 (Agency for Healthcare Research and Quality [AHRQ], 2015). This statistic demonstrates the gap between the availability of evidence-based recommendations and their application to improve population health. The gap can lead to poor health outcomes such as obesity, poor nutrition, health care–acquired infections, injurious falls, and pressure ulcers (Centers for Disease Control and Prevention, 2016; Conway, Pogorzelska, Larson, & Stone, 2012; Shever, Titler, Mackin, & Kueny, 2010; Sving, Gunningberg, Högman, & Mamhidir, 2012; Titler, 2011).


The terms evidence-based practice (EBP) and implementation science are sometimes used interchangeably, but they are different. EBP is the conscientious and judicious use of current best evidence in conjunction with clinical expertise and patient values to guide health care decisions (Titler, 2014a). Best evidence includes findings from randomized controlled trials; evidence from other types of science, such as descriptive and qualitative research; and information from case reports and scientific principles. In contrast, implementation science is a field of research that focuses on testing implementation interventions to improve uptake and use of evidence to improve patient outcomes and population health, and to explicate what implementation strategies work for whom, in what settings, and why (Eccles & Mittman, 2006; Titler, 2010, 2014b). An emerging body of knowledge in implementation science provides an empirical base for guiding the selection of implementation strategies to promote adoption of evidence in real-world settings (Mark, Titler, & Latimer, 2014). Thus, EBP and implementation science, though related, are not interchangeable terms; EBP is the actual application of evidence in practice and policy (the “doing of” EBP), whereas implementation science is the study of implementation interventions, factors, and contextual variables that affect knowledge uptake and use in practices, communities, and policy decision making (Titler, 2014b).


As noted in the National Institute of Nursing Research’s (NINR’s) 2016 strategic plan, the knowledge advanced from implementation science coupled with health care environments that promote the use of evidence-based practices and policies will help close the evidence–practice gap (NINR, 2016). Advancements in implementation science can expedite and sustain the successful integration of evidence in practice and policy to improve care delivery, population health, and health outcomes (Henly et al., 2015).


The purpose of this chapter is to discuss implementation science and health policy. First, examples illustrating how scientific findings have informed health policy are briefly described. Next, an overview of barriers to evidence-informed health policy is provided, and strategies to promote uptake and use of evidence by policy makers are described. Challenges in evidence-informed policy development and implementation are then identified, and the chapter concludes with reflections on the future of implementation science and health policy.


EXAMPLES OF EVIDENCE-INFORMED HEALTH POLICIES


Addressing many health problems in the United States requires research-based knowledge in concert with policies from government and regulatory agencies (Nilsen, Stahl, Roback, & Cairney, 2013). Many public health achievements, such as seat-belt laws, car seats for infants and children, and workplace regulations to decrease environmental exposures, were influenced by public policies (Nilsen et al., 2013). Health policies can be conceptualized as “Big P” policies in the form of laws, rules, and regulations, or “small p” health policies such as management decisions that affect use of research by clinicians or people served within a local setting (Nilsen et al., 2013). The rules and regulations of the Joint Commission for Accreditation of Healthcare Organizations (JCAHO) and the Centers for Medicare and Medicaid Services (CMS) are examples of “Big P” health policies. Examples of “small p” health policies are those adopted by local communities and school systems to reduce childhood obesity.


Centers for Medicare and Medicaid Services


The CMS’s value-based programs (VBPs) reward health care systems with incentive payments for the quality of care they provide to people with Medicare coverage. These programs are part of the larger CMS quality strategy to reform how health care is delivered and paid for, and they support the three-part aim of better care for individuals, better health for populations, and lower cost. The strategies of the VBPs include using incentives to improve care, tying payment to value through new payment models, changing how care is delivered through better coordination across health care settings, and greater attention to population health.

There are four original VBPs: the Hospital Value-Based Purchasing (HVBP) Program; the Hospital Readmission Reduction (HRR) Program; the Value Modifier (VM) Program (also called the Physician Value-Based Modifier [PVBM]); and the Hospital-Acquired Conditions (HAC) Program. The goal of these programs is to link health care performance to payment. Under the HVBP, hospitals are paid for inpatient acute care services based on the quality of care, not just the quantity of services they provide. Congress authorized inpatient hospital VBP in Section 3001(a) of the Affordable Care Act. The program uses the hospital quality data reporting infrastructure developed for the Hospital Inpatient Quality Reporting (IQR) Program, which was authorized by Section 501(b) of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003.

The measures used in these programs are based on evidence. For example, the inpatient quality measures for heart failure include discharge instructions addressing activity, diet, follow-up, weight monitoring, and how to monitor and address signs and symptoms of worsening heart failure; these are important components of chronic illness management following hospital discharge. A second example, catheter-associated urinary tract infection (CAUTI), is a quality measure because research demonstrates that proper insertion and early removal of urinary catheters can reduce CAUTIs. Similarly, the measure of unplanned hospital readmission for heart failure is based on research demonstrating that effective coordination of care can lower the risk of readmission; care coordination, home-based interventions, and exercise-based rehabilitation therapy among patients with heart failure all contribute to reducing the risk of hospitalization (www.cms.gov). These federal health policies are informed by evidence and affect health care delivery through payment incentives.
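To make the idea of linking performance to payment concrete, the following is a minimal, hypothetical sketch in Python. The quality domains, weights, scoring scale, and adjustment range are invented for illustration; this is not the actual CMS HVBP methodology, which uses its own domains, weighting, and exchange function.

```python
# Illustrative sketch of tying payment to a weighted quality score.
# The domains, weights, and adjustment range are hypothetical; they are
# NOT the actual CMS HVBP methodology.

def payment_adjustment(scores: dict[str, float],
                       weights: dict[str, float],
                       max_adjustment: float = 0.02) -> float:
    """Return a payment multiplier based on a weighted quality score.

    scores  -- quality-domain scores on a 0-1 scale (1 = best performance)
    weights -- relative weight of each domain (assumed to sum to 1)
    max_adjustment -- largest upward/downward adjustment (here, 2%)
    """
    total = sum(scores[d] * weights[d] for d in weights)
    # Map total performance (0..1) onto an adjustment of -max..+max,
    # so payment rewards quality rather than volume alone.
    return 1.0 + max_adjustment * (2 * total - 1)

# Hypothetical hospital: strong on CAUTI prevention, weaker on readmissions.
scores = {"clinical_outcomes": 0.85, "safety": 0.90, "readmissions": 0.60}
weights = {"clinical_outcomes": 0.4, "safety": 0.3, "readmissions": 0.3}

multiplier = payment_adjustment(scores, weights)
print(f"Base payment multiplier: {multiplier:.4f}")  # prints 1.0116
```

Even this toy calculation shows the policy logic: a hospital's payment rises or falls with its measured performance on evidence-based quality measures, rather than with service volume alone.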


The Joint Commission for Accreditation of Healthcare Organizations


The JCAHO (now known simply as The Joint Commission) is a regulatory agency that sets standards of care for hospitals and other types of health care agencies. Standards are generally informed by a research base. For example, a recent Sentinel Event Alert about falls in hospitals notes that every year in the United States, hundreds of thousands of patients fall in hospitals, with 30% to 50% of those falls resulting in injury (Bagian et al., 2015). Injured patients require additional treatment and sometimes prolonged hospital stays; the average cost of a fall with injury is about $14,000. Successful strategies to reduce falls include the use of a standardized assessment tool to identify fall and injury risk factors, assessment of an individual patient’s risks that may not have been captured by the tool, and interventions tailored to the individual patient’s identified risks. One of the JCAHO standards for reducing falls appears in the Provision of Care, Treatment, and Services (PC) section, standard PC.01.02.08: The hospital assesses and manages the patient’s risk for falls. Its elements of performance specify that the hospital assesses the patient’s risk for falls based on the patient population and setting (EP 1) and implements interventions to reduce falls based on the patient’s assessed risk (EP 2).


Health Policies in Childhood Obesity


Examples of “small p” evidence-informed health policies are those adopted by local communities and school systems to reduce childhood obesity. The IOM has published several reports on prevention of childhood obesity that address increased physical activity; improving environments to include more parks, recreational spaces, and safe playgrounds; and the types and amounts of foods and beverages provided in schools (IOM, 2009, 2011, 2013b, 2015). Schools are uniquely positioned to support physical activity and healthy eating and can serve as a focal point for obesity prevention among children and adolescents (IOM, 2012). Children spend up to half of their waking hours in school, so schools provide the best opportunity for a population-based approach to increasing physical activity among the nation’s youth. Similarly, children and adolescents consume one third to one half of their daily calories at school, giving schools an opportunity to influence their diets.

School food–related policies have been shown to affect not only what students consume at school, but also what they and their parents perceive to be healthy choices (IOM, 2007). School-based interventions focused on increasing physical activity among children and adolescents have a positive impact on duration of physical activity, fitness, television viewing, and lifestyle patterns of regular physical activity that carry over into the adult years (IOM, 2012). School districts have enacted a variety of local policies to increase physical activity and create a healthy eating environment, such as requirements for physical education, increasing school-based physical activity outside of physical education, ensuring that school meals comply with the Dietary Guidelines for Americans (e.g., more fruits and vegetables; smaller portions; availability of low-fat and fat-free dairy products), and addressing the type and availability of foods and beverages sold outside the school meal programs (e.g., vending machines; snack bars; IOM, 2012).


In summary, these three examples illustrate the use of evidence in health policy formulation and decision making. Despite these examples, there are a number of barriers to use of research evidence in health policy.


BARRIERS TO EVIDENCE-INFORMED HEALTH POLICY


Studies have demonstrated multiple barriers to use of research evidence in health policy (Brownson, Chriqui, & Stamatakis, 2009; Hardwick, Anderson, & Cooper, 2015; Jacobs et al., 2014; Lavis, Moynihan, Oxman, & Paulsen, 2008; Naude et al., 2015; Tomm-Bonde et al., 2013; Tricco et al., 2016; Zardo & Collie, 2014). Common barriers include (a) beliefs and attitudes; (b) knowledge and skills; (c) relevance; and (d) organizational context.


Beliefs and Attitudes


Barriers related to beliefs and attitudes include the perceived limited quantity, quality, and timeliness of research on topics of importance to policy makers (e.g., economic impact; use of emerging technologies); research not being valued by the community; factors other than evidence carrying more weight in policy decisions (e.g., cost and equity); and perceived interference with the autonomy of policy decision makers (Lavis et al., 2005; Tomm-Bonde et al., 2013; Tricco et al., 2016). Policy makers may disagree with the interpretation of research and systematic reviews and believe that the results are not valid. There may be a mismatch between the type of content in systematic reviews or other evidence sources and the information needs of policy makers. Communities may not value research findings for a variety of reasons, including lack of trust in researchers, not being active participants in formulating and conducting the research, and demographic characteristics that differ in important ways from the populations in the original research. Policy makers want evidence that comes from a known, trusted, credible source and that is timely and available at key points in the decision-making process. Tricco et al. (2016) found that policy makers who needed evidence in the prior 12 months often commissioned research or research reviews during this period, and then used the evidence to formulate the content of health policy.


Knowledge and Skills


Knowledge and skills present common barriers to evidence-informed health policies. These barriers include (a) insufficient time, knowledge, and skills to search for evidence, critique research and systematic reviews, and synthesize the evidence for health policy; and (b) limited understanding of how to interpret and use evidence to develop and implement health policies. Furthermore, information overload makes it difficult to stay abreast of the research and may contribute to lack of awareness of research or systematic reviews on a particular topic. For people without research experience, evaluating and interpreting original research and systematic reviews is difficult because the explanations of research methods and statistical analyses are long and complicated, with little attention given to the policy implications of study findings. Policy makers report a lack of time to find and discuss the evidence, as answers to policy issues are complex and often require attention within a short time frame (Hardwick et al., 2015; Lavis et al., 2005, 2008; Tricco et al., 2016). To improve the use of evidence in health policy, capacity building for evidence-based decision making (EBDM) in health departments and public health agencies is needed (Brownson et al., 2014; Jacobs et al., 2014; Zardo & Collie, 2014).


Relevance


Perceived relevance includes (a) the perceived usefulness and applicability of evidence sources, and (b) the relevance of research to the role and day-to-day decision making of policy makers (Zardo & Collie, 2014). Policy makers often find systematic reviews complex and not user-friendly. For example, it is difficult to identify the key messages in systematic reviews, and there is often a lack of detail on how to implement the recommendations (Lavis et al., 2011; Tricco et al., 2016). Public policy makers want a shorter and clearer presentation of the evidence, with attention to benefits, potential harms or risks, and costs, and a one-stop shop that provides high-quality reviews (Tricco et al., 2016). It is also a challenge for policy makers to interpret the applicability of findings to the local context (Lavis et al., 2005; Tricco et al., 2016).


Those who perceive research as relevant to their role are much more likely to use it than those who perceive research as having little or no relevance (Zardo & Collie, 2014). Relevance is complex and is influenced by utility: are the research findings action-oriented, and do they challenge the status quo? Prior positive experience with research use may also shape perceptions of relevance; those who have used research in the past are likely to use it in future policy decision making (Zardo & Collie, 2014).


Research relevance also includes conceptual use of research to inform thinking, discussions, and debate, thereby influencing views on policy issues. This initial conceptual use of research by decision makers can lead to future action-oriented policy development or revision when a window of opportunity is created; that is, when the streams of policy, problems, and politics align for new policy development or change (Zardo & Collie, 2014).


Context


Context is the setting or environment in which EBDM occurs and in which policies and practices are delivered. Multiple contextual factors have been shown to influence knowledge uptake and use in practice, health care decision making, and policy formulation and implementation (Hardwick et al., 2015; Titler, 2010). A policy climate that is not receptive to research use is unlikely to result in evidence-informed health policies.


Absorptive capacity for new knowledge affects research use. Absorptive capacity refers to the knowledge and skills needed to enact evidence-informed policies; the strength of the evidence alone will not promote research use. An organization that can systematically identify, capture, interpret, share, reframe, and recodify new knowledge, and put it to appropriate use, will be better able to assimilate research findings. A learning organizational culture and proactive leadership that promotes knowledge sharing are important components of building absorptive capacity for new knowledge (Hardwick et al., 2015; Titler, 2010).


Components of a receptive context for use of research findings include strong leadership, clear strategic vision, good managerial relations, visionary staff in key positions, a climate conducive to experimentation and risk taking, and effective data capture systems. Leadership is critical in encouraging organizational members to break out of the convergent thinking and routines that are the norm in large, well-established organizations (Titler, 2010).


An organization may be generally amenable to innovations but not ready or willing to assimilate the evidence on a specific topic. Elements of system readiness include tension for change; assessment of implications; support and advocacy for research use; dedicated time and resources; and capacity to evaluate the impact of the evidence-informed policy during and following implementation. If there is tension around specific policy issues and staff perceive the situation as intolerable, the evidence is likely to be assimilated if it can successfully address the issues and thereby reduce the tension (Hardwick et al., 2015; Titler, 2010).


STRATEGIES TO PROMOTE RESEARCH USE


Strategies to promote use of research findings and other types of evidence (e.g., local community data) for evidence-informed health policies include (a) capacity building; (b) provision of research findings for use by policy makers; (c) relationship building; and (d) models for knowledge translation in public policy. The research evidence for evidence-informed health policy implementation is not as robust as that available for research use in health care settings (Armstrong et al., 2013).


Capacity Building


Building capacity is one strategy to improve the development and implementation of evidence-informed health policies (Brownson et al., 2014; Jacob et al., 2014; Jacobs et al., 2014; Litaker, Ruhe, Weyer, & Stange, 2008; Wilson, Rourke, Lavis, Bacon, & Travers, 2011). Competencies for EBDM have been developed and guide the organization of capacity-building programs (Brownson, Ballew, Kittur, et al., 2009; Jacob et al., 2014). Content common across these programs includes understanding the basic concepts of EBDM; defining the problem or public health issue; locating and critiquing evidence sources; summarizing the scientific literature; prioritizing policy options; conducting economic evaluation; developing an action plan and building a logic model; and implementing the policy and evaluating its impact. The evidence-based public health training course developed by the Prevention Research Center in St. Louis has demonstrated improvements in participants’ EBDM competencies following course completion (Baker, Brownson, Dreisinger, McIntosh, & Karamehic-Muratovic, 2009; Brownson, Chriqui, et al., 2009; Dreisinger et al., 2008). A quasi-experimental study using a train-the-trainer approach demonstrated the effectiveness of the program, with EBDM competencies improving more in the intervention group than in the control group (Jacobs et al., 2014). This is an excellent example of a capacity-building program designed for policy makers.


Provision of Research for Policy Makers


Provision of research findings to health policy makers is less than optimal. Hardwick et al. (2015) found that the way research is organized and presented does not lend itself to use by policy-making organizations and the way they work across communities, sectors, and settings. Policy makers have mixed views about the helpfulness of recommendations from systematic reviews. Lavis et al. (2005) found that policy makers would benefit from systematic reviews that (a) highlight information relevant for decision making, such as contextual factors that affect local applicability and information about the benefits, harms/risks, and costs of interventions; and (b) are presented in a way that allows rapid scanning for relevance, such as one page of take-home messages and a three-page executive summary. Researchers could help ensure that the future flow of systematic reviews better informs policy making by involving policy makers in their production and by better highlighting information that is relevant for decisions (Lavis et al., 2005). To ensure that the global stock of systematic reviews better informs policy making, producers of systematic reviews should address local adaptation processes, such as developing and making available online more user-friendly “front ends” for systematic reviews.


Investigators should include a section on implications for policy makers in research manuscripts and develop policy briefs that include impact statements on health and health outcomes. Examples of excellent policy briefs/snapshots are the Health Affairs Health Policy Briefs and those available from the Robert Wood Johnson Foundation (RWJF). Health Affairs Health Policy Briefs provide clear, accessible overviews of timely and important health policy topics for policy makers, journalists, and others concerned about improving health care in the United States. The briefs explore competing arguments made on various sides of a policy proposal and point out, wherever possible, the relevant research behind each perspective. They are reviewed by distinguished Health Affairs authors and other outside experts (www.healthaffairs.org/healthpolicybriefs). One example is the brief “The Final 2015-20 Dietary Guidelines for Americans,” which is based on the latest research.


The Health Policy Snapshots from the RWJF provide top takeaway messages, key facts on the public health issue, and recommendations for action (www.rwjf.org). The health policy snapshot “Schools Can Help Children Eat Healthy and Be Active,” for instance, is an excellent model of a health policy brief for local policy makers. Every school district and local board of education in the United States has access to this research-based information for development and implementation of policies that address healthy eating and physical activity in schools.


Another approach to making research evidence available to policy makers is the development of “one-stop shops” of research evidence for public health and health systems, several of which are now available (Dobbins et al., 2010; Lavis et al., 2015; Moat & Lavis, 2014; www.healthsystemsevidence.org; www.health-evidence.ca; global.evipnet.org). For example, EVIPNet promotes the systematic use of research evidence in policy making, focusing on low- and middle-income countries, and promotes partnerships at the country level between policy makers, researchers, and civil society to facilitate both policy development and policy implementation using the best scientific evidence available (Moat & Lavis, 2014). The Centers for Disease Control and Prevention offers many resources for policy makers, including a guide to community preventive services that helps policy makers choose programs and policies to improve health and prevent disease in communities (www.communityguide.org). Systematic reviews are used to answer these questions:


  Which program and policy interventions have been proven effective?


  Are there effective interventions that are right for a specific community?


  What might effective interventions cost; what is the likely return on investment?


Rapid reviews have emerged in response to the mismatch between the information needs of policy makers and the time required to complete systematic reviews. The rapid realist review (RRR) is a knowledge synthesis method that produces a product useful to policy makers responding to time-sensitive and/or emerging issues where time and resources are limited. The focus is on contextually relevant interventions or programs that are likely to be associated with specific outcomes within a specific set of parameters. RRRs include policy makers in developing the review questions and other key decisions, and engage content experts to validate the findings of the review. These reviews have taken 2 to 6 months to perform, in contrast to systematic reviews, which may take 1 to 2 years. Examples include reviews of telehealth contributions to emergency department and discharge operations, and of evidence-informed public health policy and practice through a complexity lens (Saul, Willis, Bitz, & Best, 2013).


A second strategy to assist policy makers in using evidence is the rapid-response program (Mijumbi, Oxman, Panisset, & Sewankambo, 2014; Wilson, Lavis, & Gauvin, 2015). Products provided through such programs may include a listing of relevant research evidence, a brief synthesis of the evidence, and briefings with decision makers. Examples include improving leadership capacity in community care; active living for older adults; and optimal treatment for people with multiple comorbidities (www.mcmasterhealthforum.org).


Relationship Building


Building strong relationships between researchers and policy makers is a strategy to promote research use (Hardwick et al., 2015; Kislov, Harvey, & Walshe, 2011; Lavis et al., 2008; Shearer, Dion, & Lavis, 2014). Building and sustaining these relationships is not without challenges, such as managing tensions and conflicts of interest that can arise from differences in cultures, perceptions of research evidence, and the nature of decision making (Kislov et al., 2011; Lavis et al., 2008).


Use of knowledge brokers, or boundary spanners, has been shown to be a helpful strategy in public health for facilitating bidirectional communication between researchers and users of evidence, promoting mutual understanding of each other’s language, goals, and culture, and addressing barriers to the use of scientific knowledge in health policy (Dobbins et al., 2009). Knowledge brokers can be individuals, groups, and/or organizations; in each case the knowledge broker focuses on promoting the integration of the best available evidence into policy and health care decision making. Knowledge brokers play several roles, including knowledge management (offering users valid information tailored to their settings and needs); liaison (facilitating direct contact and collaboration between producers and users of scientific knowledge); and assessing user expectations and adjusting activities to better fit users’ work flow (Dagenais, Laurendeau, & Briand-Lamarche, 2015; Dobbins et al., 2009). The interpersonal dimension of knowledge brokering (interaction with users) is a key factor in promoting knowledge use. Direct and frequent contact (including face to face) between the broker and the intended users helps build “relationship capital,” trust, and understanding. Settings with a weaker research culture (e.g., no academic connections; sporadic contact with researchers) appear to need more intensive broker interaction to reduce the perceived “semantic distance” between scientific knowledge and its use in policy development (Dagenais et al., 2015; Dobbins et al., 2010).


Last, understanding the social networks of policy decision makers and using those networks to spread evidence is another relationship strategy to consider (Shearer et al., 2014; Yousefi-Nooraie, Dobbins, Marin, Hanneman, & Lohfeld, 2015). Social network theory concerns the interpersonal exchange of information among members of a group. For example, engaging with policy makers who are centrally located in a social network (centrality) holds great promise for influencing others more peripheral in the network. Centrality in a social network should not be confused with holding an organizational leadership position (organizational structure), as the spheres of influence may differ. Shearer et al. (2014) demonstrated that network position, that is, connectedness to others, predicted policy makers’ use of knowledge more than any other individual characteristic, such as job position/level, organizational affiliation, or experience as a researcher. Strategies for promoting use of evidence with policy makers should therefore maximize deliberative dialogues with policy makers who are in strategic network positions to influence others, not just in strategic organizational positions.
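As a concrete illustration of centrality, the following is a small Python sketch using the networkx library. The actors and ties are invented for illustration; an actual analysis would build the network from survey data on who exchanges information or advice with whom.

```python
# Toy illustration of identifying centrally located policy makers in a
# social network. The actors and ties below are invented.
import networkx as nx

G = nx.Graph()
# Edges represent reported information-sharing ties among policy makers.
G.add_edges_from([
    ("director", "analyst_a"), ("director", "analyst_b"),
    ("analyst_a", "analyst_b"), ("analyst_a", "board_member"),
    ("analyst_a", "epidemiologist"), ("board_member", "mayor"),
])

# Degree centrality: the share of other actors each person is directly
# tied to; higher values indicate more central network positions.
centrality = nx.degree_centrality(G)
for actor, score in sorted(centrality.items(), key=lambda x: -x[1]):
    print(f"{actor:15s} {score:.2f}")

# In this toy network, analyst_a (0.80), not the director (0.40), is the
# most connected actor, echoing Shearer et al. (2014): network position,
# not job position, best predicted knowledge use.
```

The point of the sketch is the distinction drawn in the text: the most influential spreader of evidence may be whoever occupies the most connected network position, which is not necessarily the person at the top of the organizational chart.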


Models


Various conceptual models of EBP and translation science are available. Hendriks et al. (2013) note several limitations of these models for use in public policy, including lack of a comprehensive approach; the fact that most are based on research in organizational settings; and the fact that many fail to take into account the factors that make policy development for complex public health problems (e.g., childhood obesity) difficult. The conceptual model for policy integration, the Behavior Change Ball (Figure 3.1), adapted from Michie, van Stralen, and West’s (2011) Behavior Change Wheel, addresses several important components of integrating knowledge into local policy development: 10 organizational behaviors (e.g., agenda setting; leadership); 3 categories of interrelated determinants of behavior (capability, opportunity, and motivation [COM]); 9 strategies/interventions to improve suboptimal determinants of behavior, such as education, persuasion, incentivization, and modeling; and 7 factors that enable these interventions, such as legislation, communication/marketing, and environmental or social planning. The model is meant to illuminate the dynamic policy process. The authors of the model recommend application to case study designs or narrative inquiries to build research on policy development and implementation.
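As a rough aid to seeing the model’s layered structure, the sketch below encodes it as a simple Python data structure. Only the example components named above are listed; the full Behavior Change Ball contains additional behaviors, interventions, and enabling factors that are omitted here.

```python
# Sketch of the Behavior Change Ball's layers, listing only the components
# named in the text; each layer has more items in the full model.
behavior_change_ball = {
    "organizational_behaviors": [        # 10 in the full model
        "agenda setting", "leadership",
    ],
    "determinants_of_behavior": [        # the COM triad
        "capability", "opportunity", "motivation",
    ],
    "interventions": [                   # 9 in the full model
        "education", "persuasion", "incentivization", "modeling",
    ],
    "enabling_factors": [                # 7 in the full model
        "legislation", "communication/marketing",
        "environmental or social planning",
    ],
}

# A planner might, for example, scan which intervention options are on
# the table for a weak determinant such as "motivation":
for intervention in behavior_change_ball["interventions"]:
    print(intervention)
```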



FIGURE 3.1     Behavior Change Ball.


Source: Hendriks et al. (2013).


Armstrong et al. (2013) have developed a logic model for development and implementation of evidence-informed policies in local governments (Figure 3.2). It consists of three core components: tailored organizational support; group training; and targeted messages and evidence summaries. This logic model was developed from implementation science interventions; knowledge management and diffusion of innovation theories; and a qualitative study of evidence-informed decision making by local governments. The model is designed to address the lack of rigorous evidence to guide knowledge transfer in a public health decision-making context. This model holds great promise for designing knowledge translation interventions for local governments (Armstrong et al., 2013).


Scientists and experts in public health recommend the use of simulation models (based on systems science and nonlinear dynamics) to investigate and understand the nature of complex public health problems (Atkinson, Page, Wells, Milat, & Wilson, 2015; IOM, 2015). Simulation modeling allows virtual experimentation with policy scenarios to compare the impact and cost of various programs, interventions, and policies over various time frames. Simulation modeling is informed by reviews of research evidence, existing conceptual models, and expert opinion, and can incorporate the impact of contextual influences on policy making. Models are theoretical representations of the problem and must undergo a validation process that includes assessing how accurately the model can reproduce real-world historical data patterns (Anderson & Titler, 2014; Atkinson, Page, et al., 2015; IOM, 2015). In a systematic review to determine the effectiveness of system dynamics modeling for health policy and the nature of its application, only six papers, comprising eight case studies, were found. No analytic studies were found that examined the effectiveness of this type of modeling. The paucity of relevant papers indicates that, although the volume of descriptive literature advocating the value of system dynamics modeling is considerable, the method has yet to be routinely applied and rigorously evaluated as a way to inform health policy making (Atkinson, Wells, et al., 2015). A recent report by the IOM (2015) discusses the use of agent-based models (a type of simulation model) to assess the effects of tobacco control policies. Although simulation models hold great promise, documenting and evaluating their applications will be vital to supporting uptake by policy makers (Atkinson, Wells, et al., 2015).
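To illustrate what “virtual experimentation with policy scenarios” can look like, the following is a minimal system-dynamics-style sketch in Python. The prevalence, flow rates, and assumed policy effect are all invented for illustration; as the text notes, a real model would be calibrated and validated against historical data before informing policy.

```python
# Minimal system-dynamics-style sketch: comparing policy scenarios for a
# single population "stock". All parameters are invented for illustration;
# a real model would be validated against real-world historical data.

def simulate(policy_effect: float, years: int = 10,
             prevalence: float = 0.20, onset: float = 0.03,
             recovery: float = 0.05) -> list[float]:
    """Project prevalence of a risk behavior (e.g., smoking) over time.

    policy_effect   -- assumed fractional reduction in the onset rate
    onset, recovery -- annual flow rates into and out of the behavior
    """
    trajectory = [prevalence]
    for _ in range(years):
        inflow = onset * (1 - policy_effect) * (1 - prevalence)
        outflow = recovery * prevalence
        prevalence += inflow - outflow  # Euler step, 1-year increments
        trajectory.append(prevalence)
    return trajectory

# Virtual experiment: no policy vs. a policy assumed to cut onset by 40%.
baseline = simulate(policy_effect=0.0)
with_policy = simulate(policy_effect=0.4)
print(f"Year 10 prevalence, no policy:   {baseline[-1]:.3f}")
print(f"Year 10 prevalence, with policy: {with_policy[-1]:.3f}")
```

Even this toy model conveys the appeal described above: once the structure and rates are specified (and validated), alternative policies can be compared over any time frame at essentially no cost, before any real-world commitment is made.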



FIGURE 3.2     Knowledge translation model for local governments. EIDM, evidence-informed decision making; LG, local government; PH, public health.


Source: Armstrong et al. (2013).


CHALLENGES


There are several challenges in implementation science. The first is that this field of inquiry has been referred to by numerous related, although not synonymous, terms, including translational science, effectiveness science, dissemination science, implementation science, and knowledge translation (McKibbon et al., 2010; Newhouse, Bobay, Dykes, Stevens, & Titler, 2013; Table 3.1). Despite the varying terminology, there is international agreement on the overall goal: to address the challenges associated with the use of research findings and EBP recommendations in health policy, care delivery, and decision making. This collective objective has led to the formation of academic journals, such as Implementation Science and Translational Behavioral Medicine, that are specifically interested in advancing the body of science in this field. However, the inconsistent terminology impedes theoretical formulation and scientific progress (Proctor, Powell, & McMillen, 2013; Smits & Denis, 2014). Differences in terminology are further complicated by the lack of mature taxonomies of implementation interventions (Lokker, McKibbon, Colquhoun, & Hempel, 2015; Mazza et al., 2013). Although standards are emerging for reporting implementation interventions in scientific journals, there is not yet agreement on these standards or on the level of detail about the implementation intervention that should be included in published reports (Albrecht, Archibald, Arseneau, & Scott, 2013; Eccles, Foy, Sales, Wensing, & Mittman, 2012; Michie, Fixsen, Grimshaw, & Eccles, 2009; Mohler, Kopke, & Meyer, 2015; Pinnock et al., 2015; Proctor et al., 2013; Riley et al., 2008). Furthermore, fidelity to the implementation intervention is not always reported (Slaughter, Hill, & Snelgrove-Clarke, 2015). These challenges make it difficult to compare and contrast the effectiveness of implementation interventions across studies and their impact on health.


TABLE 3.1     Related Terms Used in Implementation Science




