Summaries of intervention-based research are also valuable in determining how strong an intervention might be by gauging its effectiveness across multiple studies. For example, it is vitally important to understand which health promotion interventions work best and under what conditions. To address this issue, Conn, Hafdahl, and Mehr (2011) conducted a meta-analysis of studies focused on physical activity interventions in adults. They found that behavioral approaches were more effective than cognitive ones in improving adults’ activity outcomes. Face-to-face delivery of the intervention, rather than telephone or mail, and interventions focused on individuals, rather than targeting communities, were more effective in promoting physical activity. Systematic review, meta-analysis, meta-synthesis, and mixed-methods systematic review techniques for evaluating interventions across multiple studies provide much stronger evidence for implementing them in practice than one isolated investigation (see Chapter 19 for conducting these research syntheses).

The term intervention-based research encompasses a broad range of investigations that examine the effects of any intervention or treatment having relevance to nursing. Nursing interventions are defined as “deliberative cognitive, physical, or verbal activities performed with, or on behalf of, individuals and their families [that] are directed toward accomplishing particular therapeutic objectives relative to individuals’ health and well-being” (Grobe, 1996, p. 50). An expansion of this definition includes nursing interventions that are performed with, or on behalf of, communities. Sidani and Braden (1998, p. 8) view interventions as “treatments, therapies, procedures, or actions implemented by health professionals to and with clients, in a particular situation, to move the clients’ condition toward desired health outcomes that are beneficial to the clients.” Nursing interventions are nurse-initiated and based on nursing classifications of interventions unique to clinical problems or issues addressed by nurses (Bulechek, Butcher, & Dochterman, 2008; Forbes, 2009).

Historically, nursing interventions have tended to be viewed as discrete actions, such as “positioning a limb with pillows,” “raising the head of the bed 30 degrees,” and “assessing a patient’s pain.” Interventions can be described more broadly as all of the actions required to address a particular nursing problem or issue, but there is little agreement regarding the conceptualization of interventions and how these discrete actions fit together (McCloskey & Bulechek, 2000). Frameworks have been proposed for moving from isolated, action-oriented interventions to more integrative approaches to care. For example, dance therapy might be used as part of a falls reduction initiative (Krampe et al., 2010). In addition, care bundles, which are combinations of interrelated nursing actions, might be the basis for comprehensive approaches to care (Deacon & Fairhurst, 2008; Quigley et al., 2009). These variations in interventions are discussed in the following section.

• The Nursing Interventions Classification (NIC) contains 542 direct and indirect care interventions, each with an assigned numerical code, categorized in seven domains including a new community domain (McCloskey & Bulechek, 2000; Bulechek et al., 2008). The Center for Nursing Classification & Clinical Effectiveness is housed at the University of Iowa College of Nursing and can be accessed online (www.nursing.uiowa.edu/excellence/nursing_knowledge/clinical_effectiveness/nic.htm/).
• NANDA International (NANDA-I; formerly the North American Nursing Diagnosis Association) compiles nursing diagnoses and classifications (www.nanda.org/Home.aspx/). NANDA-I publishes the quarterly International Journal of Nursing Terminologies and Classifications (IJNTC), which is distributed internationally.

• The Home Health Care Classification System (HHCC) has two taxonomies with 20 Care Components used as a standardized framework to code, index, and classify home health nursing practice (Saba, 2002).

• The Omaha System provides a list of diagnoses and an Intervention Scheme designed to describe and communicate multidisciplinary practice intended to prevent illness, improve or restore health, decrease deterioration, and/or provide comfort before death. It includes 75 targets or objects of action (Martin, 2005). Martin and Bowles (2008) address the link between practice and research that is predicated on the Omaha System.

• The Nursing Intervention Lexicon and Taxonomy (NILT) (Grobe, 1996). Grobe (1996, p. 50) suggested that “theoretically, a validated taxonomy that describes and categorizes nursing interventions can represent the essence of nursing knowledge about care phenomena and their relationship to one another and to the overall concept of care.”

Although taxonomies may contain brief definitions of interventions, they do not provide sufficient detail to allow one to implement an intervention. The actions identified in taxonomies may be too discrete for testing and may not be linked to the resolution of a particular patient problem. The populations and settings for which nursing interventions are intended are not elucidated by nursing intervention taxonomies, making them somewhat ambiguous guides for studies (Sidani & Braden, 1998; Forbes, 2009). It is becoming increasingly clear that the design and testing of a nursing intervention require an extensive program of research rather than a single well-designed study (Forbes, 2009; Sidani & Braden, 1998).
As the discipline of nursing advances its mission to build a strong practice science, it is apparent that a larger portion of nursing studies must focus on designing and testing interventions. Moreover, intervention research is central to the role of nurses in practice, education, and leadership. Vallerand, Musto, and Polomano (2011), in an extensive review of nurses’ roles in pain management, depict intervention-based research as an integral component of advocacy, quality care, and innovation in patient care (see Figure 14-1). Equally important in advancing nursing science are replication studies, which mimic the design and procedures of interventions tested in previous research. Despite the recognized importance of replicating intervention studies, few have been repeated to validate and verify their results.

A theory or model can be used to guide the development of an intervention as well as provide direction in the design of the study and its testing procedures. The theory itself should contain conceptual definitions, propositions linked to hypotheses, and any empirical generalizations available from previous studies (Rothman & Thomas, 1994; Sidani & Braden, 1998). Theoretical constructs serve as frameworks for many nursing intervention studies, for example, risk reduction strategies for cardiovascular disease (Gholizadeh, Davidson, Salamonson, & Worrall-Carter, 2010), improving quality of life for patients with pressure ulcers (Gorecki et al., 2010), and goal-directed therapies for rehabilitative care (Scobbie, Wyke, & Dixon, 2009). Kolanowski, Litaker, Buettner, Moeller, and Costa (2011) derived their activity interventions for nursing home residents with dementia from the Need-Driven Dementia–Compromised Behavior Model for responding to behavioral symptoms. Their randomized double-blind controlled trial involving cognitively impaired residents also linked theory-based underpinnings to the behavioral outcome measures for the study.
Deciding on the best theory to guide intervention research requires: (1) a thorough and thoughtful review and synthesis of the literature (see Chapters 6 and 19); (2) scholarly papers that discuss appropriate theory-based interventions in the area of research interest; (3) interactive discussions among faculty, students, and other nurses about options for an optimal theory to guide interventions; and (4) once a theory has been selected, conversations with the project team to determine its application to the study. An intervention theory must include a careful description of the problem the intervention will address, the intermediate actions that must be implemented to address the problem, moderator variables that might strengthen or weaken the impact of the intervention, mediator variables through which the intervention produces its effect, and the expected outcomes of the intervention. Box 14-2 lists the elements of an intervention theory as they are applied to research processes. Models of theories or frameworks help explain the relationships among concepts, interventions, and outcomes.

When critically appraising intervention research, students need to keep in mind that a major threat to a study’s construct validity arises if the framework or model used to guide the study and its intervention has no clear link to the development and implementation of the intervention and the interpretation of study findings. Evidence of how the framework has guided the study needs to be threaded throughout the research report (see Chapter 7 for understanding the inclusion of frameworks in studies).

An example of an intervention with minimal scientific rationale is evident in the early work of Schmelzer and Wright (1993). These gastroenterology nurses began a series of studies that examined the procedures for administering an enema. At that time, they found no research in the nursing or medical literature that tested the effectiveness of various enema procedures.
Without scientific evidence to justify the various procedures for administering enemas (such as the amount, temperature, and content of the solution [soap suds, normal saline, or water], the speed of administration, the positioning of the patient, or the measurement of expected outcomes and possible complications), they were forced to rely on the tradition of practice and clinical experience. Their first study involved telephone interviews with nurses across the country in an effort to identify patterns in the methods used to administer enemas; however, this study was unsuccessful in helping Schmelzer and Wright (1996) validate any commonly used techniques for establishing enema intervention guidelines. In their next study, the researchers developed their own enema protocol and pilot-tested it on hospitalized patients awaiting liver transplantation. In a subsequent study with a sample of liver transplant patients, they tested for differences in the effects of various enema solutions (Schmelzer, Case, Chappell, & Wright, 2000). Schmelzer (1999-2001) then conducted a study funded by the National Institute of Nursing Research to compare the effects of three enema solutions on the bowel mucosa. Healthy subjects were paid $100 for each of three enemas, after which a small biopsy specimen was collected. These researchers’ experiences illustrate how the lack of scientific evidence for an intervention requires a series of studies before a large-scale study can be launched to answer important research questions.

TABLE 14-1 Comparison of Characteristics for Efficacy vs. Effectiveness Studies
Adapted from Piantadosi, S. (2005). Clinical trials: A methodologic perspective (2nd ed., p. 323). Hoboken, NJ: John Wiley & Sons; and Gartlehner, G., Hansen, R. A., Nissman, D., Lohr, K. N., & Carey, T. S. (2006). Criteria for distinguishing effectiveness from efficacy trials in systematic reviews.
Technical Review, Agency for Healthcare Research and Quality (AHRQ Publication No. 06-0046). Rockville, MD: U.S. Department of Health and Human Services.

The treatment effect size (ES) refers to the magnitude of effect produced by the intervention. Cohen (1988) and Aberson (2010) provide parameters for qualifying and quantifying the ES but caution that there are inherent risks in applying ES parameters across diverse fields of study because interpretations of ESs can vary. Cohen (1988) identifies a small ES as 0.2, a medium ES as 0.5, and a large ES as 0.8. For a standardized mean difference, a small ES of 0.2 means the intervention and comparison group means are expected to differ by 0.2 standard deviation; likewise, an ES of 0.5 represents a difference of half a standard deviation attributed to the intervention. Knowledge of the ES for any intervention is helpful and often necessary for calculating the sample size needed to provide sufficient statistical power to detect a treatment difference. The nature of the ES also varies from one statistical procedure to the next; it could be the difference in cure rates, a standardized mean difference, or a correlation coefficient. However, the function of the ES in conducting a power analysis is the same in all procedures (Aberson, 2010; see Chapter 15 for a more detailed discussion of ES and power analysis).

A placebo is an intervention intended to have no effect. However, a placebo generally looks, tastes, smells, and/or feels like the test intervention or is experienced like the real study intervention. The purpose of a placebo is to account for how study participants would respond without actually receiving the active intervention. Sham interventions, often used with procedures, are a variation of a “fake” intervention that omits the essential therapeutic element of the intervention. A sham intervention also attempts to control for the placebo effect. Rates of placebo response differ depending on the types of interventions and populations studied.
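The role of the ES in sample-size planning noted above can be illustrated with a brief sketch. The function below uses the standard normal-approximation formula for a two-sided comparison of two group means; it is a simplification of the exact calculations discussed in Chapter 15, and the function name and the chosen alpha and power values are illustrative assumptions, not taken from the chapter:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for detecting a standardized
    mean difference (Cohen's d) with a two-sided, two-group test,
    using the normal approximation: n = 2 * ((z_[1-alpha/2] + z_[power]) / d)^2.
    Hypothetical helper for illustration only."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 when alpha = 0.05
    z_power = z.inv_cdf(power)           # 0.84 when power = 0.80
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Smaller expected effects demand sharply larger samples:
print(n_per_group(0.8))   # large ES  -> 25 per group
print(n_per_group(0.5))   # medium ES -> 63 per group
print(n_per_group(0.2))   # small ES  -> 393 per group
```

Note how halving the expected ES roughly quadruples the required sample, which is why overestimating an intervention's ES can leave a study underpowered.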
For example, the rate of placebo response in symptom management research can be as high as 90% (Kwekkeboom, 1997). Very complex psychobiological responses, termed the real placebo effect, occur in the brain even when an intervention is perceived to be inert or of no known therapeutic value (Benedetti, Carlino, & Pollo, 2011). It is important to note that designs using placebo or sham interventions are typically evaluated carefully by institutional review boards (IRBs) to ensure that patients’ rights are not violated.

Treatment fidelity refers to the accuracy, consistency, and thoroughness with which an intervention is delivered according to the specified protocol, treatment program, or intervention model. Strict adherence to treatment specifications must be evaluated on an ongoing basis during the course of a study. Thus, intervention fidelity is “the adherent and competent delivery of an intervention by the interventionist as set forth in the research plan” (Santacroce, Maccarelli, & Grey, 2004, p. 63), and this fidelity is of utmost importance to a study’s internal validity in intervention-based research. Stringent controls over the implementation of study procedures are critical to the study’s integrity. Methodological approaches to treatment fidelity include education and training of all persons implementing the treatment; periodic monitoring or surveillance of the implementation of the treatment, or fidelity checks (either conspicuous or inconspicuous observation); and retraining and reevaluation of research staff if deviations from the prescribed protocol are found. Gearing et al. (2011) propose a scientific guide to treatment fidelity and discuss implications at all phases of the treatment, including: (1) implementing the design; (2) training the research staff; and (3) monitoring the delivery and receipt of the intervention.
Best practices in constructing treatment fidelity criteria and executing sound methodological procedures have been summarized for behavioral research (Bellg et al., 2004; Borrelli et al., 2005). According to Bellg et al. (2004), specific goals should direct fidelity checks: (1) ensuring the same treatment dose within conditions; (2) ensuring an equivalent dose across conditions; and (3) planning for implementation setbacks. Investigators should make provisions to accomplish these goals while formulating criteria by which fidelity, or adherence to treatment protocols, can be assessed. Often a sampling parameter is set before the study begins, specifying that a certain percentage of intervention episodes will be evaluated.

Manipulation checks are also a critical part of ensuring the integrity of study procedures. These checks are valuable for gathering information about circumstances and study conditions that might interfere with or impede implementation of the study intervention. Box 14-3 shows an example of a manipulation check used by interventionists in a study of activity interventions for nursing home residents with dementia (Kolanowski et al., 2011). This type of checklist is also useful to investigators in determining intervention frequency, dose intensity, and protocol deviations. Such information is not only critical for recording adherence to the study intervention but also might be included in data analyses to separate out or account for variations in exposure to the intervention.
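The fidelity-sampling and checklist ideas above can be sketched in code. This is a hypothetical illustration, not a tool from the chapter: the function names, the 20% sampling proportion, and the checklist items are assumptions chosen for the example:

```python
import random

def select_fidelity_sample(session_ids, proportion=0.20, seed=2024):
    """Flag a preset proportion of intervention sessions for a fidelity
    check, as specified before the study begins (hypothetical parameter)."""
    rng = random.Random(seed)  # fixed seed keeps the audit plan reproducible
    k = max(1, round(len(session_ids) * proportion))
    return sorted(rng.sample(list(session_ids), k))

def fidelity_score(checklist):
    """Proportion of protocol elements delivered as specified, given a
    dict of True/False checklist items (hypothetical item names)."""
    return sum(checklist.values()) / len(checklist)

# 20% of 50 sessions are flagged for conspicuous or inconspicuous observation:
flagged = select_fidelity_sample(range(1, 51))
print(len(flagged))  # 10

# One observed session: two of three protocol elements delivered correctly.
score = fidelity_score({"dose_delivered": True,
                        "duration_met": True,
                        "setting_per_protocol": False})
print(round(score, 2))  # 0.67
```

A deviation found in a flagged session would then trigger the retraining and reevaluation steps described above.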
Intervention-Based Research
Intervention-Based Research Conducted by Nurses
Nursing Interventions
Nursing Intervention Taxonomies
Programs of Nursing Intervention Research
Note: RCT, randomized controlled trial. Adapted from Vallerand, A. H., Musto, S., & Polomano, R. C. (2011). Nursing’s role in cancer pain management. Current Pain and Headache Reports, 15(4), 250–562.
Theory-based Interventions
Scientific Rationale for Interventions
Terminology for Intervention-Based Research
Efficacy versus Effectiveness
Criteria                   Efficacy                                     Effectiveness
Purpose                    Test a question                              Assess effectiveness
Sample size                Smallest adequate sample size                Large sample size
Study cohort               Homogeneous                                  Heterogeneous
Population                 Study population in tertiary care setting    Study population in initial care setting
Eligibility criteria       Stringent or strict                          Minimally restrictive or relaxed
Outcomes                   Single or minimal outcomes                   Multiple outcomes
Duration                   Short study duration                         Long study duration
Adverse event recording    Minimal                                      Comprehensive
Focus of inference         Internal validity                            External validity
Analysis                   Completer-only analysis                      Intent-to-treat analysis
Treatment Effect Size
Placebo and Sham Interventions
Treatment Fidelity
