PROFESSIONAL ISSUES—QUALITY, SAFETY, WORK ENVIRONMENT, AND WELLNESS
Brandis Thornton and Jaime Manley
The foundation of nursing is rooted in the “protection, promotion, and restoration of health and well-being; the prevention of illness and injury; and the alleviation of suffering” (American Nurses Association, 2015). Nurses are the frontline health professionals who provide safe, effective, and timely care in sometimes under-resourced and chaotic settings, such as the ICU. As such, nurses are obligated to participate in establishing and sustaining a culture that values and promotes quality and safety for staff and patients. The purpose of this chapter is to provide critical care nurses with an introduction to the history of the patient safety movement (including how patient safety techniques evolved out of knowledge from industries outside of healthcare), strategies to promote a culture of patient safety, and ways to implement quality-improvement (QI) methodologies in clinical care. In addition, this chapter addresses the healthy work environment (HWE), because meeting the needs of nursing staff is paramount to ensuring safe and effective patient care.
KEY STAKEHOLDERS IN THE HISTORY OF THE PATIENT SAFETY MOVEMENT
Patient safety is a relatively “young” field, with most work being done in the past 20 to 25 years (Figure 10.1).
A. Institute of Medicine
In 2000, the Institute of Medicine (IOM) published To Err Is Human: Building a Safer Health System (Kohn, Corrigan, & Donaldson, 2000). This report estimated that between 44,000 and 98,000 patients die annually as a result of medical errors. As such, its authors called for a ≥50% reduction in medical errors over a 5-year period by:
1. Establishing a national focus on patient safety
2. Identifying and learning from errors via mandatory and voluntary reporting systems (VRS)
3. Raising standards and expectations of healthcare organizations by professional groups, payers, and oversight organizations
4. Helping healthcare organizations implement systems and culture change to improve patient safety (IOM, 2000)
B. The Joint Commission
1. The National Patient Safety Goals® (NPSG) program was established by the Joint Commission (TJC) in 2002 and represents the highest priorities for patient safety across the nation. NPSG are used as part of TJC’s accreditation process for healthcare organizations. Table 10.1 outlines the 2018 NPSG for hospitals; each healthcare setting (hospitals, long-term care, ambulatory, laboratory services, etc.) has its own set of goals. TJC also issues Elements of Performance, which provide further guidance on the standards related to each NPSG (TJC, 2018).
2. In 2008, the Joint Commission Center for Transforming Healthcare was created to assist providers to transform patient care into a higher quality healthcare delivery system “by developing highly effective, durable solutions to healthcare’s most critical safety and quality problems” (Joint Commission Center for Transforming Healthcare, n.d., “Mission Statement”). The Center works collaboratively with healthcare organizations, providing support in dissemination and adoption of best practices (Joint Commission Center for Transforming Healthcare, n.d.).
C. Agency for Healthcare Research and Quality
The Agency for Healthcare Research and Quality (AHRQ) was created in 1989 as a part of the U.S. Department of Health and Human Services and was originally known as the Agency for Health Care Policy and Research. It was founded with the goal of making “health care safer, higher quality, more accessible, equitable, and affordable” (AHRQ, n.d.-a, “AHRQ Profile”) by working with the U.S. government and other partners. AHRQ contributes the following to the field:
TABLE 10.1 National Patient Safety Goals for Hospitals
Goal 1: Improve the accuracy of patient identification.
Use at least two patient identifiers when providing care, treatment, and services.
Eliminate transfusion errors related to patient misidentification.
Goal 2: Improve the effectiveness of communication among caregivers.
Report critical results of tests and diagnostic procedures on a timely basis.
Goal 3: Improve the safety of using medications.
Label all medications, medication containers, and other solutions on and off the sterile field in perioperative and other procedural settings.
Reduce the likelihood of patient harm associated with the use of anticoagulant therapy.
Maintain and communicate accurate patient medication information.
Goal 6: Reduce the harm associated with clinical alarm systems.
Improve the safety of clinical alarm systems.
Goal 7: Reduce the risk of health care-associated infections.
Comply with either current Centers for Disease Control and Prevention hand-hygiene guidelines or the current World Health Organization hand-hygiene guidelines.
Implement evidence-based practices to prevent healthcare-associated infections due to multidrug-resistant organisms in acute care hospitals.
Implement evidence-based practices to prevent central line-associated bloodstream infections.
Implement evidence-based practices for preventing surgical site infections.
Implement evidence-based practices to prevent indwelling catheter-associated urinary tract infections.
Goal 15: The hospital identifies safety risks inherent in its patient population.
Identify patients at risk for suicide.
Universal Protocol: Prevent mistakes in surgery.
Conduct a preprocedure verification process.
Mark the procedure site.
A time-out is performed before the procedure.
Source: From The Joint Commission. (2018). 2018 National Patient Safety Goals®. Retrieved from www.jointcommission.org/standards_information/npsgs.aspx
1. Publication of AHRQ Quality Indicators, which are measures that hospitals and healthcare organizations can use to benchmark themselves against other organizations in a variety of care contexts, including prevention, inpatient care, patient safety, and pediatric care
2. Development of the Consumer Assessment of Healthcare Providers and Systems (CAHPS), a patient and family experience survey tool
3. Regulation of patient safety organizations (PSOs), which are entities that “have expertise in identifying the causes of, and interventions to reduce the risk of, threats to the quality and safety of patient care” (AHRQ, n.d.-b). Healthcare organizations contract with PSOs to engage in patient safety activities, including development of protocols and recommendations, encouraging a culture of safety, and the collection and analysis of patient safety work product. By working with PSOs, organizations are protected by the Patient Safety Act from legal liability for issues that are uncovered as part of patient safety activities.
D. National Patient Safety Foundation
The National Patient Safety Foundation (NPSF) was formed to promote “patient safety by using a systems approach to analyze human or organizational errors that may lead to patient injuries and thereby discover and eliminate their root cause” (Goldsmith, 1997, p. 1561). NPSF’s contributions include:
1. Sponsorship of the annual Patient Safety Awareness Week
2. Creation of the Lucian Leape Institute, a strategic think tank
3. Establishment of the American Society of Professionals in Patient Safety (ASPPS), which provides educational resources and certification examinations in the area of patient safety (National Patient Safety Foundation, n.d.)
E. Institute for Safe Medication Practices
The Institute for Safe Medication Practices (ISMP) began medication safety efforts in 1975 and is the “nation’s only … nonprofit organization devoted entirely to medication error prevention and safe medication use” (ISMP, n.d.). ISMP’s contributions include:
1. The Medication Errors Reporting Program (MERP), which is a voluntary medication error reporting program that allows identification of trends in errors and sharing of lessons learned
2. Medication safety alert newsletters for healthcare professionals and consumers
3. Production of patient and healthcare provider education materials (ISMP, n.d.)
F. Institute for Healthcare Improvement
The Institute for Healthcare Improvement (IHI) was founded in 1991 with the mission to “improve health and health care worldwide.” IHI utilizes “the science of improvement” to drive change in healthcare systems; this science draws on principles from multiple disciplines, including systems theory, psychology, statistics, human factors, and clinical science. IHI’s commitment to improvement is evidenced in its multiple contributions to the fields of quality and safety:
1. The Breakthrough Series, IHI’s model for achieving rapid and dramatic change by forming learning collaboratives in which multiple healthcare organizations learn best practices from each other (Institute for Healthcare Improvement, 2003)
2. Open School, which offers web-based training on quality and safety topics for healthcare professionals and trainees (courses are free for IHI member institutions, and nurses can receive continuing education credit)
3. Working to achieve the Triple Aim, which improves the experience of individual patients, improves the health of the population, and reduces the cost of healthcare per capita (Berwick, Nolan, & Whittington, 2008; IHI, n.d.-b)
TABLE 10.2 Dimensions of a Culture of Safety in the Healthcare Setting
Safety Culture Element
Staff are encouraged to respectfully question each other (including those in positions of authority) and speak up when there is a safety concern.
After an event is reported, staff receive communication on process and policy changes in response to the error, and how to prevent it from happening again.
Staff feel that mistakes are not held against them, and that there is a system approach to patient safety.
The organization strives to continuously improve and learn from event reports.
There is adequate staffing to prevent errors and safety events.
Leadership at the unit and hospital levels promotes a culture of safety by prioritizing safe work, providing adequate resources, and supporting staff in managing issues.
Staff within and between units support one another, cooperate and coordinate, and treat each other with respect.
Thorough and adequate handoffs are a part of every patient transfer, shift change, and provider sign-out.
Sources: Adams-Pizarro, I., Walker, Z., Robinson, J., Kelly, S., & Toth, M. (2008). Using the AHRQ Hospital Survey on Patient Safety Culture as an intervention tool for regional clinical improvement collaboratives. In K. Henriksen, J. B. Battles, M. A. Keyes, & M. L. Grady (Eds.), Advances in patient safety: New directions and alternative approaches (Vol. 2: Culture and Redesign). Rockville, MD: Agency for Healthcare Research and Quality. Retrieved from www.ncbi.nlm.nih.gov/books/NBK43728; Burlison, J. D., Quillivan, R. R., Kath, L. M., Zhou, Y., Courtney, S. C., Cheng, C., & Hoffman, J. M. (2016). A multilevel analysis of U.S. hospital patient safety culture relationships with perceptions of voluntary event reporting. Journal of Patient Safety. doi:10.1097/PTS.0000000000000336
CREATING A CULTURE OF SAFETY IN HEALTHCARE
Singer, Lin, Falwell, Gaba, and Baker (2009) defined the safety culture of an organization as “the values shared among organization members about what is important, their beliefs about how things operate in the organization, and the interaction of these with work unit and organizational structures and systems, which together produce behavioral norms in the organization that promote safety” (p. 400). A strong safety culture has been associated with desired outcomes, including improved patient and family satisfaction and reduced patient mortality, readmissions, medication errors, and hospital-acquired pressure ulcers (PUs; DiCuccio, 2015). Table 10.2 outlines dimensions of a culture of safety in the healthcare setting. The first part of the unfolding Case Study 10.1 illustrates key points of a safety culture.
A. Person Approach Versus System Approach
Historically, “safety” in healthcare has been the responsibility of the individual practitioner, that is, with enough training and motivation, errors will not happen (Chera et al., 2016). This is known as the “person approach” to error management (Reason, 2000). However, it is unrealistic to expect humans functioning in high-risk environments like healthcare settings to do so error-free. Organizations that have adopted a culture of safety have done so by moving beyond individual blame and instead focusing on system failures (Chera et al., 2016). The “system approach” to error management is rooted in the belief that errors will happen when humans are involved, and “though we cannot change the human condition, we can change the conditions under which humans work” (Reason, 2000). The second part of Case Study 10.1 outlines some system issues that contributed to the harm event.
CASE STUDY 10.1 Patient Safety Introduction (Part 1)
Joe is a nurse orientee in a pediatric cardiac intensive care unit and is caring for a 13-year-old male in heart failure. The patient is on a ventricular assist device, and requires multiple continuous medications, including analgesia. Today is a particularly busy day, because the patient had a bronchoscopy this morning, has required multiple ventilation setting adjustments, and is being switched to a bed that will allow him to lie prone. The patient is receiving significant pain control and due to the syringe concentration is requiring syringe and line replacement every 2 hours. Prior to sending up a new syringe of hydromorphone (0.2 mg/mL) when the old one runs out, the clinical pharmacist contacts the bedside nurse (Joe’s preceptor, Liz) to ask whether the medication can be switched to a higher concentration (4 mg/mL) to allow for less frequent changes. The bedside nurse agrees that the change would be appropriate, and the pharmacist collaborates with the ordering practitioner to change the order in the electronic medical record. Around 18:00, the patient begins to decompensate, and the attending physician and multiple other staff enter the room to assist with patient management and transition to the new bed. Meanwhile, the old hydromorphone syringe runs out, and Joe goes down to pharmacy to pick up the new hydromorphone. While Liz and the physician are managing the patient, Joe and a second nurse double-check the new hydromorphone and verify that the syringe matches the concentration ordered in the electronic medical record (4 mg/mL). However, they are unable to reach the medication pump to hook up the new syringe because of space constraints in the room (multiple staff and a large rotating bed). Instead, they leave the medication at the bedside for the night shift nurse, Amy, to hang. Liz provides shift change handoff to Amy, but they aren’t able to do a typical double check of infusing medications and lines due to the patient status. 
When the patient is stabilized around 19:30, Amy hangs the syringe and restarts the pump at the previously programmed rate of 8.25 mL/hr. Amy doesn’t notice the “Note Dosage Strength” sticker on the syringe. Around 20:50, Amy performs her medication double checks with a second nurse, and they realize that the hydromorphone is being infused at a higher rate than intended.
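The magnitude of this error can be made concrete with a quick dose calculation using the values from the case study. The following is an illustrative sketch only, not a clinical calculation tool:

```python
# Dose-rate calculation for the hydromorphone error in Case Study 10.1.
# Concentrations and pump rate are taken from the case study narrative.

RATE_ML_PER_HR = 8.25   # pump rate (mL/hr), unchanged when the syringe was swapped

old_conc = 0.2          # mg/mL, original hydromorphone concentration
new_conc = 4.0          # mg/mL, replacement concentration

intended_dose = RATE_ML_PER_HR * old_conc   # mg/hr the order intended
actual_dose = RATE_ML_PER_HR * new_conc     # mg/hr actually infused

print(f"Intended: {intended_dose:.2f} mg/hr")              # 1.65 mg/hr
print(f"Actual:   {actual_dose:.2f} mg/hr")                # 33.00 mg/hr
print(f"Overdose factor: {actual_dose / intended_dose:.0f}x")  # 20x
```

Because the pump rate was never reprogrammed for the new concentration, the patient received roughly 20 times the intended hourly dose until the double check caught the discrepancy.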
B. Promoting a Just Culture
Although system failures often contribute to adverse events, caregivers must be held accountable for individual failures, when appropriate. A just culture is one that considers system failures as root causes for errors, yet also recognizes the need for individual accountability (Boysen, 2013; Petschonek et al., 2013). It is a balance between blamelessness and punitive action, and although this seems straightforward, it is often a fine line (Brink, 2017). Multiple approaches (Frankel, Leonard, & Denham, 2006; Marx, 2007) exist for application of just-culture principles in the wake of an adverse event or near miss. Figure 10.2 presents some key questions and considerations that can be useful in determining event response, and the third part of Case Study 10.1 shows how just-culture principles can be used in the case of patient harm.
CASE STUDY 10.1 System Issues Contributing to Harm Event (Part 2)
In this scenario, there are multiple system issues that contributed to this adverse drug event:
• Multiple hydromorphone concentrations are available.
• The patient environment was chaotic due to multiple staff present and patient deterioration.
• A new, unfamiliar bed type was being utilized, and staff required just-in-time training for use, adding to the chaotic nature of the day.
• The “Note Dosage Strength” sticker on the syringe was inadequate to alert the nurse that a concentration change had occurred.
• Barcode scanning and other technologies were inadequate to prevent the new hydromorphone from being infused at the rate intended for the old concentration.
• The intensive care unit culture allowed for nurses to skip or delay bedside handoff and double check of high-risk infusions.
CASE STUDY 10.1 Application of “Just-Culture” Principles (Part 3)
This scenario lends itself to application of just culture principles. Although there were multiple system failures that contributed to this event, nursing staff, particularly trainees and orientees, can be coached to improve safety practices in the future. None of the nurses involved were acting maliciously or recklessly or were under the influence of a substance. However, Joe’s manager can help him learn to assert himself by asking staff to move so he can reach the head of the bed and perform a thorough double check of the medication at the pump, and not just at the computer.
C. Understanding the Swiss Cheese Model
Indeed, the root cause of many errors in healthcare can be attributed to inadequate system design and functioning. Furthermore, it is often several system failures that occur (or system failures in conjunction with individual failures), allowing an adverse event to happen. This is known as the “Swiss cheese model.” Each line of defense in the healthcare system (training, policies/procedures, alarms, physical barriers, etc.) is akin to a slice of cheese in a stack. Holes in one slice are inconsequential because the other slices prevent the adverse event from reaching the patient. However, on occasion, the holes of the slices align such that an error finds a path through the stack of slices and affects a patient (Reason, 2000).
1. Latent Errors. These are system issues that lie dormant until they are combined with an active, individual error to cause an adverse event. Examples include inadequate policies and procedures, suboptimal staffing, an unsafe work environment, or a lack of safety culture in the organization (S. J. Collins, Newhouse, Porter, & Talsma, 2014).
2. Active Errors. These are individual failures that, when combined with latent errors, have the potential to produce harm to a patient. Factors that contribute to active errors include fatigue, stress, inexperience, inadequate training, distractions, and disregard for policies and procedures (S. J. Collins et al., 2014).
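The layered-defense logic of the Swiss cheese model can be sketched numerically. Assuming the defensive layers fail independently (a simplification) and using invented per-layer failure probabilities, the chance that an error penetrates every layer is the product of the individual probabilities:

```python
# Toy illustration of the Swiss cheese model: an error reaches the
# patient only if it passes through a "hole" in every defensive layer.
# The per-layer failure probabilities below are hypothetical.

layer_hole_prob = [0.05, 0.10, 0.02, 0.20]  # training, policy, alarm, barrier

p_harm = 1.0
for p in layer_hole_prob:
    p_harm *= p  # independent layers: multiply failure probabilities

print(f"Chance all layers fail simultaneously: {p_harm:.6f}")  # 0.000020
```

Even when each individual layer is fairly porous, stacking several layers makes a complete breach rare, which is why removing a single "slice" (e.g., skipping a bedside double check) disproportionately raises risk.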
D. Strategies to Reduce Errors
Some errors are inevitable, but many are preventable. There are some specific strategies that can be used on the front lines of healthcare to reduce the “holes in the Swiss cheese.”
1. Implement clear, evidence-based policies and procedures.
2. Develop and use checklists, especially in procedural areas (S. J. Collins et al., 2014).
3. Identify situations in which staff commonly use workarounds; adjust workflow to optimize patient safety and system capability, while reducing staff frustration and workflow burden (Seaman & Erlen, 2015).
4. Reduce practice variation and consistently implement evidence-based practice (EBP) by implementing protocols, algorithms, guidelines, clinical pathways and bundles (Buchert & Butler, 2016; Institute for Healthcare Improvement, 2017).
5. Provide distraction-free work zones, particularly in those areas in which high-risk activities occur, such as in medication preparation areas (Connor et al., 2016).
6. Provide simulation training for high-stakes, chaotic, and dynamic situations that have the potential to produce patient harm (Deutsch et al., 2016).
7. Utilize Failure Mode and Effects Analysis (FMEA), a tool that proactively assesses processes to determine risk and prevent errors or harm from occurring. FMEA is most often used for new processes or those being significantly revamped. IHI includes a more detailed FMEA template in its QI Essentials Toolkit (available to member institutions at www.ihi.org), but the basic steps are as follows (Institute for Healthcare Improvement, 2017a):
a. Convene a multidisciplinary team.
b. List all of the steps in the process of interest.
c. For each step, determine potential failure modes (how things could go wrong).
d. For each failure mode, determine causes and consequences (why things could go wrong and what would happen if they did).
e. Utilize a scoring scale to determine how likely the event is to happen, how detrimental the consequences would be if it did happen, and how likely current systems would be to detect the failure if it did happen.
f. Assign a summary “risk priority number” based on scores from the scoring scale (likelihood score × severity score × detection score).
g. Prioritize and implement action plans based on those failure modes with the highest risk priority number.
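The scoring and prioritization in steps e through g can be sketched as follows. The failure modes and the 1-to-10 scores are hypothetical, invented for illustration; an actual FMEA would use scores agreed upon by the multidisciplinary team:

```python
# Sketch of FMEA risk prioritization (steps e-g above).
# Each tuple: (failure mode, likelihood, severity, likelihood of NOT detecting),
# each scored on a hypothetical 1-10 scale.
failure_modes = [
    ("Wrong concentration dispensed", 3, 9, 7),
    ("Pump programmed with old rate", 4, 9, 8),
    ("Bedside double check skipped", 5, 8, 6),
]

# Risk priority number = likelihood x severity x detection score.
scored = [(name, lik * sev * det) for name, lik, sev, det in failure_modes]

# Step g: address the highest-RPN failure modes first.
for name, rpn in sorted(scored, key=lambda item: item[1], reverse=True):
    print(f"RPN {rpn:4d}  {name}")
```

Here "Pump programmed with old rate" (RPN 288) would be tackled first, ahead of the skipped double check (240) and the dispensing error (189).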
E. Learning From Event Reporting and Huddles
According to James Reason, creator of the Swiss cheese model, “effective risk management depends crucially on establishing a reporting culture” (Reason, 2000, p. 768). That is, system issues can only be identified when frontline staff report adverse events, near misses, or situations that have the potential to produce patient harm. VRS are commonly used in healthcare organizations to capture these safety events. However, there are multiple barriers to staff use of VRS, including (N. Miller et al., 2017):
1. Fear of the consequences, which can include disciplinary action and negative reactions from patients or colleagues: “I don’t want to be blamed for something that wasn’t my fault.”
2. Lack of feedback about the root cause of the event and how it can be prevented in the future: “No one ever keeps staff in the loop of what’s being done to prevent events like this from happening again.”
3. Lack of understanding about what types of events (e.g., near misses) should be reported: “The patient wasn’t actually harmed, so I don’t need to report it.”
4. Doubt that action will be taken in response to the event report: “Those event reports go into a black hole and no one ever looks at them or does anything about them.”
Hospitals can overcome lack of VRS use by enhancing elements of their safety culture (Table 10.2). Literature suggests that increased voluntary reporting is associated with staff perceptions of feedback about errors, management support for patient safety, organizational learning, a just culture, and teamwork within a given unit (Burlison et al., 2016; N. Miller et al., 2017).
“Huddles” or debriefings may be a useful way to follow up with staff after event reports. This opportunity can allow for event investigation and brainstorming of prevention strategies, and/or can instill a sense of closure after a particularly traumatic event (McQuaid-Hanson & Pian-Smith, 2017). Although debriefing is most often used in simulation training, adverse events have been reduced after implementation of a standardized huddle process (Blankenship, Harrison, Brandt, Joy, & Simsic, 2016; Morvay et al., 2014).
F. Safety II: The New Frontier
“Safety I” is an umbrella term for the historical approach to patient safety and is characterized by a focus on failures. Safety I seeks to eliminate adverse events through identification of potential or actual adverse events, determination of the root cause of the event, and development of plans to mitigate future risk (which often involves training, work standardization, or development of policies and procedures; Braithwaite, Wears, & Hollnagel, 2015; McNab, Bowie, Morrison, & Ross, 2016; Patterson & Deutsch, 2015). However, multiple experts in the patient safety field have begun to question whether the Safety I approach is adequate for healthcare for the following reasons:
1. Patient safety is not just the absence of adverse events. Without minimizing the seriousness of adverse events, it is clear that most interactions between a patient and the healthcare system actually end in a safe outcome. Patient safety must also be the study of how and when things go right (Braithwaite et al., 2015; McNab et al., 2016; Patterson & Deutsch, 2015).
2. In traditional safety thinking, it is assumed that if the system inputs function correctly (technology, caregivers, protocols, etc.), the system will yield outputs without defect (i.e., patient encounters without adverse events). In fact, there is evidence to the contrary. Even in systems in which components function as intended, adverse events can happen, “due to the way that goals and context change” (McNab et al., 2016, p. 446).
3. Policies, procedures, and standardization of work (via protocols, guidelines, algorithms, etc.) are not fool-proof methods of ensuring patient safety.
a. These tools can rarely account for all context variations and individual patient scenarios. As such, humans will always be relied upon to be agile and innovative in healthcare settings where there is no guidance, or where traditional guidance is not relevant because of the setting or context (e.g., resource limitations, dramatic changes in patient volume; McNab et al., 2016).
b. In some cases, work standardization (e.g., checklists) can prevent an optimal patient outcome, because the checklist is completed without critical thinking and situational awareness. That is, the caregiver’s attention is on the checklist and not the patient. The caregiver may miss subtle clinical signs and may have a false sense of security in the checklist (Patterson & Deutsch, 2015).
c. If protocols are blindly followed for the sake of compliance, the system may miss an opportunity to implement an adaptation that would actually result in an improved patient outcome (McNab et al., 2016).
4. Methodologies used in Safety I were conceived in industries that are less complex and operate in a more linear fashion than healthcare.
a. Healthcare is an increasingly complex environment with multiple stakeholders, including providers, patients, regulatory organizations, and payers. There are multiple external influences, including the media, the season, and even the weather (McNab et al., 2016).
b. Events in healthcare occur in a nonlinear fashion (Patterson & Deutsch, 2015), unlike a car manufacturing plant, where A always happens before B, which always occurs before C, and so on.
c. It is generally not possible to break down a healthcare system into its individual components in order to understand how each part and unit work together (McNab et al., 2016); the whole is more (complex) than the sum of its parts.
d. The industry’s substrates (patients) are unique and may react in different ways to standard work (for instance, a patient may refuse the standard of care or may react differently than another patient to a commonly used medication).
e. In healthcare, functioning is usually compared with “work-as-imagined” rather than “work-as-done” (McNab et al., 2016). “Work-as-imagined” often fails to consider the real-world situations that caregivers face, such as staffing shortages, medication and resource shortages, equipment malfunction, patient treatment adherence, and so on (Braithwaite et al., 2015).
f. Because of some of the limitations of Safety I, patient safety experts have begun to advocate for a balanced approach between traditional tenets of Safety I and the more contemporary thinking of “Safety II.” Safety II is the study of why and how things go right in healthcare (Braithwaite et al., 2015; McNab et al., 2016; Patterson & Deutsch, 2015). Safety II was borne of “resilience engineering,” a field that purports that “things usually go right because people adjust their performance to the everyday conditions they face” (McNab et al., 2016, p. 444). Those everyday conditions are “work-as-done.” Some key principles of Safety II include (Braithwaite et al., 2015; McNab et al., 2016; Patterson & Deutsch, 2015):
i. Maximizing the number of events that end successfully
ii. Identifying determinants of success (workarounds, tradeoffs, etc.) that are difficult to pinpoint in complex systems like healthcare
iii. Promoting resilience, which is the ability to react and adjust to changes and “thereby sustain required operations under both expected and unexpected conditions” (Patterson & Deutsch, 2015, p. 387)
iv. Understanding how variations in care may be warranted and should not be seen simply as blatant, unwarranted deviations from the standard of care
g. Safety II is not meant to replace Safety I; rather, the two are meant to exist in a complementary fashion to achieve desired outcomes. It is undoubtedly necessary to understand why adverse events occur and to try to prevent them from happening in the future; however, this approach does not tell the whole story of patient safety. Focusing only on principles of Safety I may cause us to miss the opportunity to learn how caregivers in the healthcare system can adapt in dynamic, unexpected, and resource-constrained situations to achieve optimal outcomes. McNab et al. (2016) and Patterson and Deutsch (2015) suggested the following strategies to introduce Safety II into healthcare:
i. While investigating adverse events, also consider cases that yielded a successful outcome.
ii. Do not automatically assume that standardization is the best way to react to an adverse event. We should “prioritize managing variability rather than simply eliminating it. Flexible ways of working that are beneficial can be encouraged (as long as people are mindful of risks and responsibilities)” (McNab et al., 2016, p. 447).
iii. Incorporate resilience engineering activities into adverse event prevention efforts by practicing skills and critical thinking (i.e., simulation and real-time feedback) and debriefing after events (part four of Case Study 10.1).
CASE STUDY 10.1 Event Response and Follow-Up (Part 4)
As soon as the error is realized, management reports the event in the hospital’s event reporting system. Medication safety leadership begins to investigate the event by reviewing the infusion pump memory and the electronic medical record. A huddle is convened within a week of the incident, and relevant parties work together to discover elements of the event’s context that led to the adverse event (system failures) as well as opportunities in which individuals could have prevented the error. The team also considers other high-risk situations in which staff naturally adapt to prevent an error like this from happening (they attempt to learn from successes). An action plan is developed, and staff are educated about the incident and prevention strategies.
G. Safety as a Domain of Quality
Quality and safety are terms often used together, and indeed the concepts are closely intertwined. Many, including the IOM (2001), consider safety to be an element of quality (Figure 10.3). The World Health Organization [WHO] (2009) defines quality as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.” Although WHO (2009) uses a Safety I-minded definition of patient safety (“the reduction of risk of unnecessary harm associated with healthcare to an acceptable minimum”), we know that an emerging definition of patient safety (in accordance with Safety II) is centered around maximizing the number of healthcare occurrences that end successfully. Patient safety’s new focus on increasing desirable outcomes rather than minimizing failures further aligns healthcare safety with healthcare quality.
As presented in Figure 10.3, the five other quality domains according to IOM (2001) are:
1. Effective. Providing evidence-based care to those patient populations that would benefit
2. Patient Centered. Providing care that takes into consideration the patient’s wants, needs, values, and beliefs and involving the patient as a key decision maker in all treatment decisions
3. Timely. Reducing delays in care provision
4. Efficient. Reducing waste (including equipment, supplies, manpower, time, etc.)
5. Equitable. Providing care to all patients, regardless of gender, socioeconomic status, race, ethnicity, or location
LEARNING FROM ULTRA-SAFE, HIGH-RELIABILITY INDUSTRIES
Certain industries, including nuclear power, commercial aviation, and railway transportation, are considered “ultra-safe,” with less than one death per million exposures (Amalberti, Auroy, Berwick, & Barach, 2005). These industries are also considered high-reliability organizations (HROs) because their operations are high risk, yet have very few adverse events (Amalberti et al., 2005; Lekka, 2011; Yip & Farmer, 2015). However, this has not always been the case. For instance, in 1959, U.S. airline passengers experienced 40 fatal crashes per 1 million flights. By 1970, this rate had fallen to fewer than two per 1 million flights, and today it stands at 0.1 per 1 million flights (S. Collins, 2015). Undoubtedly, improvements in aviation technology have contributed to this drastic improvement in passenger safety, but work done in the area of human factors deserves equal attention, particularly when considering applications of aviation safety work in the field of healthcare.
In the 1970s, the National Aeronautics and Space Administration (NASA) determined that 70% of aviation safety events were due to human errors, more specifically, to failures in teamwork, decision making, and leadership (American Psychological Association, 2014). As such, multiple strategies and tools have emerged to establish a culture of safety in the aviation industry (MacDonald, 2016; Rutherford, 2003).
A. Strategies to Achieve a Culture of Safety in the Aviation Industry
1. An understanding that errors will happen when humans are involved; HROs have a preoccupation with failure (Reason, 2000) and don’t just accept that failures are part of doing business (Chassin & Loeb, 2013)
2. Increase in transparency and establishment of the expectation that events and “near misses” be reported (Rutherford, 2003)
3. Increase in standardization of work and processes (reduction in autonomy of the pilot) (Rutherford, 2003) coupled with deference to expertise (usually at the front line and not at the top of the organizational hierarchy), when appropriate (Chassin & Loeb, 2013)
4. Leveraging the power of the team, rather than an individual, to optimize safety and prevent errors
5. “Sensitivity to operations,” which means the organization is in tune to even small anomalies that can evolve into major issues (Chassin & Loeb, 2013)
6. Introduction of crew resource management (CRM), which encompasses use of all available resources (human, data, technology, equipment, etc.) to optimize safety (Rutherford, 2003); rather than focusing on the technical skills of the pilot, CRM teaches the crew how to leverage cognitive (decision making and situational awareness) and interpersonal skills (communication and teamwork) to prevent errors; key principles of CRM include (Crew Resource Management, n.d.-a, n.d.-b):
a. Leadership and teamwork
b. Effective and standardized communication
c. Situational awareness
d. Informed decision making
e. Team briefings and debriefings
f. Conflict resolution
g. Use of critical language
h. Threat and error management
i. Recognition of stress and fatigue
TABLE 10.3 Similarities Between the Aviation and Healthcare Industries
• The stakes are high
• Life-and-death decisions must be made quickly
• Technology is heavily relied upon, but can be lethal when misused
• Operators are highly skilled, independent, and assertive
• Traditionally hierarchical in nature, with deference to the team leader (physician or pilot)
• Subject to overcrowding and service delays
Sources: McKeon, L. M., Cunningham, P. D., & Detty Oswaks, J. S. (2009). Improving patient safety: Patient-focused, high-reliability team training. Journal of Nursing Care Quality, 24(1), 76–82. doi:10.1097/NCQ.0b013e31818f5595; Rutherford, W. (2003). Aviation safety: A model for health care? BMJ Quality and Safety in Health Care, 12, 162–163. doi:10.1136/qhc.12.3.162
B. Translation to Healthcare
There are many similarities between aviation and healthcare (Table 10.3). As such, patient safety experts have sought to translate aviation tools and processes into the healthcare setting (Table 10.4). However, there are also important differences between the two industries. There is a fundamental tradeoff between productivity and safety (sometimes referred to as the efficiency–thoroughness tradeoff [McNab et al., 2016]), and in healthcare, the pendulum does not always swing in the direction of safety. If an airline crew has exceeded work time limits, flights can be delayed or canceled; however, if a surgeon has been operating all night or there are staffing shortages, emergency surgeries and care of the ICU patient cannot be postponed, even when patient safety concerns exist.
Working with experts in high-reliability science, The Joint Commission (TJC) Center for Transforming Healthcare suggests the following key elements for healthcare organizations seeking to become HROs:
1. The leadership of the organization must commit to prioritizing elimination of patient harm. This fulfills one of the characteristics of an HRO—preoccupation with failure and an unrelenting desire to improve performance and safety (Chassin & Loeb, 2013).
2. The organization must embrace and perpetuate a culture of safety. This likely involves assessing the current state using a safety culture survey, but should not stop there. Instead, leaders should use survey results to determine opportunities for improvement in the areas of teamwork, communication, respect, training, and stress recognition (Chassin & Loeb, 2013).
3. The organization must adopt and disseminate effective methods for quality and process improvement. TJC calls this “robust process improvement” and asserts that a blend of Lean, Six Sigma, and change management methodologies is most effective (Chassin & Loeb, 2013).
TABLE 10.4 Aviation Safety Strategies and Barriers to Their Implementation in Healthcare

Transparency and event reporting
• Aviation practices: event reporting systems that facilitate identification of system issues; safety records are publicly available; event reporting is mandatory, not optional
• Barriers to implementation in healthcare: providers are hesitant to self-report due to fear of consequences (e.g., litigation, patient reaction, or judgment by peers)

Standardization of work
• Aviation practices: use of checklists; use of algorithms, decision trees, and protocols; the idea that the success of the event (flight, surgical procedure, etc.) is not dependent on the players (i.e., one pediatric surgeon is as good as another)
• Barriers to implementation in healthcare: providers are resistant to a reduction in professional autonomy; patients have a relationship with a particular physician and would not willingly accept a substitution

Leveraging the team
• Aviation practices: crew members are expected to “speak up” if there is a safety concern; any crew member can halt operations
• Barriers to implementation in healthcare: healthcare has a hierarchical nature
Source: Chera, B. S., Mazur, L., Adams, R. D., Kim, H. J., Milowsky, M. I., & Marks, L. B. (2016). Creating a culture of safety within an institution: Walking the walk. Journal of Oncology Practice, 12(10), 880–883. doi:10.1200/jop.2016.012864
APPLYING QI METHODOLOGY TO IMPROVE PATIENT SAFETY
Multiple models exist to facilitate QI efforts (Table 10.5). Six Sigma and Lean are the most recognized, and both were developed for manufacturing settings (electronics and automotive, respectively). The Model for Improvement (MFI) was developed more recently for a variety of settings, including healthcare (Langley et al., 2009), and it is the model adopted by the Institute for Healthcare Improvement (IHI). This section focuses on using the MFI to drive change in hospitals, although most healthcare organizations blend tools and approaches from more than one model.
A. Assemble a Project Team
Prior to beginning QI efforts, it is critical to assemble a project team of relevant stakeholders, subject matter experts, and frontline staff who understand work-as-done. Including these individuals increases buy-in and enhances the project’s likelihood of success (Crowl, Sharma, Sorge, & Sorensen, 2015; Dixon & Pearce, 2011). Case Study 10.2 introduces a project to reduce pressure injuries in the ICU setting.
B. Scope and Plan the Project
The MFI is shown in Figure 10.4. The first part of the model involves answering three questions to scope and plan the project, and the second part utilizes multiple, iterative Plan–Do–Study–Act (PDSA) cycles to attempt small tests of change before changes are implemented system-wide.
1. What are we trying to accomplish?
By answering this question, the project team determines the project aim statement (Langley et al., 2009). An effective aim statement should be SMART—specific, measurable, attainable, realistic, and time-bound. Aim statements should identify the population of interest for the project, as well as the baseline and goal levels. The second part of Case Study 10.2 provides an example of an appropriate aim statement.
2. How will we know that a change is an improvement?
The second step involves development of measures for the project. Measurement is a critical part of any QI project because it tells the team whether changes are producing the desired result. There are three types of measures commonly used in QI work (Langley et al., 2009):
a. Outcome measures. These quantify how the system affects patients (e.g., average cholesterol levels, percentage of patients readmitted to the hospital within 7 days of discharge, adverse event rates, and mortality rates).
b. Process measures. These assess whether the components of a system are performing as intended or whether processes are completed as intended (e.g., percentage of patients who receive nutrition counseling, percentage of patients who receive standardized medication reconciliation upon discharge, and adherence to medication barcode scanning procedures).
TABLE 10.5 Models to Facilitate Quality-Improvement Efforts

Model for Improvement
• Part 1: Addresses three basic questions:
a. What are we trying to improve?
b. What can we measure so we know that we’re making an improvement?
c. What changes can we make to drive improvement?
• Part 2: Uses PDSA cycles as small tests of change to rapidly pilot potential interventions in real-world situations

Six Sigma
• Focuses on reducing process variation to less than 3.4 defects per million opportunities

Lean
• Focuses on culture and behavior change to identify and eliminate waste in a process

PDSA, plan, do, study, act.
CASE STUDY 10.2 An Example of a Quality-Improvement Project to Reduce Pressure Injuries (Part 1)
In the pediatric ICU (PICU) at a children’s hospital, the pressure injury rate is two injuries per 100 patient days, almost twice the benchmark rate for other children’s hospitals. The nursing leadership of the PICU determines this to be a focus area for QI efforts. To begin the project, the PICU manager assembles a team of bedside nurses, attending physicians, advanced practice nurses (APNs), respiratory therapists, and wound/ostomy care nurses, all of whom are passionate about reducing preventable harm.
CASE STUDY 10.2 Sample Aim Statement for a Pressure Injury Reduction Project (Part 2)
The PICU pressure injury team develops the following aim statement for their project:
Reduce pressure injury (stage II–IV, unstageable, and deep tissue injury) rates in the pediatric ICU population from a baseline of two injuries per 100 patient days to a goal of one injury per 100 patient days in the next 12 months and sustain this change for 12 months.
c. Balancing measures. These assess whether changes to one part of the system inadvertently and negatively affect another part of the system (e.g., decreasing hospital readmission rates inadvertently increases hospital length of stay).
3. What changes can we make that will result in improvement?
Teams can determine potential interventions by first exploring why the system isn’t currently performing at desired levels. Multiple tools can be employed; three of the most commonly used are presented here (Langley et al., 2009):
a. Fishbone (Ishikawa or cause-and-effect) diagrams (Figure 10.5). This type of diagram groups potential reasons for suboptimal performance into like categories. Not all individual reasons or whole categories are modifiable or can have associated interventions, but it is useful to put them on the diagram in order to understand all of the factors that affect the system.
b. Process-flow map. This is a flow chart that depicts the steps in a process and is useful in determining waste or inefficient processes.
c. Pareto charts (Figure 10.6). This type of chart quantitatively depicts characteristics of a given system, such as reasons for pressure injuries, and can help a project team determine where to target initial interventions. The Pareto rule states that 80% of problems (or variation) are due to 20% of causes (part 4 of Case Study 10.2).
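The Pareto rule is straightforward to compute once cause counts are tallied. The sketch below (in Python; the cause names and counts are invented for illustration, not drawn from the case study data) sorts causes by frequency and accumulates percentages until the "vital few" responsible for roughly 80% of events are identified:

```python
from collections import Counter

# Hypothetical pressure-injury cause counts (illustrative only)
causes = Counter({
    "immobility": 22,
    "nasogastric tube": 9,
    "respiratory device": 8,
    "IV securement": 3,
    "monitoring leads": 2,
    "other": 1,
})

total = sum(causes.values())
cumulative = 0.0
vital_few = []  # the causes that together account for ~80% of injuries
for cause, count in causes.most_common():
    cumulative += 100 * count / total
    vital_few.append((cause, count, round(cumulative, 1)))
    if cumulative >= 80:
        break

for cause, count, cum_pct in vital_few:
    print(f"{cause}: {count} injuries, cumulative {cum_pct}%")
```

With these invented counts, three causes cross the 80% threshold, mirroring the handful of leading causes a project team would target first.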
Dixon and Pearce (2011) provide additional tools that can be used in healthcare QI, as well as a matrix to help the reader decide which tools are helpful in which improvement situations. For member institutions, IHI offers a QI Essentials Toolkit, which contains templates for common QI tools, on its website (www.ihi.org).
The results borne of these tools can help the group determine the project’s key drivers, which are the main leverage points that, when optimized, should result in aim achievement. Key drivers are the “whats.” Once key drivers are determined, interventions (the “hows”) relating to each driver can be determined and prioritized. All of these elements can be depicted on a key driver diagram (KDD; Figure 10.7), which serves as an ever-evolving, dynamic project roadmap (fourth part of Case Study 10.2).
C. Begin Small Tests of Change
A PDSA cycle is a powerful tool used to rapidly test potential process changes and interventions and to obtain knowledge to answer any of the three questions that are part of the MFI. Of note, some interventions will not require PDSA cycles, but many will, especially those that are complex and affect many stakeholders. PDSA cycles are particularly useful for understanding whether a particular intervention can produce the desired improvement in a certain context and for evaluating the unintended consequences and costs of a process change. Cycles can begin with an “n of 1” and ramp up in an iterative fashion, affecting more patients and broader contexts (Langley et al., 2009). Unlike traditional research methodology, PDSA cycles do not require one large, controlled, randomized intervention. Rather, interventions evolve over time as the team collects data and learns from prior cycles (IHI, n.d.-a).
CASE STUDY 10.2 Measures for a Pressure Injury Reduction Project (Part 3)
The PICU pressure injury team chooses the following measures for their project:
Outcome measure: Pressure injury rate per 100 PICU patient days
Process measure: Percentage of patients following the evidence-based practice prevention bundle
Balancing measure: Unplanned extubation rate associated with increased frequency of patient turning and repositioning
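The three measures chosen above reduce to simple rate arithmetic. The following sketch (in Python, with invented monthly counts; the variable names are ours, not from the text) shows how each would be calculated:

```python
# Hypothetical monthly data for the PICU project (all numbers invented)
injuries = 4                # stage II+ pressure injuries this month
patient_days = 310          # PICU patient days this month
bundle_followed = 265       # patient days on which the prevention bundle was followed
unplanned_extubations = 1   # extubations associated with repositioning

# Outcome measure: pressure injury rate per 100 patient days
injury_rate = 100 * injuries / patient_days

# Process measure: percentage of patient days adherent to the prevention bundle
bundle_adherence = 100 * bundle_followed / patient_days

# Balancing measure: unplanned extubation rate per 100 patient days
extubation_rate = 100 * unplanned_extubations / patient_days

print(f"Outcome:   {injury_rate:.2f} injuries per 100 patient days")
print(f"Process:   {bundle_adherence:.1f}% bundle adherence")
print(f"Balancing: {extubation_rate:.2f} unplanned extubations per 100 patient days")
```

A month with 4 injuries over 310 patient days yields a rate of 1.29 per 100 patient days, between the case study’s baseline of two and its goal of one.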
CASE STUDY 10.2 Intervention Planning for the Pressure Injury Reduction Project (Part 4)
The PICU pressure injury team brainstorms potential reasons for events in their population and develops the cause-and-effect diagram seen in Figure 10.5. The Pareto chart (Figure 10.6) is created based on the baseline data and shows that about 80% of the injuries are caused by three factors: immobility, nasogastric tubes, and respiratory devices. Using the categories in the cause-and-effect diagram, key drivers are identified and a KDD (Figure 10.7) is created. Based on the most common causes of pressure injuries, the group brainstorms and prioritizes interventions in order to begin its first PDSA cycle.
CASE STUDY 10.2 A Plan–Do–Study–Act Cycle for Pressure Injury Reduction (Part 5)
Because most of the pressure injuries in the PICU are due to immobility, the PICU pressure injury team decides to design and test an intervention that would affect immobility-related pressure injuries. More specifically, they decide to test a turning schedule, which relates both to their key driver of staff engagement and education and to that of evidence-based procedures and practices. Their PDSA cycle proceeds as follows:
Plan:
a. Objective: Test the implementation of a turning schedule on PICU patients
b. Who: Sarah Smith, RN
c. What: Turn one PICU patient every 2 hours
d. When: During her 12-hour shift on February 1
e. Hypothesis: We hypothesize that Sarah will be successful in turning her dayshift patient every 2 hours
Do:
a. On February 1, Sarah completed the test of change with a 1-month-old infant with increased work of breathing due to respiratory syncytial virus.
Study:
a. At first, Sarah found it hard to remember to turn the patient every 2 hours. This became easier once she added it to her to-do list. The patient’s mother was hesitant to let Sarah disturb the baby while he was sleeping.
Act:
a. Modify the turning protocol to every 3 to 4 hours for infants, so that turning can be aligned with other care activities, such as feedings and diaper changes.
b. Sarah will try the new schedule with her next infant patient and will enlist another bedside nurse to implement the turning schedule with one of her patients.
The PDSA cycles continue until the new turning schedule (every 2 hours for older children and every 3 to 4 hours for infants) is spread throughout the PICU. With the new schedule in place, the PICU finds stage I pressure injuries earlier than before and can implement interventions to prevent progression to stage II and higher. Over time, the rate of pressure injury due to immobility decreases, and other critical care units in the hospital become interested in implementing a similar turning schedule. The PICU moves on to PDSA cycles for other interventions that could prevent injuries due to nasogastric tube and respiratory device securement. The team tracks the outcome measure (pressure injury rate), the process measure (adherence to the turning schedule protocol), and the balancing measure (rate of unplanned extubations due to patient repositioning).
The steps of a PDSA cycle are outlined as follows and are exemplified in the fifth part of Case Study 10.2 (Langley et al., 2009):
Plan:
a. Determine the purpose of the test.
b. Generate a hypothesis about what will happen and why.
c. Develop a plan (who, what, where, and when) to implement the test of change.
Do:
a. Try out the test on a small scale (e.g., one patient, one clinic session, one provider, or one set of morning rounds).
b. Document the outcome.
Study:
a. Compare the data with the prediction.
b. Summarize the findings.
Act:
a. Adjust the change based on the results of the “study.”
b. Plan for the next test, which could involve:
i. Retooling the intervention and trying again on the same scale
ii. Refining the intervention and trying again on a slightly larger scale
iii. Spreading the intervention in varied contexts
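Teams that run many iterative cycles often keep a structured log of each one. The sketch below (a hypothetical record layout; the field names are ours, not a standard) captures the four steps for the first cycle described in the fifth part of Case Study 10.2:

```python
from dataclasses import dataclass

@dataclass
class PDSACycle:
    """One small test of change, logged in Plan-Do-Study-Act order."""
    objective: str         # Plan: purpose of the test
    hypothesis: str        # Plan: prediction of what will happen and why
    who_what_when: str     # Plan: logistics of the test
    observations: str      # Do: what happened; documented outcome
    findings: str          # Study: data compared with the prediction
    next_step: str         # Act: retool, refine and scale up, or spread

# Cycle 1 of the turning-schedule test (summarized from the case study)
cycle1 = PDSACycle(
    objective="Test the implementation of a turning schedule on PICU patients",
    hypothesis="One nurse can turn her dayshift patient every 2 hours",
    who_what_when="Sarah Smith, RN; one patient; 12-hour shift on February 1",
    observations="Turning completed, but hard to remember until added to a to-do list; "
                 "the patient's mother was hesitant to disturb a sleeping infant",
    findings="Feasible for one nurse; infant turns should align with feeds and diaper changes",
    next_step="Refine: every 3 to 4 hours for infants; enlist a second bedside nurse",
)
print(cycle1.next_step)
```

Keeping each cycle in a consistent structure makes it easier to review how the intervention evolved across iterations.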
D. Measure Changes Over Time
The most widely used method of tracking changes in data over time is the run chart (sixth part of Case Study 10.2; see Figure 10.8). A run chart is a graphical display of data organized in sequence, with the x-axis representing a unit of time or sequence (months, quarters, days, or consecutive patients) and the y-axis representing the measure being tracked (infection rate, percentage of patients adherent to a treatment bundle, satisfaction scores, etc.). Run charts are powerful because data are plotted over time, so QI teams can see changes in measures in relation to PDSA cycles and intervention implementation. In addition, established rules of run chart interpretation allow project teams to distinguish between random and nonrandom variation in their data. Identification of nonrandom variation signals a true change in the system, which could be related to intended changes as part of a QI project or to confounding variables. Two of the most common signals of nonrandom variation are the shift (six or more consecutive points all above or all below the median) and the trend (five or more consecutive points all increasing or all decreasing; Perla, Provost, & Murray, 2011).
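Once measure values are recorded in sequence, the shift and trend signals can be checked programmatically. The sketch below assumes the commonly cited thresholds of six consecutive points for a shift and five for a trend; published rule sets vary slightly (e.g., in how points on the median or tied values are handled), so treat this as illustrative rather than definitive:

```python
from statistics import median

def detect_shift(points, run_len=6):
    """Shift: run_len or more consecutive points all above or all below the
    median. Points exactly on the median neither add to nor break a run."""
    med = median(points)
    run, side = 0, 0
    for p in points:
        if p == med:
            continue
        s = 1 if p > med else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_len:
            return True
    return False

def detect_trend(points, run_len=5):
    """Trend: run_len or more consecutive points all increasing or all
    decreasing. Tied consecutive values neither extend nor reset the run."""
    run, direction = 1, 0
    for prev, cur in zip(points, points[1:]):
        if cur == prev:
            continue
        d = 1 if cur > prev else -1
        run = run + 1 if d == direction else 2  # a new direction restarts at 2 points
        direction = d
        if run >= run_len:
            return True
    return False

# Hypothetical monthly injury rates: a sustained drop after an intervention
rates = [2.1, 1.9, 2.2, 2.0, 2.1, 2.3, 1.0, 0.9, 1.1, 0.8, 1.0, 0.9]
print(detect_shift(rates))  # a sustained run on one side of the median: nonrandom variation
print(detect_trend(rates))  # no run of five consecutively rising or falling points
```

In a real project, these checks would be applied to the plotted run chart data each month, flagging when a QI team’s interventions (or a confounding variable) have produced a true change in the system.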