It’s My Fault: Understanding the Role of Personal Accountability, Mental Models and Systems in Managing Sentinel Events



Fig. 39.1
RCA timeline of the event



The second meeting, intended to ascertain why the event occurred, never happened: Kelly’s self-assignment of blame was accepted as the root cause. Table 39.1 compares the systems issues identified by the RCA team in the group meeting with those that emerged from Kelly’s interview.


Table 39.1
Comparison of factors from the RCA and the clinician interview

| Contributing factors identified during RCA | Factors identified from clinician interview |
|---|---|
| Inadequate space in the ORs with robots | Financial decision changes operative location for high-risk patients |
| Patient is moved off OR table in acute distress | Anesthesiologists aren’t informed of the change for high-risk patients |
| Inadequate resources for moving obese patients | Forms for assignment of patient’s operative location aren’t changed |
| No established process for patients that can’t be ventilated or intubated | Intubated patients in MISS PACU are recovered in OR |
| Distance for hospital resuscitation team to reach the MISS is excessively long | Lack of respiratory therapy support for MISS patients |
| Cricoid insertion fails when excessive neck adipose tissue and internal swelling are present | Lack of access to main PACU level of care |
|  | Need to justify clinical decisions which impact financial outcomes |
|  | Charges assigned to individuals for lost productivity |
|  | Production pressures |

The change to the operative location for high-risk patients uniquely contributed to the event. The hospital had one robot for urology and one for gynecology. Robotic procedures for low-risk patients were performed in the MISS, while higher-risk patients (i.e., ASA class 3–4) had their procedures in the main OR. The robots were moved between the MISS and the main OR to accommodate the cases. Transporting this expensive equipment two blocks, on and off elevators, resulted in damage, costly repairs, and equipment downtime. Cases were reassigned to a non-robotic approach when the equipment malfunctioned or was found damaged at the start of the case. The surgeons submitted a request to the Resource Analysis Committee (RAC) for more robots to eliminate the need for cross-campus relocation. The committee was composed of financial and administrative staff who reviewed the fiscal implications of new programs, expensive equipment purchases, processes that met outlier criteria for higher-than-expected costs, and any other high-cost problems referred for review. There were no clinicians on the committee, as the focus was financial rather than clinical. The resource committee referred the request for the two additional robots to the capital strategic planning committee, which met annually to align the purchase of expensive technology with the programmatic mission. The capital strategic planning committee wouldn’t consider the purchase for several months and, if it was approved, it would be several more months before the equipment arrived.

In the interim, the resource committee recommended that the equipment be permanently located in the MISS, where all cases would be performed. This would reduce the costs associated with repairs, lost equipment time, and rescheduled procedures. In theory, the resource committee made recommendations for the clinicians’ consideration. In reality, the power brokers who sat on the committee viewed challenges to their decisions as a lack of commitment to the organization’s fiscal viability. The word on the street was to comply rather than engage in a futile argument. The two surgical chairs from urology and gynecology notified the affected surgeons that, going forward, all robotic urology and gynecology cases would be performed in the MISS. The anesthesiologists weren’t included in this communication. The anesthesiologist who screened Evelyn in preadmission testing indicated on the form that the procedure was to be performed in the main OR, unaware that the OR assignment would be ignored.



The Role of Mental Models


It was 2 weeks from decision to impact. The screening anesthesiologist had refined the MISS triage criteria to identify low-risk patients with exquisite precision. In post-event reviews no one could recall the last time a patient was sent to the main PACU for extended ventilation or other problems. Kelly’s knowledge that only low-risk patients had surgery in the MISS supported his mental model of care delivery for Evelyn. Mental models are formed by an individual’s professional knowledge, experience, and the systems in which they work (i.e., group dynamics, organizational rules, managerial implementation of work practices, and institutional culture) [2, 15–19]. They constitute a person’s beliefs about how to respond in a given situation, converting organizational policies and procedures into a functional reality. Mental models are incomplete, unstable, dynamic, and evolving, and they contain gaps as clinicians cope with the messy, uncertain complexities of clinical practice [2, 15–19]. Components of Kelly’s mental model included the organization’s emphasis on efficient throughput, the lack of resources to manage patients on a ventilator, the screening process that ensured only low-risk patients had surgery in the MISS, production pressures with punitive enforcement, and an organizational culture that valued financial priorities. His mental model was deeply entrenched in his subconscious and gave rise to a pre-compiled response [20].

A pre-compiled response has been described as “recognition-primed decision-making” acquired through personal experience [21]. In other words, our prior interactions build patterned responses to similar situations. Pre-compiled responses are quick, intuitive, carry a low cognitive burden, and are highly effective in familiar situations [21]. Evelyn was successfully intubated and her procedure was uneventful. Kelly reflexively extubated her just as he had hundreds of patients before. His recognition-primed decision-making was tuned to the low-risk patients seen every day in the MISS; he was unaware that Evelyn didn’t fit this picture. When Evelyn struggled to sit up, Kelly’s response was to assist her. Because Evelyn’s distress was so immediate, he had no time to process a change to his mental model. Once the new, unexpected reality of the situation registered, however, critical thinking kicked in. He deployed the resuscitation team and summoned help. Given the limitations of the MISS environment, his management of Evelyn’s distress was appropriate. To achieve a different outcome, Evelyn should have remained intubated until her airway swelling resolved, or clinicians skilled in surgical airway procedures should have been present during her extubation. This would have required an awareness of Evelyn’s risk status and collaborative preplanning prior to her surgery. This form of system redesign is intended to create a new mental model. Successful system redesign requires detecting the faulty systems that contributed to the event and thinking about how the new system will confer a different mental model on the providers.


Discovering Flawed Systems


Systems are the foundations of our mental models, dictating how clinicians respond in a given situation [2, 15–19, 21]. Organizational learning about how to prevent future harm emerges from discovering how individuals transform systems “from work as imagined to work as actually performed” (i.e., their mental models) [21]. Uncovering how clinicians navigate the systems that the organization designed requires a nonjudgmental approach [2, 13, 14, 22, 23]. While organizations articulate that they are seeking flawed systems and avoiding blame, they frequently miss the mark, sending subtle signals of liability and implied censure under the guise of accountability. An unintended consequence of accountability is to drive blame underground, making it more difficult to recognize and avoid. A physician who served on the serious adverse event reporting committee at his hospital commented in 2015 that “we’ve really made progress with our RCAs. We now ask why five times until we find who did it.” When serious harm has transpired, self-blame and fear are inevitable [25]. The investigator’s approach to clinicians will determine whether these feelings are intensified or abated. Using non-blaming language and clarifying the goal are intended to reduce the anxiety of the interview process [24]. Designating it an event debrief, rather than an incident investigation, may be less threatening [25]. Articulating that the investigation is seeking flawed systems shifts the focus from the individual to the organization. One researcher has suggested that renaming the individuals who investigate adverse events as organizational learning specialists may reduce fear and improve information sharing [26].

Uncovering system flaws starts with understanding the perceptions of the participants and why they responded as they did. Reliance on the clinicians’ acknowledgment of responsibility or explanation of the event is an error-prone approach, as the involved practitioners frequently don’t understand, or misremember, what happened [13, 14, 27–33]. Research has shown that 40% of all decisions are habits that occur without conscious input [30]. Workers constantly make decisions, frequently unaware that they are responding to the systems in which they themselves are embedded [21, 23, 25, 30–36]. The context of the surrounding events matters, but the involved individual may not recognize its relevance [2, 11, 14, 23, 26, 34, 35]. When decisions are lost to the subconscious, clinicians can’t tell you why they performed an action [29–34, 36], rendering the “five whys” questions mostly ineffective. A better approach is to reconstruct the real world with its competing demands and barriers that conspired to derail success [13, 14, 23]. Seeking to determine what went wrong by challenging clinicians as to why they didn’t follow the correct course of action transforms the investigation into a blaming event, and clinicians recoil in defense [13, 14, 23]. Information sharing quickly shuts down, which may shape the future behavior of clinicians, especially trainees [31]. Instead, patient safety practitioners should consider guiding frontline clinicians through detailed storytelling while avoiding drawing conclusions. These investigators tirelessly pursue, in exhaustive detail, the circumstances surrounding the incident in order to understand why the clinicians acted as they did [13, 14, 23].

The real challenge is to reconstruct the reality of the world at the time of the event without introducing the new post-event reality [32]. This form of incident investigation seeks the perspective of the clinicians by looking forward through their eyes, reconstructing the assumptions and thought processes before disaster struck, instead of looking backward from the error [13, 14, 23].

The flawed systems reside in the mental models that made so much sense before life fell apart. Seeking one absolute version of the event forces a decision about who is lying and who is telling the truth, when in reality this determination is rarely possible and creates more fear and silence. Mental models are imperfect and are designed to be more functional than technically accurate [15, 18, 19, 21]. In addition, they may differ between individuals, creating inconsistent viewpoints of what transpired. Discrepant stories can be a rich source of organizational learning, as they frequently represent the goal conflicts experienced during the unfolding event. Varied accounts, as in a Rashomon-like investigation, should be viewed as clues that can advance understanding and learning [13, 14, 23].


The Story Continues


Evelyn never regained cognitive function. She was weaned off the ventilator and was able to breathe on her own. Tube feedings sustained her life. After 3 months in an acute care setting, she was sent to a traumatic brain injury unit to enhance cognitive recovery. After 9 months with no appreciable change, she was sent to a nursing home. The hospital negotiated a multimillion-dollar settlement. Evelyn’s heartbroken family remained devoted to her and, at the time of the settlement, continued to harbor tremendous anger. The event triggered the purchase of two new robots, which arrived within 3 months. High-risk patients were scheduled only in the main OR while the existing robots remained in the MISS, so there was a hiatus in high-risk robotic cases while the new equipment was awaited. A new senior leadership team, knowledgeable about patient safety concepts, had arrived just a few months prior to Evelyn’s surgery. They began changing the organizational culture. The resource allocation committee was disbanded and a new patient safety finance committee was convened. It consisted of financial, clinical, and administrative senior leaders as well as board members from the quality and finance committees. Clinicians were invited to make presentations, and financial decisions became patient centered and collaborative. The monitoring of clinicians for wasted OR time was suspended pending reassessment. It was reinstated after 6 months with a focus on organizational systems (i.e., barriers clinicians encountered that interfered with meeting productivity targets). Monitoring to identify outlier performers resumed, but financial charges to individuals and departments did not. Kelly Stone continued his distinguished career in anesthesiology.

Clinical practice lagged behind the other organizational changes. Evelyn’s weight was the harbinger of an emerging era in healthcare that went unappreciated. The organization dismissed her extreme obesity as a “one off,” and processes to manage it weren’t developed. It would be another 5 years before anesthesiology guidelines for obstructive sleep apnea would be published. It would be closer to a decade before the need to manage patients who can’t be intubated and can’t be ventilated would move to the forefront of care. Evelyn’s case is yet another example of clinical practice changing faster than the science to support it. And yet, clinicians on the front lines are expected to perform to the highest standards that will ensure a positive outcome. Only years later was the significance of Evelyn’s case recognized and practice guidelines developed.


Accountability


Does this case study illustrate that, if the systems are at fault, individual accountability doesn’t matter? That depends. Accountability is about how rule breaking is perceived and managed, so answering this question requires an understanding of the beliefs and values surrounding rule breaking. In the wake of an adverse event, it is common to identify a missed step in the process or a broken rule as the cause. Invoking sanctions for omissions or rule breaking is seen as holding individuals accountable. Rule enforcement effectively communicates high standards when an individual purposely disregards a good rule [9, 29, 37]. When the rule breaking is unintentional, the same process is a blaming behavior [29, 37]. If only Kelly had been more careful in following the basic rules of airway management, Evelyn might not have sustained brain damage. Holding him accountable for breaking a rule he never intended to break is punishing human error.

A strongly held belief supporting sanctions is the myth of personal control [27–29]. This view sees the individual’s actions as separate from, and independent of, the surrounding environment. It is consistent with the traditional view of the practitioner as solely responsible for the care and outcomes of the patient [2, 38–42]. Responsibility for decision-making is seen as a personal choice [2, 23, 25–28], and there is a lack of appreciation that practitioners are responding to the context in which they work [2, 11, 13, 14, 16, 19, 22, 27, 28, 42]. The myth of personal control is a form of denial that deflects responsibility away from the organization, thereby limiting learning [2, 11, 13, 14, 27, 28]. If the RCA had ended with the monitoring of Kelly’s performance, many key systems contributing to this adverse event would have been missed, including the inadequate number of robots, the role of the resource allocation committee in decision-making about clinical care, and the emphasis on financial priorities. These flawed systems might never have been identified and corrected. When the story begins and ends with the person, there is nothing to be learned or improved.

But doesn’t this support the idea that it is always the system and never the person? In a just culture, the answer is no. A just culture is an open and fair approach to human error that supports learning after an adverse event [27–29, 37]. Sanctions are rarely invoked in healthcare, as workers almost never break rules with malevolent intent. Intentional rule breaking to accommodate variation in care delivery, however, is commonplace [43].

For example, dual identifiers using the patient identification bracelet are mandated at the time of medication administration. Anesthesiologists during operative procedures, and resuscitation teams during a cardiac arrest, omit patient identification, as the risk of misidentification is eliminated when caring for one patient. This intentional rule breaking is intended to save time by eliminating a non-value-added activity. Clinicians who save time by omitting intravenous line port disinfection, by contrast, are exposing patients to a possible bloodstream infection. In this situation, the intentional rule breaking isn’t intended to improve patient care, and sanctions will communicate that the organization values this activity. The worker who forgets to sanitize his hands, and does so in response to a colleague’s prompt, shouldn’t be punished. Clinicians who refuse to perform hand hygiene in response to a prompt should be sanctioned. Intentionality matters and is integral to determining when punishment is appropriate. In a just culture, human error (i.e., unintentional rule breaking) isn’t punished, but egregious rule breaking is.

Two separate surgeons left the operating room while the sponge count was wrong and the film was still pending. In both cases the resident misread the X-ray, the retained sponge went undetected, and the patient required a second procedure to remove it. The rule is that the attending must remain in the OR until the count has been reconciled. Since the surgeons left the OR in violation of the rule, should they be punished? The answer requires understanding the context of their decisions. In one case, the attending left the OR to assist in rescuing a patient with a vascular injury during robotic surgery; his prompt response saved that patient’s life. In the second case, the surgeon left for the airport to meet his family for vacation. When the procedure ran later than anticipated, he failed to arrange coverage with a colleague. In a just culture the first surgeon shouldn’t be sanctioned, but the second surgeon should be. The first surgeon’s rule breaking was intended to improve care, while the second surgeon’s was not. In both instances, changing the system so that an attending radiologist reviews the film when the attending surgeon is unavailable would ensure timely detection of a retained sponge. Even when rule breaking occurs, systems should be assessed for improvement opportunities.


Root Cause Analysis


Is the RCA process capable of transforming the tragedy of Evelyn’s harm into system redesign that would save the next patient? Understanding what the research has to say about the strengths and weaknesses of the RCA process informs the answer. The RCA process begins with notification about the event and the interviews of participants [12, 44, 45]. It has been noted that “You only have 24 hours to uncover the naked truth. After that, it will be all dressed up and ready for the party that is about to begin” [46, p. 3]. Stories evolve with repeated telling [23].

As the horror of the event unravels within the caregivers’ minds, their perceptions are altered and reshaped [47]. Interviewing staff as close to the event as possible is crucial to discovering the mental models in play at that time [23, 45]. In addition, TCIMC often did group interviews, such as the one where Kelly accepted responsibility for the adverse outcome. The goal was to understand the shared mental models during the event. After the group interview, the involved clinicians were invited to participate in the RCA to identify the systems issues and develop corrective action plans. Attendance was optional, and a clinician’s decision to participate or not was respected. The group interview was very helpful in clarifying issues and filling gaps in the individual interviews. There were no records of attendance at the RCAs, so it isn’t possible to know how often the clinicians accepted the invitation to participate. Those who did participate said that they attended in the hope that something good could come out of the event so that it would never happen again. Anecdotally, these clinicians reported that the action plans were very important to them.
