



© Springer International Publishing Switzerland 2017
Juan A. Sanchez, Paul Barach, Julie K. Johnson, and Jeffrey P. Jacobs (eds.), Surgical Patient Care, DOI 10.1007/978-3-319-44010-1_11


11. Organizational and Cultural Determinants of Surgical Safety



Kathleen M. Sutcliffe 


(1)
Carey Business School, Johns Hopkins University, 100 International Drive, Baltimore, MD 21202, USA

 



 




Keywords
Organizational determinants of safety · Cultural determinants of safety · High-reliability organizing · Organizational culture · Safety culture · Safety climate


“Judges possessing outcome knowledge may, for example, tend to reverse their temporal perspective and produce scenarios that proceed backward in time, from the outcome to the preceding situation. Such scenario retrodiction may effectively obscure the ways in which events might have taken place, much as solving a maze backward can obscure the ways in which one might have gotten lost entering from the beginning.”

—Fischhoff, 1975, p. 298



Introduction


This chapter explores some fundamental ideas about organizational and cultural determinants of surgical safety. We propose that the success of individuals and teams in providing safe and reliable care is fueled in large part by organizing processes and by the cultures in which caregivers are embedded. By privileging process and culture, we offer a systemic lens on the underpinnings of safety in complex healthcare systems and move beyond medicine’s prevailing focus on individual excellence and achievement as the sole means of assuring safe and reliable care.

The ideas discussed in this chapter derive from years of research exploring the problem of safety in complex sociotechnical systems in disciplines such as organization and management theory, cognitive psychology, sociology, and human factors engineering. Over the past two decades, possibly as a consequence of the IOM’s To Err Is Human [1], which advised health care organizations to attend to the wisdom of organizations in high-hazard industries, research from these disciplines has begun to penetrate the patient safety literature [2]. This cumulative body of research provides some insight into how organizing and culture might enable safe care. Although health care has enthusiastically sought to craft interventions based on this research, the enthusiasm for interventions has in some cases outstripped the evidence supporting them ([3]: 1). That is, even with the best of intentions, this enthusiasm has sometimes led to superficial application of particular ideas without a solid grasp of either the underlying concepts or the mechanisms through which they exert their influence [4, 5].

In this chapter we aim to remedy this state of affairs. We are mindful that innovations are best designed by people who have deep contextual knowledge and are close to the work. Thus, we do not aim to be prescriptive. Rather, our intention is to provide a general and wide-ranging overview of some basics related to processes of organizing and culture. By enriching understanding of these essentials, we hope that clinicians will be better prepared to contextualize these ideas and apply them more successfully to their own surgical care improvement efforts.

The chapter unfolds as follows. We start by examining some basic assumptions related to the challenges of achieving safety in complex, dynamic, open systems. We follow with a discussion of two orientations toward safety, essential organizational processes and practices, and evidence linking these to outcomes. We then turn to the concepts of safety culture and safety climate. We explore how they are defined, how they exert their influence, and how cultures and climates are enabled, enacted, and elaborated. We follow with some evidence linking safety climate to outcomes. We end with some implications for practice and concluding comments.


Open System Assumptions


It is important to keep an eye on some key assumptions about complex sociotechnical systems and their safety, as they are critical for understanding the bounds of organizational and cultural interventions. First, when people in health care refer to systems or systemic error, they often have in mind a rational, closed, mechanical system composed of explicit roles, rules, routines, and relationships intentionally created to achieve some well-defined objective. In closed systems, “goals are known, tasks are repetitive, output of the production process somehow disappears, and resources in uniform qualities are available” ([6]: 5). But health care systems defy that description. Viewing systems as closed or mechanical misses the fact that much medical care is delivered by transient, temporary teams, assembled in various contexts (e.g., the operating room or at the bedside), and often with new or unfamiliar players (e.g., rotating interns/residents, floating nurses) ([7]: 169). Transient systems have to be continually reconstituted. Viewing systems as closed also overlooks the fact of equifinality, meaning that the same results may be achieved from different initial conditions and through many different paths or trajectories. Although health care organizations are loosely coupled [8] in the sense that their various parts work fairly independently, patient outcomes often are determined by the combined product of these loosely coupled constituent parts.

A second important assumption is that system safety is an illusory concept. There are no safe systems or organizations, if only because past performance cannot determine the future safety of any entity [9, 10]. Safety is a moving target: a good day yesterday does not necessarily mean a good day today.

Third, safety is a dynamic nonevent [11]. It is dynamic in the sense that safety is preserved by timely human adjustments; that is, problems are fleetingly under control due to compensating adaptations. It is a nonevent because successful outcomes rarely call attention to themselves. In other words, because safe outcomes do not deviate from what is expected, safety is in some ways invisible. When there is nothing to capture people’s attention, they see nothing; they presume that nothing is happening and that nothing will continue to happen if they continue to act as they have acted before.

A fourth assumption is that adverse events and outcomes in health care sometimes occur because of mistakes in performance and execution, but mistakes in perception, conception, and understanding more often lead to unsafe conditions and ultimately to greater harm [12, 13]. This is nicely captured by sociologist Marianne Paget’s [14] observation that medical work unfolds in real time and is “an error-ridden activity … inaccurate and practiced with considerable unpredictability and risk.”

Finally, most accidents and failures in complex systems are not the result of the actions of any single individual (even though there is a tendency to blame single individuals). Nor are they the result of a single cause [15]. Small incidents often link together and expand [10]. This is why it is important to be able to catch and correct small mistakes and errors before they grow bigger. When problems are small, there are often more ways to solve them. When they get bigger, they tend to get entangled with other problems and there are fewer options left to resolve them.

Together these assumptions highlight the challenges of safety and reliability in complex systems (see Box 11.1). Achieving safe and reliable outcomes in error-ridden, unpredictable open systems such as those found in health care means accepting the realities of dependence, loose connections, keeping up with environmental demands, redoing processes and structures that keep unraveling, and expecting the unexpected [16]. But that doesn’t mean that people who inhabit those systems are left helpless. In the following section we explore organizational determinants, particularly organizing processes and their role in producing dynamic nonevents [17].


Box 11.1: Safety Challenges in Complex Open Systems





  • There are big differences between closed and open systems and these matter for safe care. Health care systems are open and loosely coupled; their various parts work independently, but outcomes are determined by the product of these parts.


  • In open systems there is equifinality; the same results may be achieved with different initial conditions and through many different paths. There is no one right way to organize in open systems.


  • System safety is an illusory concept. There are no safe systems and organizations because past performance cannot determine the safety of any entity.


  • Safety is a dynamic nonevent. Safety is dynamically preserved by timely human adjustments. Safety is a nonevent because successful outcomes do not call attention to themselves. Just because nothing is happening does not mean that nothing is being done to make that happen. We never have a complete understanding of all the factors that are keeping a unit/organization safe.


  • Medical work is a dynamic unfolding activity. Mishaps and adverse outcomes may be a result of problems with execution and performance, but misperceptions, misconceptions, and misunderstandings ultimately lead to greater harm.


  • Most accidents and failures do not result from a single cause or the actions of a single individual. Small incidents often link together and expand. It is important to catch problems in their early stages when there are more ways to solve them. As they get bigger the solution space gets smaller.


Safety in Health Care: The Role of Organizing Processes


Researchers have identified a number of properties of safe and reliable organizations. Although the specific attributes vary between studies, there are a number of commonalities. Many properties, such as good technology, good task and work design, highly trained personnel, well-designed reward systems, continual training, frequent process audits, and continuous improvement efforts, are ubiquitous. Research outside of health care has for several decades linked bundles of these properties to higher performance [18], and research in health care also suggests that these elements matter. For example, in a study of 95 hospital nursing units, Vogus and Iacobucci [19] found that the use of a bundle of organizational work practices that included rigorous selection of employees (particularly for interpersonal skills), extensive and regular training and development, and continuous work process improvement activities was directly and indirectly associated with fewer medication errors and patient falls. These basic organizational features, similar to those found in any high-performing organization, are necessary for safety and reliability but are not sufficient. Although these properties may provide the scaffolding for other critical organizational processes and outcomes [19], in some ways we might think of them as contingencies or boundary conditions: their presence (or absence) strengthens (or weakens) the effects of other determinants. Consequently, in this chapter we are more concerned with the distinctive properties found in what are known as high-reliability organizations (HROs), prototypical organizations such as aircraft carriers, air traffic control (and commercial aviation more generally), and nuclear power-generation plants (see [20–22]) that operate complex technologies in complex, dynamic, interdependent, and time-pressured social and political environments.

Although these high-risk organizations are diverse, studies have shown that they share a set of operating commonalities and characteristics that enable nearly error-free performance in settings in which errors should be plentiful (see Box 11.2). HROs possess highly trained personnel, continuous training, effective reward systems, frequent process audits, and continuous improvement efforts. More distinctively, however, the most highly reliable organizations are characterized by organizational processes and practices that foster an organization-wide sense of vulnerability; a widely distributed sense of responsibility and accountability for reliability; widespread concern about misperception, misconception, and misunderstanding that is generalized across a wide set of tasks, operations, and assumptions; pessimism about possible failures; redundancy; and a variety of checks and counterchecks as a precaution against potential mistakes. In part, these distinctive capabilities emerge from two complementary logics, to which we now turn.


Box 11.2: Attributes of Highly Reliable Organizations





  • HROs exhibit attributes found in most high-performing organizations, including:



    • Outstanding technology and task and work design


    • Exquisite selection mechanisms and highly trained personnel


    • Effective reward systems


    • Continuous training


    • Frequent process audits and continuous improvement efforts


  • HROs have distinctive properties including:



    • An organization-wide sense of vulnerability


    • A widely distributed sense of responsibility and accountability


    • Widespread concern and pessimism about misperception, misconception, and misunderstanding that is generalized across tasks, operations, and assumptions


    • Redundancy and a variety of checks and counterchecks


    • A climate and culture of trust and respect


    • Heedful coordination among people/units both upstream and downstream


    • Habits of thought and action aimed at:



      • Examining failure as a window on the health of the system


      • Avoiding simplified assumptions about the world


      • Being sensitive to current unfolding situations


      • Developing resilience to manage unexpected surprises


      • Locating expertise and creating mechanisms for decisions to migrate to those experts


Two Approaches to Safety Management


Broadly speaking, complex organizations pursue two basic logics to manage risks and achieve safe and reliable (i.e., continually error-free) performance. Wildavsky [23] contrasts these logics, and Schulman [12] analyzes them as they pertain to health care. The first logic is one of anticipation/prevention; the second is one of resilience/containment. We outline these two basic orientations in the following paragraphs.

Anticipation/prevention. Advocates of anticipation suggest that errors can be eradicated or precluded, that is, that intolerance of preventable harm (e.g., zero defects) is desirable and achievable [24] by using the tools of science and technology to better control the behavior of organizational members so that they perform safely and effectively. This requires organizational members and other stakeholders (e.g., the public, regulators) to define and identify the events and occurrences that must not happen, identify all possible causal precursor events or conditions that may lead to them, and then create a set of detailed operating procedures, contingency plans, rules, protocols, and guidelines for avoiding or preventing them. A commitment to anticipation and prevention removes uncertainty; reduces the amount of information that people have to process, which potentially decreases the chances of memory lapses, judgment errors, or other biases that can contribute to crucial failures; provides a pretext for learning; protects individuals against blame; discourages private informal modifications that are not widely disseminated; and provides a focus for any changes and updates in procedures [25].

The logic of anticipation/prevention is based on Perrow’s [26] notion of second-order behavioral controls. Perrow [26] classifies control mechanisms into first order, second order, and third order. First-order controls such as direct supervision, inspection, or surveillance, although they are expensive and reactive, are straightforward and obtrusive means for controlling behavior. Second-order controls (i.e., bureaucratic controls) such as standardization, specialization, and hierarchy are more efficient than direct controls and are less obtrusive. In theory, they work by reducing the range of stimuli people have to attend to so that they have fewer opportunities to make decisions that maximize personal interests rather than the organization’s interests. Third-order controls, also known as control through culture (to be discussed more fully later in this chapter), are fully unobtrusive and work by controlling the cognitive premises (e.g., norms, assumptions, values, and beliefs) that underlie action.

The idea behind second-order control is that consistent, error-free outcomes will be produced in the future if people repeat patterns of activity that have worked in the past. In routine, stable, certain situations, where tasks are analyzable and repetitive actions can be identified that predictably lead to desired outcomes, a logic of anticipation makes sense. Naturally, this description fits some tasks, work roles, and work settings (e.g., laboratories, pharmacies) better than others. But it may not fit all. Certainly, recent research demonstrates the value of behavioral routines (e.g., checklists) and standardized work (e.g., [27]). But in nonroutine situations it is sometimes impossible to write detailed operating procedures that anticipate all the situations and conditions that shape people’s work. Moreover, even if procedures could be written for every situation, there are costs of added complexity that come with too many rules. This complexity increases the likelihood that people will lose flexibility in the face of extensive rules and procedures. Thus, although compliance with detailed operating procedures is critical to achieving safe and reliable performance in many instances (e.g., checklists for pre- and post-procedural briefings, or for reducing infection rates), partly because it creates operating discipline, blind adherence to rules can sometimes reduce the ability to adapt or to react swiftly to surprises. Assuming that invariant operating procedures and routines are the only means through which safe outcomes occur conflates variation and stability and makes it more difficult to understand the mechanisms of safe performance under trying conditions. Safety is broader and more far reaching. For a system to remain safe and reliable, it must somehow handle unforeseen situations in ways that forestall unintended consequences. That is, it must organize for transient reliability [17]. This means that it must continuously manage fluctuations in job performance, human interaction, and human-technology interaction, which necessitates capabilities for resilience/containment.

Resilience/containment. A logic of resilience/containment focuses on the ability to absorb strain, bounce back, and cope with and recover from challenging or untoward events. It also reflects an ability to learn and grow from previous episodes of resilient action. Capabilities for resilience can be traced to dynamic organizing practices (which themselves should become habits [28] or routines [22]). These organizing practices enhance people’s alertness and attention to detail so that they can detect subtle ways in which contexts vary and call for contingent responding. In other words, resilience works by increasing the quality of attention among the members of a unit, organization, or system, as well as by increasing the flexibility and capability to respond in real time, reorganizing resources and actions to maintain functioning despite peripheral failures.

Particular organizing principles and a micro-system of “mindful” organizing practices provide the foundation for beliefs and actions in the safest and most highly reliable organizations. First, highly reliable organizations are preoccupied with failure. Through practices such as pre- (and post-)procedural briefings (see [29]), for example, they conduct proactive and preemptive analyses of possible vulnerabilities and pay close attention to identifying and understanding what needs to go right, what could go wrong, how it could go wrong, what has gone wrong, and why. Second, highly reliable organizations avoid simplifying their assumptions about the world. They do this through practices that actively seek divergent viewpoints, question received wisdom, uncover blind spots, and detect changing demands, for example through interdisciplinary rounding, purposely seeking additional “eyes” for particular actions or procedures, or using exacting communication protocols that highlight what to look out for during transitions [30]. As an aside, we are not saying that organizations should not seek to streamline or reengineer unwieldy processes; rather, we are highlighting the fact that when people coordinate their actions in order to communicate, they tend to simplify their observations and discussions, and thus miss a lot. To build a richer picture of the situations they face, highly reliable organizations try to complicate their understandings. Third, highly reliable organizations are sensitive to what is happening right now, to how situations are unfolding. Their goal is to develop and maintain an integrated big picture of the current situation through ongoing attention to real-time information so that they can make a number of small adjustments to forestall the compounding of small problems or failures. They do this, for example, by using huddles to preemptively assess current situations and identify vulnerabilities, such as inadequate information or staff and resource shortages, in order to make adjustments before harm is caused [31]. The three principles discussed above focus on anticipation and prevention. Although highly reliable organizations seek perfection, they know they will not achieve it, and so they also develop skills for resilience, recovery, and containment.
