Lessons Learned from Anesthesia Registries About Surgical Safety and Reliability


1999: Publication of To Err is Human by the Institute of Medicine

2005: Medicare Physician Group Practice incentive program launched as a 3-year demonstration project

2006: Physician Quality Reporting System (PQRS) launches, providing incentives to those reporting on quality events

2008: Medicare eliminates hospital payments for care resulting from “never events”

2011: Affordable Care Act modifies PQRS and calls for a transition from incentives to penalties

2013: Value Modifier system phase-in begins; applied to groups of providers

2014: Medicare endorses the first Qualified Clinical Data Registries

2015: All PQRS incentives replaced by penalties; PQRS antibiotic measure retired; anesthesia practice participation in QCDRs begins; first announcement of the Merit-based Incentive Payment System, to take effect in 2019

2016: Most claims-based reporting mechanisms for anesthesia quality eliminated in favor of registry-based reporting




Table 43.2
PQRS and non-PQRS measures supported by the National Anesthesia Clinical Outcomes Registry (NACOR)


PQRS measures:

• Beta-blockers for cardiac surgery patients

• Use of a bundle of sterile techniques for central venous catheterization

• Assessment of pain in osteoarthritis patients

• Medication reconciliation

• Pain assessment and follow-up

• Perioperative temperature management

• Tobacco cessation counseling

• Pain management in palliative care

• Patient-centered surgical risk communication


Non-PQRS measures:

• Transfer of care checklist: OR to ICU

• Prevention of postoperative nausea and vomiting (adult)

• Prevention of postoperative nausea and vomiting (pediatric)

• Transfer of care checklist: OR to PACU

• Composite anesthesia safety

• Perioperative cardiac arrest rate

• Perioperative mortality

• PACU reintubation rate

• Postoperative pain management

• Central line placement safety

• Measurement of patient experience

• Timely administration of antibiotics

• New perioperative temperature management

• Aspirin for patients with cardiac stents

• Use of a surgical safety checklist

• Smoking abstinence before surgery

• Perioperative corneal injury

• Timely extubation after cardiac surgery

• Stroke after cardiac surgery

• Renal failure after cardiac surgery

• Stroke or death after carotid stenting

• Stroke or death after carotid surgery

• Mortality after aortic aneurysm stenting

• Venous thrombosis prophylaxis after total knee replacement

• Antibiotics prior to tourniquet during total knee replacement

• Unplanned hospital admission after a surgical procedure

• Rate of surgical site infection

OR, operating room; ICU, intensive care unit; PACU, postanesthesia care unit




The History of Anesthesia Registries


Anesthesia is a data-rich medical discipline, with a history of systematic capture of vital signs, medications and fluids that goes back to the early days of surgery. Harvey Cushing and E.A. Codman famously competed as medical students in 1895 to see who could produce the smoothest anesthetic; this rivalry depended on recording and comparing the details of care [7]. In the 1930s the pioneering anesthesiologist Emery Rovenstine recorded each of his cases on a punch card for tabulation by the precursor of a modern computer [8]. Beecher and Todd in 1954 published a landmark paper on surgical outcomes calling out the risks of anesthesia, based on the aggregation of case records from a consortium of university hospitals [9]. The earliest automated record-keeping systems were developed in the 1970s, but the real acceleration of these efforts began around 1990 with the widespread deployment of microprocessors. This coincided with a series of breakthroughs in monitoring, leading to the present-day capture in electronic anesthesia records of simultaneous output from more than a dozen different measures of patient status, including heart rate and rhythm, blood pressure, oxygen saturation, temperature, inspired and expired gas concentrations and even cerebral function [10]. A single anesthetic can thus generate thousands of data points per hour of intraoperative care.
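As a rough illustration of that last claim, the arithmetic below assumes roughly a dozen monitored channels recorded once every 15 seconds per channel; the sampling interval is an assumption for illustration only, since actual AIMS recording intervals vary.

```python
# Back-of-the-envelope illustration of the data volume described above.
# The 15-second sampling interval is an assumed value; the channel count
# follows the "more than a dozen measures" mentioned in the text.

channels = 13               # heart rate/rhythm, blood pressure, SpO2, temperature, gases, etc.
seconds_per_sample = 15     # assumed AIMS recording interval
samples_per_hour = 3600 // seconds_per_sample

data_points_per_hour = channels * samples_per_hour
print(f"{data_points_per_hour} data points per hour")   # 3120 -> "thousands per hour"
```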

Before the tools of the Information Age made the collection and manipulation of big data feasible, there were a number of anesthesia data collection projects based on understanding specific populations of patients. The most useful of these was without doubt the ASA Closed Claims Project [11]. This repository was based on the manual abstraction of data by expert anesthesiologists from the medical and legal records of patients who filed malpractice claims following adverse outcomes. The Closed Claims researchers worked behind the scenes with malpractice insurance providers to confidentially review a sample of records from cases that had been resolved in the legal system. The reviewers captured dozens of objective data elements, such as the surgical case, the type of anesthesia, the patient's age and the outcome of the legal proceedings, and combined this information with a narrative describing the course of the case and the complication. The Closed Claims review began in the mid-1980s and has continued to the present, with more than 10,000 total records in the repository. The project has generated two to five papers a year in the anesthesia literature since 1990, and has provided an excellent and ongoing description of the most serious anesthesia complications, beginning with an overview of morbidity and mortality related to anesthesia (dominated in the 1980s and 1990s by failed airway management) [12]. Recent topics have included unintended awareness under anesthesia [13], injuries in the course of chronic pain management [14] and malpractice related to acute hemorrhage [15]. While not quantitative (Closed Claims reports cannot provide the true incidence of complications because the denominator is not usually known), these papers identify key risk areas in current practice and offer guidance for how that practice should evolve. The Closed Claims reports have been highly influential in changing the practice of care in these areas.

The Closed Claims Project is limited by the expense involved in expert review of charts, by the inability to measure the risk of the complications seen (because the denominator information, the number of patients at risk, is unknown), and by the time lag between the occurrence of the adverse event and the complete resolution of the malpractice case. This last limitation means that Closed Claims information lags current clinical practice by 3–5 years. The Anesthesia Quality Institute (AQI) launched the Anesthesia Incident Reporting System (AIRS) in 2011 to address these limitations. AIRS enables any anesthesia provider, anywhere in the world, to submit confidential case reports regarding unsafe conditions, near misses or anesthetic complications [16]. Each case report captures objective information similar to that in the Closed Claims reports, as well as a narrative from the provider. While AIRS reports are more variable in quality than those generated by the small pool of Closed Claims experts, they benefit from much greater proximity of the reporter to the actual event. The AQI AIRS Steering Committee actively reviews all collected reports. Emerging trends in patient safety are examined (e.g., complications related to robotic surgery), and exemplary cases are “fictionalized” and then presented as teaching exercises in the ASA Monitor for the education of the specialty. More information regarding AIRS, including the library of published case reports, can be found at https://www.aqihq.org/airsIntro.aspx.


Wake Up Safe


A similar, but more focused, effort was launched in about 2000 by the Society for Pediatric Anesthesia (SPA). Wake Up Safe (WUS) is a registry of case reports of adverse events occurring during pediatric anesthesia [17]. Participating institutions commit to recording each event from a mutually agreed list of serious complications, using a standardized data capture form that draws heavily on objective information from the medical record. Forms are then sent to a central clearinghouse for entry into the registry, analysis by a SPA workgroup, and translation into public knowledge through informal and formal academic channels. Each institution also provides the registry with background information on the numbers and types of pediatric anesthetics performed, enabling estimation of risk rates for common complications. For the represented demographic segment (children having major surgery in specialty hospitals), WUS is an important source of information on the safety of pediatric anesthesia [18].


Pediatric Regional Anesthesia Network (PRAN)


The Pediatric Regional Anesthesia Network (PRAN) captures data on all regional anesthesia cases completed in 22 participating facilities [19]. A standard case report form is filled out for every case, usually by the anesthesiologist. The registry is maintained by the Colorado Children’s Hospital, in collaboration with the University of Washington. The registry now includes more than 110,000 cases and has been used for a number of descriptive papers and comparative effectiveness studies in the subspecialty of pediatric anesthesia [20].


The MPOG Registry


The Multicenter Perioperative Outcomes Group (MPOG) is a consortium of anesthesia departments working to aggregate clinical anesthesia data for research and quality improvement [21]. Each participating institution uses an Anesthesia Information Management System (AIMS) to capture electronic anesthesia records digitally. Idiosyncratic local data are translated into a common registry format that permits uniform aggregation of records from multiple information technology (IT) platforms, as sketched in the example below. While setting up and maintaining the IT mapping can be a challenge, the end result is the ability to transfer information on every case to the registry automatically, without additional human abstraction, while maintaining common definitions of important variables. MPOG began as a collaboration of academics but has recently received funding to promote anesthesia quality improvement in the state of Michigan, which it has used to begin data collection from community hospitals. To facilitate regulatory reporting for participants, MPOG has created a QCDR based on measures of intraoperative anesthesia process that can be calculated automatically from the registry data. Table 43.3 shows the publication dates and topics of scholarly papers based on MPOG data.
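The sketch below illustrates, in Python, the kind of local-to-common translation described above: a site-maintained lookup table maps idiosyncratic local drug labels to shared concepts so that records exported by different AIMS platforms land in one uniform structure. The field names, labels and record type are hypothetical illustrations and are not MPOG's actual schema.

```python
# Minimal sketch of translating idiosyncratic local AIMS data into a common
# registry format. The mapping tables, field names, and record type below are
# hypothetical illustrations, not MPOG's actual schema.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical site-specific label -> common concept mappings, maintained per institution.
SITE_A_MED_MAP = {
    "PROPOFOL 10 MG/ML INJ": "propofol",
    "FENT 50 MCG/ML": "fentanyl",
}
SITE_B_MED_MAP = {
    "Propofol (Diprivan)": "propofol",
    "Fentanyl citrate": "fentanyl",
}

@dataclass
class RegistryMedRecord:
    """One medication administration in the common registry format."""
    case_id: str
    concept: str                 # common medication concept
    dose_mg: Optional[float]
    admin_time: datetime

def translate(local_row: dict, med_map: dict, case_id: str) -> Optional[RegistryMedRecord]:
    """Translate one local AIMS medication row; return None if the label is unmapped."""
    concept = med_map.get(local_row["drug_label"])
    if concept is None:
        return None  # unmapped labels would be flagged for review at the site
    return RegistryMedRecord(
        case_id=case_id,
        concept=concept,
        dose_mg=local_row.get("dose_mg"),
        admin_time=datetime.fromisoformat(local_row["given_at"]),
    )

# The same clinical event, exported differently by two AIMS platforms,
# ends up in one uniform record type.
row_a = {"drug_label": "PROPOFOL 10 MG/ML INJ", "dose_mg": 150.0, "given_at": "2015-04-01T08:12:00"}
row_b = {"drug_label": "Propofol (Diprivan)", "dose_mg": 140.0, "given_at": "2015-04-01T09:05:00"}
print(translate(row_a, SITE_A_MED_MAP, "A-0001"))
print(translate(row_b, SITE_B_MED_MAP, "B-0417"))
```

In practice the mapping tables are the expensive part to build and keep current, which is the "IT mapping" challenge noted in the paragraph above; once they exist, case transfer can run without further human abstraction.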


Table 43.3
Papers published using data from the Multicenter Perioperative Outcomes Group (MPOG)

• Bender SP, Paganelli WC, Gerety LP, Tharp WG, Shanks AM, Housey M, Blank RS, Colquhoun DA, Fernandez-Bustamante A, Jameson LC, Kheterpal S. Intraoperative lung-protective ventilation trends and practice patterns: a report from the Multicenter Perioperative Outcomes Group. Anesth Analg. 2015

• Kheterpal S, Healy D, Aziz M, Shanks A, Freundlich RE, Linton F, Martin LD, Linton J, Epps JL, Fernandez-Bustamante A, Jameson LC, Tremper T, Tremper KK. Incidence, predictors, and outcomes of difficult mask ventilation combined with difficult laryngoscopy: a report from the Multicenter Perioperative Outcomes Group. Anesthesiology. 2013

• Bateman BT, Mhyre JM, Ehrenfeld J, Kheterpal S, Abbey KR, Argalious M, Berman MF, Jacques PS, Levy W, Loeb RG, Paganelli W, Smith KW, Wethington KL, Wax D, Pace NL, Tremper KK, Sandberg WS. The risk and outcomes of epidural hematomas after perioperative and obstetric epidural catheterization: a report from the Multicenter Perioperative Outcomes Group research consortium. Anesth Analg. 2012

• Freundlich RE, Kheterpal S. Perioperative effectiveness research using large databases. Best Pract Res Clin Anaesthesiol. 2011

• Kheterpal S. Clinical research using an information system: the multicenter perioperative outcomes group. Anesthesiol Clin. 2011

• Aziz MF, Healy D, Kheterpal S, Fu RF, Dillman D, Brambrink AM. Routine clinical practice effectiveness of the GlideScope in difficult airway management: an analysis of 2,004 GlideScope intubations, complications, and failures from two institutions. Anesthesiology. 2011


The National Anesthesia Clinical Outcomes Registry (NACOR)


The Anesthesia Quality Institute (AQI) was created by action of the ASA House of Delegates in 2008, to “become the primary source for quality improvement in the clinical practice of anesthesiology.” The specific mission of the AQI was to create and maintain a registry of anesthesia cases and outcomes, using modern information technology [22]. NACOR was announced in 2009, with the early participation of six pioneering anesthesia practices, and case data collection began on January 1, 2010. Since that time, growth and penetration of NACOR has been rapid (Fig. 43.1). NACOR was built on a model of automated harvest of existing electronic records. The easiest of these to obtain, and the starting point for any participating practice, are the group’s “administrative data,” or billing records. Far from being too simple to be useful, anesthesia billing records include about 20 consistently defined data points for every anesthetic. These data provide an important source of truth about the demographics of each practice and of anesthesia nationally. This layer of information in NACOR provides a backdrop for subsequent assessment of outcomes (gathered by about 25% of participating practices) by providing the denominator needed for calculation of risk and occurrence rates; a minimal sketch of this calculation appears below. Definitions of administrative data elements are generally quite uniform, even though the data are gathered through dozens of different billing companies, each with its own proprietary software system. Fortunately, the needs of the end users of these data (Medicare and private insurance companies) force consistency in defining otherwise complex elements such as surgical case type, facility type, and mode of anesthesia.
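The sketch below shows the denominator idea in its simplest form: self-reported outcome counts are divided by the case count taken from administrative (billing) data to yield an occurrence rate. The figures and the measure name are hypothetical illustrations, not NACOR values.

```python
# Minimal sketch of how a registry can combine self-reported outcome counts
# with an administrative-data (billing) denominator to produce an occurrence
# rate. The tallies and measure name below are hypothetical illustrations.

def occurrence_rate_per_10k(event_count: int, case_count: int) -> float:
    """Events per 10,000 anesthetics; the denominator comes from billing records."""
    if case_count == 0:
        raise ValueError("No cases in the denominator")
    return 10_000 * event_count / case_count

# Hypothetical practice-level tallies for one quarter.
cases_from_billing = 18_250          # every billed anesthetic, from administrative data
reported_cardiac_arrests = 3         # self-reported outcome events

rate = occurrence_rate_per_10k(reported_cardiac_arrests, cases_from_billing)
print(f"Intraoperative cardiac arrest: {rate:.2f} per 10,000 anesthetics")
```

Without the administrative layer, the event count alone would be uninterpretable; the billing-derived denominator is what turns raw outcome reports into comparable rates.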



Fig. 43.1
Growth of the National Anesthesia Clinical Outcomes Registry (NACOR) from 2010 to 2014. Top panel = growth in cases in the registry; Bottom panel = growth in number of participating practices

Once an automated reporting routine has been created to harvest a group’s administrative data, the quest for outcome information begins. More than half of all practices in the USA have mechanisms in place to digitally record the short-term outcomes of each case, and case-by-case reports can be automatically transmitted to NACOR on a regular basis [22]. Many of these systems are targeted directly at the data needed for PQRS reporting (e.g., the time of antibiotic administration), but many admirably exceed this baseline by capturing the occurrence of anesthetic complications such as postoperative pain, nausea and vomiting, and corneal abrasion, or serious safety issues such as intraoperative cardiac arrest, pneumothorax after central line placement, major medication error, and anaphylaxis. Anesthesia quality capture systems are generally limited to the period of direct contact with the patient, from preoperative assessment through PACU discharge. This feature necessarily limits the outcomes that can be transmitted to NACOR to those readily observed in this time frame: data on intraoperative cardiac arrest are likely to be complete and accurate, whereas capture of myocardial infarction, typically diagnosed 3–5 days postoperatively, is not realistic. A second limitation is that events are self-reported and thus unverified. While the clinician involved is obviously best situated to record a complication, doing so requires time and energy. Further, as pay-for-performance systems advance, there may be significant financial incentives to avoid reporting serious adverse events for fear of loss of income and professional prestige [23]. In practice, the accuracy of self-reported outcomes varies with the culture of safety in the group, and these data must be taken with a grain of salt by users of registry data [24].
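As an illustration of the kind of case-by-case report described above, the sketch below assembles one case-level payload that combines PQRS-style process data with short-term outcome flags. The field names and structure are hypothetical and do not represent NACOR's actual submission format.

```python
# Minimal sketch of a case-by-case outcome report of the kind a practice's
# quality-capture system might transmit to a registry. All field names and
# identifiers are hypothetical illustrations, not NACOR's submission schema.

import json
from datetime import date

case_report = {
    "practice_id": "PRACTICE-001",           # hypothetical identifiers
    "case_id": "2015-000187",
    "date_of_service": date(2015, 3, 14).isoformat(),
    "anesthesia_type": "general",
    "asa_physical_status": 2,
    "outcomes": {                             # short-term events observable before PACU discharge
        "postoperative_nausea_vomiting": True,
        "corneal_abrasion": False,
        "intraoperative_cardiac_arrest": False,
        "medication_error": False,
    },
    "pqrs": {
        "antibiotic_within_60_min": True,     # process measure captured at the point of care
    },
}

# Reports like this are batched and sent on a regular schedule; here we simply
# serialize one to JSON to show the shape of the payload.
print(json.dumps(case_report, indent=2))
```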

A third level of participation in NACOR is achieved by groups able to transmit clinical information from their AIMS (Fig. 43.2 shows the relative quantities of data available at each level). Electronic anesthesia records are used in 30–50% of cases nationwide, supported by a dozen different software platforms. These range from modules of enterprise-wide electronic health records (EHRs), such as Epic and Cerner, to stand-alone products designed by anesthesiologists themselves. Larger facilities tend to follow the first model, but with the steady rise in outpatient anesthesia there are now cloud-based stand-alone AIMS customized specifically for use in offices, surgery centers and other remote locations [25]. As anesthesia practice groups become larger, many find themselves working with different software in different locations, making aggregation of case information for practice-wide quality improvement a significant challenge.



Fig. 43.2
Quantities of data of different types in NACOR, as of April 1, 2015

Although only 6 years old, NACOR has already inspired a number of investigators studying both narrow and broad topics in American anesthesia. Table 43.4 shows a sample of publications based on NACOR data.


Table 43.4
Papers published using data from the National Anesthesia Clinical Outcomes Registry (NACOR)

• Whitlock EL, Feiner JR, Chen LL. Perioperative mortality, 2010 to 2014: a retrospective cohort study using the National Anesthesia Clinical Outcomes Registry. Anesthesiology. 2015
