IDENTIFYING THE PROBLEM
For this chapter, a policy problem is taken to be one that receives government’s attention and leads to responses like advocacy, reform, funding, or regulation.2 In the 1990s, medical error and patient safety became a policy problem for many reasons. It was generally recognised that medical error was a threat to health, especially with the repeated use of the estimate of 18,000 deaths a year caused by adverse events (see McL Wilson et al. 1995). There were efforts in healthcare to improve safety and quality, such as hospital units seeking to improve quality and safety across wards and clinics, and lead clinicians seeking to improve practices in some specialties, which has long been the case for anaesthetics (Runciman & Moller 2001 p 2). Consumerism in healthcare meant that patients were willing and able to demand high standards, and criticise authorities when these demands were not met. Government became more willing to make sure its funding delivered good results, which included dealing with problems such as medical error.
However, the government would have been reluctant to intervene if the problem were defined only in clinical terms. Before the 1990s, medical error had been dealt with largely in clinical, professional and legal ways. For example, professional boards could restrict, suspend or bar clinicians from practice if they were found guilty of, say, ‘infamous conduct in any professional respect’. A patient could sue a doctor if it were proved that the patient had been harmed by negligence. There may have been reason for government to respond, but the problem needed to be framed in terms relevant to government influence.
A further difficulty in dealing with medical error and patient safety in policy terms is the complexity of the problem. Medical error can be loosely described as the production of harmful, unintended results from healthcare. However, as noted by one overview, ‘(a)lmost every study uses different methods, terms, and definitions’ (Thomas & Brennan 2001 p 32). Terms include adverse events, medical mistakes, complications, medical negligence, and incidents, each with its own shade of meaning. By contrast, terms such as patient safety emphasise the results rather than the process of care. The complexity of the problem is demonstrated in Box 15.1 by way of example.
BOX 15.1 EXAMPLES OF ERROR AND THREATS TO SAFETY
- A patient is mistakenly given the wrist label of another patient admitted at the same time. Fortunately, the mistake is discovered by a nurse talking with the patient before any treatment is given.
- A patient is prescribed a drug that is well tolerated in most cases but rarely has severe side-effects. The patient does not know about the risks, and is angry when he becomes ill and finds out what caused his illness.
- A patient has a hip replacement and subsequently a blood transfusion. A review later finds that the transfusion could have been avoided with better care management.
- A patient is prescribed different drugs by different doctors, each unaware of the other’s prescriptions. There is no review of the patient’s medication and, taken together, they cause adverse reactions.
- A patient undergoes successful surgery, but later trips and falls in the hospital corridor, causing bruises and sprains and a longer stay in hospital.
These examples reflect what is discussed in the literature: error and safety may concern the process of care, the outcome for the patient, or both. Error may be the responsibility of the provider, whether by commission or omission; or it may be caused or exacerbated by the system of care, whether clinical or administrative.
Medical error and patient safety became a policy problem in Australia as part of a larger international push to recognise the problem. This section examines how the problem was defined in system, financial, and human terms, and how this framed it for policy attention by government. These definitions were often derived from international sources, especially from the United States and the United Kingdom, and then adapted to the Australian situation.
A system problem
Healthcare policy is now full of terms such as ‘care coordination’, ‘multidisciplinary care’, ‘continuity of care’, and ‘care pathways’. The aim is to draw together different professionals to treat illness and address a patient’s medical, psychological, and social needs throughout treatment. Healthcare is increasingly understood in system terms, involving the complex interplay of technology, care practices, and different professions, rather than in terms of bedside or clinical care by professionals working alone.
In a similar way, since the early 1990s, error experts have argued that medical error and patient safety should be defined in system rather than individual terms. Aviation and manufacturing are often used as examples to show that error rates can be reduced in the right conditions. Rather than looking only at the ‘hand that wields the scalpel’, planning decisions, system management, and error-provoking conditions such as time pressure, understaffing, workplace morale and conditions, inadequate equipment, fatigue, and inexperience may all contribute to error (Bogner 1994 pp vii–xv, 4, Leape 1994 pp 1852–5, Reason 1994 p 11, Vincent 2001 pp 9–30, Reason 2000 p 769).
This approach, at least in some of its forms, has met resistance and criticism. For example, Leape suggests that the training of medical practitioners to be individualistic and perfectionist has led them to resist the systems approach (1994 p 1851). There is also a popular demand to find individuals responsible and accountable, as reflected in statements like ‘the buck stops here’. This demand partly explains the New South Wales Government’s criticism of its health complaints commission’s report on unsafe care at Campbelltown and Camden hospitals: the government was concerned that the commission did not hold any one person accountable (see Iemma 2003).
However, the definition of medical error in system terms has proved resilient, and efforts have been made to convince a wider audience of the experts’ view. For example, in his essay ‘When Doctors Make Mistakes’, Gawande rejected a ‘tidy version of misdeeds and misdoers’ and argued instead that:
In complex systems, a single failure rarely leads to harm … But errors do not always become apparent, and backup systems themselves often fail as a result of latent errors. A pharmacist forgets to check one of a thousand prescriptions. A machine’s alarm bell malfunctions. The one attending trauma surgeon available gets stuck in the operating room. When things go wrong, it is usually because a series of failures conspires to produce disaster.
Since the Second World War, government has become largely responsible for running the healthcare system, especially once it began to fund population-wide access to medical services, pharmaceuticals, hospitals, and primary healthcare. It is government that undertakes system-wide planning and makes decisions about system resources and infrastructure such as major buildings, logistics, purchasing, administration, and information technology. As system manager, government is expected to deal with system problems such as medical error.
A financial problem
Medical error may bring disability, pain, and suffering. It also leads to expense, through more treatment and longer stays in hospital, unless the patient dies. Given the financial pressure on healthcare, medical error is defined as a financial problem both within and outside government. The cost of medical error represents resources that could have been saved or spent elsewhere on healthcare.
Adverse events in hospitals have been variously estimated to cost from $1.7 billion (Rigby et al. 1999 p 9) to $2 billion a year (Ehsani et al. 2006 pp 553–4). Both estimates include non-preventable events; if these account for around 50% of all events, then perhaps $1 billion a year could be saved or redirected. The Productivity Commission reports an estimate of 1.8% fewer re-admissions if surveyed hospitals matched the performance of the top 20% of reporting hospitals (2007 pp 9.33–9.34).
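The reasoning behind the ‘perhaps $1 billion a year’ figure is simple arithmetic; a minimal sketch follows, assuming only the cost estimates and the roughly 50% preventable share quoted above (the figures illustrate the calculation and are not new data):

```python
# Back-of-envelope sketch of the avoidable-cost reasoning quoted above.
# The cost range and the preventable share are the published estimates cited
# in the text (Rigby et al. 1999; Ehsani et al. 2006), not new data.

annual_cost_estimates = {
    "Rigby et al. 1999": 1.7e9,   # lower estimate of annual cost of adverse events
    "Ehsani et al. 2006": 2.0e9,  # higher estimate
}
preventable_share = 0.5  # assumed: roughly half of adverse events are preventable

for source, total_cost in annual_cost_estimates.items():
    avoidable = total_cost * preventable_share
    print(f"{source}: ${total_cost / 1e9:.1f} billion a year "
          f"-> about ${avoidable / 1e9:.2f} billion potentially avoidable")
```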
Placing a financial value on medical error, and on efforts to prevent it, is difficult. Systemic change and reform to prevent error and improve safety themselves cost effort and resources. For example, accreditation, which may arguably deliver safety and quality gains, also costs in systems development, preparing documentation, and training and upgrading (Australian Commission on Safety and Quality in Health Care [ACSQHC] 2006 p 17).
Estimates of health system cost have tended to focus on hospitals, where medical error has been most analysed, but there are also costs to other parts of the system, for example in the management of ongoing chronic conditions. There are costs from time and productivity lost from work, and from disability benefits (Vincent 1999); and there are costs of compensation in cases of proved negligence, which in turn add to pressure on insurance premiums and the costs of running health services. Despite these difficulties, the discussion of medical error as a funding problem prevails. For example, a publication by the former Australian Council for Safety and Quality in Health Care notes that ‘poor quality is extremely costly and drains the system of precious resources’ (Pearse 2004 p 10).
Government has been a majority funder of healthcare since at least the 1960s, if tax expenditure is included (Butler 1998). As financial pressure on the healthcare system grows, so too does pressure to deal with preventable expense, although the problem is not simply the amount of money lost. From the 1980s, government became increasingly interested in the results of funding, and not just in funding amounts and methods. For the Commonwealth, this was reflected in the public sector reforms of 1983 and the evaluation strategy begun in 1988 (Mackay 1993). This expanded the scope of government’s financial responsibilities for healthcare: it wanted not simply to be a majority and largely passive funder, but also to measure and improve the results its funding delivered. This includes reducing the extent and consequences of medical error, and improving patient safety.
A human problem
Inglehart argues that in prosperity, people ask more questions, are sceptical towards authority, evaluate politics with more demanding standards, and deal with politics less by mass participation and more through particular issues (Inglehart 1997). Established institutions like healthcare and government are subject to more questioning, scepticism, and criticism, and they are more frequently called to account.
In its reportage of medical error, the media has drawn upon an increasing willingness to demand higher standards and to criticise when these standards are not met, and in doing so it has emphasised the human angle. Personal experience is a powerful symbol, and coverage of innocent suffering and tragedy makes medical error seem a real and immediate threat. Family members tell of their grief at the death of a relative, spouse, or child; and patients tell of their anger, pain, and suffering. The definition of medical error and patient safety as a human problem, mediated through newspaper articles and television reports, exerts a powerful influence on those in authority to respond.
As well as running and funding healthcare, government is also responsible for taking reasonable measures to protect the population from harm. This is a long-held idea about responsible government (see Osborne 1997 pp 176–7). Applications of this idea are seen in 19th century measures to enforce quarantine and protect public health, and are now reflected in counter-bioterrorism measures and the prevention of infectious disease. Media accounts of medical error often draw upon this idea and appeal, whether implicitly or explicitly, to government to intervene to deal with the human problem of medical error.
POLICY RESPONSES
Definitions of medical error in system, financial and human terms helped capture the attention of government. They also helped frame a policy response. On the one hand, government is largely responsible for running and funding the healthcare system, but it must also negotiate with other interests, sometimes cooperatively and sometimes in conflict (see Sax 1984). On the other hand, responsible government takes reasonable measures to protect the population from harm. The result has been a mix of advocacy, reform, funding, and regulation in response to medical error, which is examined in this section.
Government advocates
Advocacy is one way that government tries to influence the healthcare system, and it has used advocacy to deal with medical error and improve safety. The Commonwealth started a review of professional indemnity for healthcare professionals in 1991. The review took submissions, funded research, published papers, and reported to the Commonwealth health minister in 1995. The objective of the review was to minimise the human and financial costs of adverse patient outcomes from healthcare (Review of Professional Indemnity Arrangements for Health Care Professionals [RPIAHCP] 1995). Its work anticipated policy concerns of the following decade, including access to the tort system, with its high costs and delays for plaintiffs.
One piece of research funded by the review, the Quality in Australian Health Care Study, estimated adverse events in Australian hospitals. The then Commonwealth health minister released early results from the study in the House of Representatives on 1 June 1995, concluding:
We can no longer be complacent about the issue of patient safety in healthcare. It is no longer satisfactory for us to simply assert that we have a high-quality health system. We need to measure outcomes and performance. We need to ensure that the system has a clear and strong patient focus. We need to actively address the problem areas. I congratulate those who carried out the study – the challenge of response lies before us.
The minister indicated that she had written to state health ministers and would raise the problem at health ministers’ meetings, and consult with professionals. A taskforce on quality in Australian healthcare was established in 1995, and reported in 1996. This was followed by a national expert advisory group on safety and quality in Australian healthcare, which made its final report in 1999.
Since 1998, health ministers have issued five statements on safety and quality matters, announcing the creation of new safety and quality agencies, new funding, and agreements to lead on improving the quality and safety of healthcare. In addition, many other releases touch on safety or quality issues, such as better coordination of Indigenous service delivery, more accurate screening of blood donations, improved quality of services for people with chronic conditions, and improving responsiveness of mental health services (see Australian Health Ministers Advisory Council [AHMAC] 2007).
Government reforms
If a problem is systemic, it implies that systemic responses are required, and government used reform to pursue systemic improvement. Bureaucracy was often government’s starting point. From the 1990s, the larger departments of health used internal units dedicated to safety and quality to develop policy, brief ministers, and start and manage programs.
New organisations with new powers were also created. Health complaints bodies were established to give patients an independent means to complain about their care (Thomas 2002). The Australian Council for Safety and Quality in Health Care started in 2000, mediating between medicine and government to help drive systemic and long-term reform. In 2006 the council was replaced by a new commission responsible for improving safety and quality across the whole healthcare system. The Commonwealth created a National Institute of Clinical Studies in 2000, the ‘national agency for closing the gaps between evidence and practice in healthcare’ (NICS 2007). The Centre for Research Excellence in Patient Safety was created to strengthen the evidence for system improvements and to reduce adverse events (CREPS 2007).
Together, these new institutions have created new materials such as information for patients about safety and quality, new processes such as pilots for open disclosure when talking with patients, new data such as reporting on sentinel events to indicate systemic problems, and new research and evidence to improve safety and quality. These agencies sought to make systemic change to the way that healthcare was practised, especially by professionals working with their peers to promote change.
Government funds
Government responded to medical error by funding new programs and projects, research, and safety and quality organisations. The Commonwealth, states and territories all fund safety and quality activities, such as the work program of the former council and the current Commission on Safety and Quality in Health Care. Government funds these measures to improve the results of its existing funding of healthcare and to make system reform.
Safety and quality funding is also transferred between governments, as part of the larger financial transfer from the Commonwealth to the states and territories. In particular, the Commonwealth helps pay the costs of public hospitals through 5-year agreements. Although some states had created health complaints bodies beforehand, the 1993–98 agreements required all states and territories to do so, and to ensure that these bodies operated independently of health departments and hospitals (Council on the Ageing [COTA] 1993). In the following 1998–2003 agreements, the Commonwealth made around $660 million out of $31 billion over 5 years available for quality activities (Maskell-Knight 1999).
It is said that ‘he who pays the piper calls the tune’, and with funding comes influence. However, balanced against this is the sheer complexity of the healthcare system, involving millions of clinical care events provided each year by a mix of public and private providers working across different settings. Accordingly, government to date has used funding to support good clinical practice, encourage good governance, and create the right environment for safety and quality.
Government regulates
The increasing complexity, diversity and power of therapeutic goods, and of their supply, also mean increasing risk. In response, government has strengthened its regulation of the safety and quality of these products, consistent with the growth of consumer protection measures in many Western countries, such as the United States (see Moss 2002).
Legislation, licensing, policy directives, approvals, inspections, and guidelines are all used to regulate safety and quality in healthcare. For example, a state might require all public hospitals to be accredited, to have specialist committees promoting safe clinical practice (such as in the use of blood and blood products), and to report any sentinel adverse events affecting patients.
Nationally, new healthcare regulatory agencies have been created. The Therapeutic Goods Administration (TGA) regulates the safety and quality of medicines and medical devices, such as pharmaceuticals, complementary medicines and blood products. Other regulatory agencies include Food Standards Australia New Zealand which sets and regulates food standards; the National Industrial Chemicals Notification and Assessment Scheme which regulates industrial chemicals; and the Office of the Gene Technology Regulator (see Department of Health and Aged Care [DHAC] 2001).
Regulation is a direct intervention, and gives government greater control with which to protect the population from harm. However, it also brings risk: insufficient knowledge, incapacity, or plain error may produce unintended or unduly costly regulatory effects. Failure to protect consumers from defective products may also lead to accusations that government has not lived up to its responsibilities (see Drache & Sullivan 1999 p 8).
It is now useful to turn to a specific case of how the definition of medical error in system, financial and human terms helps frame the policy response. Box 15.2 sets out the case of safety in Queensland’s public hospital system.
BOX 15.2 SAFETY IN QUEENSLAND’S PUBLIC HOSPITAL SYSTEM
From 2004 to 2006, Queensland’s public hospital system was in tumult, and the main trigger was public disquiet over patient safety, especially over the reported practice of a former director of surgery at the Bundaberg Base Hospital, Dr Jayant Patel. The problem of patient safety and healthcare came to dominate health politics in Queensland, with widespread media coverage and high-profile government inquiries (Kennedy 2007, Thomas 2007). This box considers the system, financial and human terms in which patient safety was defined, and how this framed the policy response by the Queensland Government.
At first, the government denied there was any problem. For example, in 2004, the then local member of parliament pointed instead to increases in the hospital’s budget, capital redevelopment, and new services, claiming that ‘… patients in Bundaberg can be confident that they will receive top-quality treatment at the Bundaberg Base Hospital’ (Cunningham 2004). However, a string of articles in The Courier Mail in March and April 2005 reported the failure by the Queensland Medical Board to adequately check the clinical skills of overseas-trained doctors in areas of need, a culture of intimidation and bullying in the Queensland Department of Health (Queensland Health), and a failure to take complaints and concerns seriously.
In late April 2005, the Queensland Premier announced two inquiries. The first (the Davies Inquiry)3 was to investigate matters arising from the appointment of Dr Patel, but its scope was broadened to include systemic matters such as whether improvements could be made to the Medical Board of Queensland and what could be done on a state-wide basis to ensure complaints and allegations are dealt with. The second (the Forster Inquiry) was to review Queensland’s health systems and how they could be improved.
The reports made systemic findings and recommendations. The Davies report noted four factors that contributed to the path of injury and death at Bundaberg: the hospital budget; the failure to check Dr Patel’s background; the failure to have Dr Patel credentialed and privileged; and the failure to have any adequate complaints system operating. The report also discussed problems in aspects of care or management at the Hervey Bay, Townsville, Charters Towers, Rockhampton, and Prince Charles hospitals, reinforcing the systemic nature of the problem (Davies 2005 pp 1–17). The Forster Review made 141 recommendations about Queensland Health’s administrative, workforce and performance management systems, with a comprehensive reform program over 3 years (Forster 2005).
Once the reports were released, the Queensland Government also responded in systemic terms, developing what it called a health action plan to reform the Queensland health system. This included pay increases for staff, recruiting more staff, more training, investment in information systems, restructuring of Queensland Health with more staff assigned to districts and areas, and more open performance information (Beattie 2006).
A financial problem and response
The Davies report noted how financial matters influenced safety problems. Firstly, a constrained budget meant the hospital was pressed to employ an overseas-trained surgeon who cost less than one fully recognised by Australian authorities. Secondly, hospitals were funded on a historical basis with extra funding for more surgical throughput. This created incentives for activity rather than quality: the more surgery performed, the more funding the hospital received (see Davies 2005 p 6). The Forster Review also made a strong case for extra funding to flow into the Queensland system, noting its under-funding compared with other states, and raised the possibility of patient co-payments for hospital services, an option the Queensland Government later rejected.
Instead, the Queensland Government responded in October 2005 with a mini-budget, promising $6.367 billion over 5 years ($4.431 billion new money) towards the Queensland health system. In the 2006–07 Budget, this amount was increased to $9.7 billion over 5 years and linked to its health action plan (Robertson 2006).
A human problem and response
In April 2005, an article in The Courier Mail referred to the problem as a ‘firestorm raging out of control’, noting that the ‘letters and editorial pages of newspapers from Tweed Heads to Townsville reflected the anger of constituents and editors’ and that ‘talkback radio had been lit up by fiery callers furious at the incompetence of a health system, which for 2 years protected a dangerous overseas-trained surgeon …’ (Thomas 2005).
Whether or not the article was accurate, the Premier took a highly visible role in leading the response to the problem. It was the Premier who announced the two inquiries and the health action plan, making many media statements on the issue and treating the plan as a major part of his bid for Labor to be returned to government in Queensland. The Premier also sought to respond to the human aspects of the case, noting in the same article above that he felt embarrassed and mortified about what had happened.
After public discussion of the human aspects of the problem, the Queensland Government made efforts to show that it was responding to the concerns of patients. This was reflected in letters and community updates to patients of Bundaberg – one update noting that ‘events of the past fortnight may have shaken your confidence in local health services’ and setting out ways Queensland Health was trying to improve care for patients (Daly 2005). Compensation has also been offered to patients and families harmed as a result of care at Bundaberg.
By dealing with the problem in systemic, financial, and human terms, the Queensland Government sought to show that it had responded comprehensively to safety and quality concerns in its health system. However, it has also raised expectations about what it can deliver. There have already been criticisms about whether the reforms are being implemented adequately, and this type of pressure can be expected to continue (see Viellaris 2007).