Trustworthy Systems for Safe and Private Healthcare



Dixie B. Baker


For over a decade, the healthcare industry has been undergoing a dramatic transformation that is affecting every aspect of healthcare, from how, to whom, and by whom care is delivered, to how conditions are diagnosed and treated, how biomedical science advances, and how information is collected, protected, and leveraged in both individual and population health. Disruptive changes brought about through legal mandates aimed at lowering the cost and improving the quality of care, advances in information technology, and a more collaborative, engaged consumer population have resulted in an increased focus on team-based care, value-based reimbursement, and an increasing reliance on health information technology (HIT) across the continuum of care.

As noted by former National Coordinator David Blumenthal, MD, MPP, “Information is the lifeblood of modern medicine” and HIT is its circulatory system, without which neither individual healthcare professionals nor healthcare institutions can perform at their best or deliver the highest-quality care (Blumenthal, 2009). To carry Dr. Blumenthal’s analogy one step further, at the heart of modern medicine lies “trust.” For the past 20 years, Gallup poll respondents have ranked nurses the “most trusted profession[al],” based on their honesty and ethics (Stone, 2019). To maintain their trustworthiness, nurses must be able to trust that the critical information resources they need will be accurate and available when needed, while preserving individual privacy rights. They must trust that the information in a patient’s electronic health record (EHR) is accurate and complete and that it has not been accidentally or intentionally corrupted, modified, or destroyed. Consumers must trust that their caregivers will keep their most private health information confidential and will disclose and use it only to the extent necessary and in ways that are legal, ethical, and authorized, consistent with the individual’s personal expectations and preferences. Above all else, consumers must trust that their caregivers and the technology they use will “do no harm.”

The nursing field is firmly grounded in a tradition of ethics, patient advocacy, care quality, and human safety. The registered nurse has a responsibility to be well versed in clinical practice that respects personal privacy and that protects confidential information and life-critical information services. The American Nurses Association’s (ANA’s) Code of Ethics for Nurses with Interpretive Statements, the foundation of the nursing profession, includes a commitment to “promote, advocate for, and protect the rights, health, and safety of the patient” (ANA, 2015). The International Council of Nurses (ICN) Code of Ethics for Nurses affirms that the nurse “holds in confidence personal information and uses judgment in sharing this information” (ICN, 2012). Fulfilling these ethical obligations is the individual responsibility of each nurse, who must trust that the information technology they rely upon will help and not harm patients and will protect their private information.

Recording, storing, using, and exchanging information electronically do indeed introduce risks. As anyone who uses e-mail, texting, or social media knows, very little effort is required to instantaneously make personal information accessible to millions of people (trusted and otherwise) and artificially intelligent technology throughout the world. We also know that nefarious people and their software agents skulk around the Internet and insert themselves into our Web sites, laptops, tablets, and smartphones, eager to capture our passwords, identities, contact lists, and credit card numbers. At the same time, the capability to receive laboratory results within seconds after a test is performed; to virtually work with a patient’s entire care team to continuously monitor conditions wherever the patient and care team members may be; to align treatments with proven, outcomes-based, automated protocols and decision-support software; and to practice personalized medicine tailored to the patient’s condition, family history, and genetics, all are enabled through HIT.

Regulatory Foundation

The foundational law governing healthcare privacy and security within the United States is the Health Insurance Portability and Accountability Act (HIPAA) of 1996 (HIPAA, 1996), and its privacy and security rules (CFR, 2013). The Security Rule requires compliance with a set of administrative, physical, and technical standards, and the Privacy Rule (CFR, 2013) sets forth privacy policies and practices to be implemented. HIPAA regulations establish uniform minimum privacy and security standards. However, HIPAA establishes that any applicable state law that is more stringent than the HIPAA regulations will take precedence over the HIPAA rules. All U.S. states and territories have enacted laws requiring notification when personal information is breached, and 18 states have enacted laws protecting personal information more broadly than HIPAA and other federal regulations (Holzman & Nye, 2019).

For any healthcare organization that provides in-person or virtual services to European residents, the European General Data Protection Regulation (GDPR) may also apply (GDPR, 2019). Because the HIPAA regulations apply only to “covered entities” and their “business associates” and not to everyone who may hold health information, because more stringent state laws may take precedence over HIPAA, and because some healthcare organizations serve European residents, the privacy protections afforded individuals and the security protections applied to health information will vary depending on who holds the information, where the consumer resides, and the location from which services are provided. It is the responsibility of every nursing professional to be informed about, and to act in compliance with, the laws and regulations that apply within the jurisdiction in which care is provided.

The U.S. Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted in 2009 as part of the American Recovery and Reinvestment Act (USC, 2009), provided major structural changes, funding, and financial incentives designed to significantly accelerate the transformation of U.S. healthcare, improving efficiency, care quality, and patient safety, and reducing cost through the digitization of health information. The HITECH Act codified the Office of the National Coordinator (ONC) for Health Information Technology and assigned it responsibility for developing a nationwide infrastructure to facilitate the use and exchange of electronic health information, including policy, standards, implementation specifications, and certification criteria. In enacting the HITECH Act, Congress recognized that the meaningful use and exchange of electronic health information was key to improving the quality, safety, and efficiency of the U.S. healthcare system.

At the same time, the HITECH Act recognized that as more health information was recorded and exchanged electronically to coordinate care, monitor quality, measure outcomes, and report public health threats, the risks to personal privacy and patient safety would be heightened. This recognition is reflected in the fact that four of the eight areas the HITECH Act identified as priorities for the ONC specifically address risks to individual privacy and information security:

1.   Technologies that protect the privacy of health information and promote security in a qualified EHR, including for the segmentation and protection from disclosure of specific and sensitive individually identifiable health information, with the goal of minimizing the reluctance of patients to seek care (or disclose information about a condition) because of privacy concerns, in accordance with applicable law, and for the use and disclosure of limited data sets of such information

2.   A nationwide HIT infrastructure that allows for the electronic use and accurate exchange of health information

3.   Technologies that as a part of a qualified EHR allow for an accounting of disclosures made by a covered entity (as defined by the HIPAA of 1996) for purposes of treatment, payment, and healthcare operations

4.   Technologies that allow individually identifiable health information to be rendered unusable, unreadable, or indecipherable to unauthorized individuals when such information is transmitted in the nationwide health information network or physically transported outside the secured, physical perimeter of a healthcare provider, health plan, or healthcare clearinghouse

The HITECH Act resulted in the most significant amendments to the HIPAA Security and Privacy Rules since the rules became law. These included a Breach Notification Rule (CFR, 2009), guidance for rendering protected health information unusable, unreadable, or undecipherable to unauthorized individuals (HHS, 2013), and creation of the “wall of shame” report of major breaches of protected health information (HHS, 2019).
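The HHS guidance referenced above points to encryption as the accepted means of rendering protected health information unusable, unreadable, or undecipherable to unauthorized individuals. As a minimal sketch of the idea only, and not the guidance's prescribed implementation, the following example uses the third-party Python `cryptography` package and an invented patient record to show that encrypted data reveal nothing without the key:

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; in practice keys live in a key-management
# system, never alongside the data they protect.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"patient": "Jane Doe", "dx": "E11.9"}'  # hypothetical record

# Encrypt before transmission or physical transport...
token = cipher.encrypt(record)
assert record not in token  # ciphertext is unreadable without the key

# ...and decrypt only where the key is authorized to exist.
assert cipher.decrypt(token) == record
```

In real deployments, algorithm and key-length choices follow current NIST recommendations, as the HHS guidance directs.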

The 21st Century Cures Act, enacted in late 2016, clearly established healthcare innovations, improved efficiencies, and patient safety as high priorities for the 21st century. The law’s stated mission was to “accelerate the discovery, development, and delivery of 21st century cures, and for other purposes” (USC, 2016). Among those “other purposes” were several provisions specifically addressing individual privacy, security protection, and assurance that protective measures do not put patients’ health and well-being at risk.

First, whereas HIPAA focuses on “protected health information” (i.e., a defined subset of identifiable health information exchanged for treatment, payment, and healthcare operations), the 21st Century Cures Act addresses identifiable, sensitive information, defined as:

“information that is about an individual and that is gathered or used during the course of research and:

(A)  through which an individual is identified; or

(B)  for which there is at least a very small risk, as determined by current scientific practices or statistical methods, that some combination of the information, a request for the information, and other available data sources could be used to deduce the identity of an individual.”

This definition embodies the flexibility required to allow interpretation to evolve as “available data sources” continue to expand and as the power of information analytics continues to intensify.

Second, in proposing measures to enable new, beneficial treatment options to be available to patients more quickly, the law included provisions for expediting clinical research while protecting the safety and privacy of research participants. Specifically, the law allowed for waivers of informed consent for clinical testing that is judged to pose no more than minimal risk to the individual subjects involved and that includes appropriate safeguards to protect the individual rights, safety, and welfare of the participants. As the principal research oversight body, Institutional Review Boards (IRBs) hold the decision power over whether the risks associated with a research protocol are “minimal.” To improve the efficiency of the IRB review process, the law called for the elimination of duplicative regulations and activities, and enabled the sharing of IRBs across regulatory agencies.

Third, the law acknowledged confusion within the healthcare community regarding the ways in which the HIPAA Privacy Rule allows sharing of information pertaining to patients with mental illnesses and substance abuse disorders. Fearing that this confusion could hinder the communication of information essential for the health and safety of individuals with mental illnesses, the law called for regulatory clarification, which the HHS Office for Civil Rights issued in 2017 (HHS, 2017).

Security, Privacy, and Trust

Since the HITECH Act was enacted in 2009, HIT has assumed an increasingly important role in the provision of care and in healthcare decision-making, particularly for the hospitals and healthcare providers who are now required to use certified EHR technology—that is, technology that has been independently certified against national standards of clinical functionality and security protection. As of 2017, nearly 80% of office-based physicians, 99% of large hospitals, and 97% of medium-sized hospitals were using certified HIT (ONC, 2019). It is worth noting that many specialists whose practices lie outside the purview of HITECH are capturing and storing patient information in uncertified electronic records systems or paper files in hanging folders, and are still relying on fax machines for information exchange.

Consumers remain a bit cautious about HIT. More than 90% of providers have implemented portals that enable consumers to view their own health data, but fewer than 33% of consumers are using those portals (Heath, 2018). Data collected by the U.S. Office of the National Coordinator for HIT (ONC) revealed that, although the majority of consumers (74%) expressed confidence that their medical records were safe from unauthorized viewing, many consumers (66%) reported concern regarding electronic exchange of their health information. Ten percent of individuals reported that they had gone so far as to withhold information from their healthcare providers due to privacy and security concerns (ONC, 2019).

Today’s nurses practicing in these “wired” environments depend upon HIT to provide instantaneous access to accurate and complete health information and validated decision support, with assurance that sensitive data and individual privacy are appropriately protected. Legal obligations, ethical standards, and consumer expectations drive requirements for technological and operational assurances that data and software applications will be available when they are needed; that private and confidential information will be protected; that data will not be modified or destroyed other than as authorized; that technology will be responsive and usable; and that systems designed to perform health-critical functions will do so safely. These are the attributes of trustworthy HIT—technology that is worthy of our trust. The Markle Foundation’s Connecting for Health collaboration identified privacy and security as a technology principle fundamental to trust: “All health information exchange, including in support of the delivery of care and the conduct of research and public health reporting, must be conducted in an environment of trust, based on conformance with appropriate requirements for patient privacy, security, confidentiality, integrity, audit, and informed consent” (Markle, 2006).

Many people think of “security” and “privacy” as synonymous. Indeed, these concepts are related in that security mechanisms can help protect personal privacy by assuring that confidential personal information is accessible only by authorized individuals and entities. However, privacy is more than security, and security is more than privacy. Healthcare privacy principles were first articulated in 1973 in a U.S. Department of Health, Education, and Welfare report entitled Records, Computers, and the Rights of Citizens as “fair information practice principles” (DHEW, 1973). The Markle Foundation’s Connecting for Health collaboration updated these principles to incorporate new risks created by a networked environment in which health information routinely is electronically captured, used, and exchanged (Markle, 2006). Based on these works, as well as other national and international privacy and security principles focusing on individually identifiable information in an electronic environment (including but not limited to health), the ONC developed a Nationwide Privacy and Security Framework for Electronic Exchange of Individually Identifiable Health Information that identified eight principles intended to guide the actions of all people and entities that participate in networked, electronic exchange of individually identifiable health information (ONC, 2008). These principles essentially articulate the “rights” of individuals to openness, transparency, fairness, and choice in the collection and use of their health information.

Whereas privacy deals with the rights of individuals to control access to and use of information relating to themselves, security deals with the protection of valued information assets. Security mechanisms and assurance methods are used to protect the confidentiality, authenticity, integrity, and availability of information and data, including the capture of an accurate record of all accesses to and use of information. While these mechanisms and methods are critical to protecting personal privacy, they are also essential in protecting patient safety, care quality, and institutional integrity— and in engendering trust. For example, if laboratory results are corrupted during transmission, if data in an EHR are overwritten, or if a fraudulent electronic message is received, the nurse is likely to lose confidence that the HIT can be trusted to help provide quality care. If an adversary alters a clinical decision-support rule or launches a denial-of-service attack on a sensor system for tracking wandering Alzheimer’s patients, individuals’ lives are put at risk!
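The integrity and authenticity failures described above, such as laboratory results corrupted in transmission or a fraudulent message, are commonly countered with message authentication codes. This sketch, using Python's standard `hmac` module with a made-up shared secret and lab value, shows how a receiver can detect both tampering and forgery:

```python
import hashlib
import hmac

# Hypothetical shared secret, provisioned out of band between sender and receiver.
SECRET = b"shared-secret-provisioned-out-of-band"

def sign(message: bytes) -> bytes:
    """Compute an HMAC tag attesting to the message's origin and integrity."""
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest performs a constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(sign(message), tag)

result = b"K 5.1 mmol/L"   # invented lab result
tag = sign(result)

assert verify(result, tag)                # intact message is accepted
assert not verify(b"K 9.1 mmol/L", tag)   # altered result is rejected
```

A corrupted or spoofed message fails verification, so the clinician is never shown silently altered data.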

Trustworthiness is an attribute of each system component, each software application, and of integrated enterprise systems as a whole—including those components that may exist in “clouds” or in coat pockets. Trustworthiness is extremely difficult to retrofit, as it must be designed and integrated into the system and conscientiously preserved as the system evolves. Discovering that an operational system cannot be trusted usually indicates that extensive—and expensive—changes to the system and its operational environment are needed. In this chapter, we introduce a framework for achieving and maintaining trustworthiness in HIT.


Although we would like to be able to assume that our computing technology, networks, clouds, smartphones, and software applications are trustworthy, unfortunately that may not be the case. When computer systems, network infrastructure, and software applications fail to protect personal information and safety-critical data, or are unavailable to provide critical services when needed, our trust is undermined, and personal privacy and safety are imperiled.

Breaches of Personal Information

Since the 2009 HITECH regulation, entities covered under HIPAA have been required to report breaches affecting 500 or more individuals to HHS. HHS maintains a public Web site, frequently referred to as the “wall of shame,” listing the breaches reported. In 2018 (the year before this chapter was written), a total of 353 breaches affecting 13,025,814 individuals were reported—more than twice the number of records exposed in 2017. The 10 largest healthcare breaches in 2018 accounted for nearly 68% of the total number of records exposed that year (HHS, 2019).

Perhaps even more significant than the numbers is the dramatic shift in causes to which these breaches are attributed. In the years between 2009 and 2017, fewer than half of all reported breaches were attributed to “hacking/IT” and “unauthorized access” combined. For 2018, 86% of the breaches reported were attributed to these causes. Clearly, as more healthcare information is becoming available electronically, its perceived value as a target for intruders and insiders is increasing.

An attack on the University of Connecticut (UCONN) and UCONN Health offers a dramatic example of the potential effects of a data breach. In February 2019, UCONN Health announced that an adversary had gained access to several employees’ e-mail accounts through a phishing attack that lured employees into clicking on an e-mail link that appeared to have come from a trusted source, but actually was malware. Sensitive, personal information from 326,000 patients was compromised, including patient names, birth dates, addresses, medical information, and Social Security numbers. A forensic evaluation revealed that the UCONN systems were compromised in August 2018, but the breach was not discovered until December 24, 2018; patients received notification two months later (Davis, 2019). One of those patients who received a breach notification discovered a fraudulent transaction on her bank account and ascribed it to the UCONN breach. She subsequently filed a class-action lawsuit against UCONN, alleging that, in addition to the fraudulent banking activity detected, the breach increased the plaintiffs’ risk of financial fraud and identity theft for years to come. Note that implementation of an effective intrusion detection system (IDS), as described below, could have detected this threat much earlier.
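As a toy illustration of the anomaly-based detection an IDS performs, the following sketch (with invented account names and log entries) flags an account that logs in from a never-before-seen network outside normal working hours. Production IDS products combine far richer signals, but the principle of comparing activity against a learned baseline is the same:

```python
from collections import defaultdict

# Hypothetical access-log entries: (account, source_network, hour_of_day)
events = [
    ("nurse01", "10.12.0.0", 9),
    ("nurse01", "10.12.0.0", 14),
    ("nurse01", "203.0.113.0", 3),   # new network at 3 a.m. -- suspicious
    ("admin02", "10.12.0.0", 10),
]

baseline = defaultdict(set)  # networks each account has been seen on
alerts = []
for account, net, hour in events:
    # Alert when an established account appears from a network it has never
    # used, outside working hours (a deliberately crude heuristic).
    if baseline[account] and net not in baseline[account] and not (7 <= hour <= 19):
        alerts.append((account, net, hour))
    baseline[account].add(net)

print(alerts)  # -> [('nurse01', '203.0.113.0', 3)]
```

In the UCONN case, months elapsed between compromise and discovery; automated alerting on anomalous mailbox access is precisely the kind of control that shrinks that window.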

Denial of System Services

Service interruptions may be attributed to a number of factors, including both acts of nature and malicious human activity. Security technologies and operational practices help organizations detect potential threats to critical services, manage emergencies, and recover from service interruptions and outages.

Distributed denial-of-service (DDoS) attacks can be particularly difficult to detect and contain. In the spring of 2014, a hacker collective known as “Anonymous” launched a DDoS attack against a mental-health clinic and Boston-area children’s hospitals. The attacks were part of a “campaign” called #OpJustina aimed at bringing public attention to the case of a young girl who was separated from her parents following a misdiagnosis made by Boston Children’s Hospital medical staff. Specifically, the parents had brought their daughter to the hospital for treatment for a digestive problem, claiming she had been diagnosed as having a mitochondrial disorder. The doctors at Boston Children’s Hospital concluded that the child’s symptoms were related to a psychiatric condition and that she possibly had been abused by her parents. The hospital filed abuse charges against the parents, and the girl was sent to a mental-health treatment facility.

In retaliation, the Anonymous group launched an attack targeting the facilities’ Web sites and networks, installing malware in over 40,000 network routers. The networks of Boston Children’s Hospital and several other Boston-area hospitals were rendered unavailable. Later in the year, at a HIMSS Privacy and Security Forum, Daniel Nigrin, Boston Children’s Hospital’s Senior Vice President and CIO, presented a detailed account of the attack and lessons learned, among which were the importance of implementing DDoS countermeasures, having an inventory of critical data and services, and having an alternative to e-mail for external communications, such as Short Message Service (SMS) messaging (Ouellette, 2014). In August 2018, a federal jury convicted Martin Gottesfeld of carrying out the attacks (Cimpanu, 2019).

Unsafe Medical Devices

In a 2018 report relating to vulnerabilities in medical devices being approved by the FDA, the Office of the Inspector General (OIG) wrote:

Cybersecurity is an area of increasing risk to patients and the health care industry as more medical devices use wireless, Internet, and network connectivity. Researchers have shown that networked medical devices cleared or approved by FDA can be susceptible to cybersecurity threats, such as ransomware and unauthorized remote access, if the devices lack adequate security controls. These networked medical devices include hospital-room infusion pumps, diagnostic imaging equipment, and pacemakers (Murrin, 2018).

Just months after this report was published, the U.S. Department of Homeland Security, which oversees the security of critical infrastructure, including medical devices, alerted consumers of serious cybersecurity vulnerabilities in 16 different models of implantable defibrillators manufactured by Medtronic and sold throughout the world. The vulnerabilities would enable a sophisticated attacker to harm patients by altering the software programming in the implanted device. As many as 750,000 individuals were at risk of having the programming in their implanted devices altered or erased by a malicious attacker. The vulnerabilities also affected bedside monitors that read data from the devices in patients’ homes and clinical programming devices. The vulnerabilities were discovered by two teams of security researchers and reported to Medtronic, which investigated the issue and then reported it to authorities (Carlson, 2019).

Total Havoc!

Perhaps the finest example of what happens when an organization eschews basic security practices, dismisses security warnings, ignores critical software updates, and fails to anticipate or plan for a cyber disaster is the WannaCry ransomware attack of May 2017. Within the course of a single weekend, the WannaCry perpetrators held hostage over 300,000 computers in 150 countries, including about a third of the hospital facilities operated by the United Kingdom’s National Health Service (NHS). By exploiting known security vulnerabilities in outdated Microsoft Windows implementations, the ransomware took control of each affected computer and encrypted its data, effectively paralyzing operations, while the attackers demanded a ransom of $300, payable in bitcoin. The response was panicked improvisation rather than planned emergency action. Following a review of the incident, the National Audit Office noted that the impact of WannaCry could have been avoided if basic security practices had been applied (Palmer, 2017).

The bottom line is that computer systems, networks, software applications, medical devices, people, and enterprises are highly complex, and the only safe assumption is that “things will go wrong.” Trustworthiness is an essential attribute for the systems, software, devices, services, processes, and people used to manage individuals’ personal health information and to help provide safe, high-quality health care.


Trustworthiness can never be achieved simply by implementing a few policies and procedures and purchasing security technology that sales representatives portray as a “total solution.” Protecting sensitive and safety-critical health information, and assuring that the systems, services, and information that nurses rely upon to deliver quality care are available when needed, require a complete HIT trust framework that starts with an objective assessment of risk and that is conscientiously applied throughout the development and implementation of policies, operational procedures, and security safeguards built on a solid system architecture. This trust framework is depicted in Fig. 10.1 and comprises seven layers of protection, each of which is dependent upon the layers below it (indicated by the arrows in the figure), and all of which must work together to provide a trustworthy HIT environment for healthcare delivery. This trust framework does not dictate a physical architecture; it may be implemented within a single medical practice or across multiple institutional sites, and may comprise enterprise, mobile, device, and cloud components.

Layer 1: Risk Management

Risk management is the foundation of the HIT trust framework. Objective risk assessment informs decision-making and positions the organization to correct those physical, operational, and technical deficiencies that pose the highest risk to the information assets within the enterprise. The identification and valuation of an enterprise’s data and software applications is a critical step in risk assessment. In today’s healthcare environments, ongoing risk assessment is a cornerstone of cybersecurity protection.


• FIGURE 10.1. The trust framework comprises a layered approach to achieving a trustworthy health system environment.

Objective risk assessment also puts into place protections that will enable the organization to manage any residual risk and liability. Patient safety, individual privacy, and information security all relate to risk, which is simply the probability that some “bad thing” will happen. Risk is always considered with respect to a given context comprising relevant threats, vulnerabilities, and valued assets. Threats can be natural occurrences (e.g., earthquake, hurricane), accidents, or malicious people and software programs. Vulnerabilities are present in facilities, hardware, software, virtual environments (i.e., “clouds”), communication systems, business processes, and people. Valued assets can be anything from reputation to business infrastructure to information to human lives.

A security risk is the probability that a threat will exploit a vulnerability to expose confidential information, corrupt or destroy data (i.e., digitized information), or interrupt or deny essential information services. If that risk could result in the unauthorized disclosure of an individual’s private health information, the compromise of an individual’s identity, or a denial of an individual’s rights to control information about them, it also represents a privacy risk. If the risk could result in the corruption of clinical data or an interruption in the availability of a safety-critical system, causing human harm or the loss of life, it is a safety risk as well.
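Risk assessments commonly rank risks by combining an estimated likelihood that a threat will exploit a vulnerability with the estimated impact on the valued asset. The following sketch, using a hypothetical risk register and a simple likelihood-times-impact score, illustrates how an organization might decide which deficiencies to correct first:

```python
# Hypothetical risk register: (description, likelihood 1-5, impact 1-5).
# The entries and scores below are invented for illustration only.
register = [
    ("Phishing compromise of staff e-mail",  4, 4),
    ("Ransomware on unpatched workstation",  3, 5),
    ("Lost unencrypted laptop holding PHI",  2, 5),
    ("Data-center power outage",             2, 3),
]

# Score each risk and address the highest-scoring deficiencies first.
ranked = sorted(register, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{likelihood * impact:>2}  {name}")
```

Formal methodologies such as NIST SP 800-30 elaborate this basic likelihood-and-impact pattern with threat catalogs, vulnerability assessments, and residual-risk tracking.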

Information security (sometimes referred to as “cybersecurity”) is widely viewed as the protection of information confidentiality, data integrity, and service availability. Indeed, these are the three areas directly addressed by the technical safeguards addressed in the HIPAA Security Rule (CFR, 2013). Generally, safety is most closely associated with protective measures for data integrity and the availability of life-critical information and services, while privacy is more often linked to confidentiality protection. However, the unauthorized exposure of private health information, or corruption of one’s personal EHR as a result of an identity theft, also can put an individual’s health and safety at risk.
