Artificial Intelligence in Healthcare



Eileen Koski / Judy Murphy



INTRODUCTION



The term artificial intelligence (AI) is used to describe a computer program or system that can learn and make decisions based on its own accumulated experience. This is the primary capability that distinguishes AI from expert, decision support, or rules-based systems, which are based on expert human reasoning (Weiss & Kulikowski, 1991).


1.   How has this definition changed over time?


In some ways, the definition itself has never really changed, but as systems have become more powerful and sophisticated, expectations for the extent of capabilities that might be possible have continued to expand. The earliest formulations of the concept were that a computing machine could process information in such a way that it went beyond merely improving on the speed and accuracy of computational tasks programmed by humans to a state where its processes so closely resembled human decision-making that it could fool a human observer (Turing, 1950). As the speed and accuracy with which computers could process rules programmed for them increased, the concept was expanded to the idea that computers could both learn and extrapolate, essentially applying the rules created for one application to a different situation. Eventually, the definition evolved to mean a system that could digest information and extract pertinent concepts and outcomes on its own to apply to this process.


2.   What is specifically meant by the concept of “AI in Healthcare?”


When we speak of AI in healthcare, we are essentially referring to the application of AI methodologies to a variety of significant challenges in healthcare. Many systems referred to today as AI in healthcare most closely resemble expert systems, i.e., rules-based systems. Such systems have been programmed with extensively detailed instructions that can be processed more rapidly, efficiently, precisely, and consistently than a human could, but with essentially no interpretation beyond the scope of the original programming and limited by the knowledge of the experts from whom the rules were obtained. Newer systems can learn and offer novel insights based on digesting, parsing, combining, and evaluating data directly. In practice, there is no sharp dividing line between expert-driven and data-driven systems, as most modern systems combine multiple modalities.


A BRIEF HISTORY OF AI



The concept of artificial intelligence (AI) has been around since well before the birth of what we now think of as computers. Early “computers” or “calculators” were people who were skilled at mathematical computations. The early computing machines were primarily intended to enhance these human abilities with respect to speed, precision, and accuracy, but were not viewed as having intelligence per se.


Much of the earliest work that formed the basis of what we now think of as artificial intelligence comes from advances in many fields including engineering, biology (neural networks in single-cell organisms), experimental psychology, communication theory, game theory (notably John Von Neumann and Oskar Morgenstern), mathematics and statistics, logic and philosophy (e.g., Alan Turing, Alonzo Church, and Carl Hempel), and linguistics (Noam Chomsky’s work on grammar) (Buchanan, 2005).


In 1950, in his seminal paper “Computing Machinery and Intelligence” (Turing, 1950), Alan Turing posed the question of whether a machine could think. In this paper, he described what he called the “imitation game” in which a computer is programmed to behave like a human in the context of a game. In his broader, philosophical exploration of this concept, Turing raised the question of whether a computer can learn, which remains one of the hallmarks of any definition of AI.


Many realized that much of what Turing envisioned relied on the development of faster computing machinery as well as improved coding techniques. Progress continued along these lines, and a small group organized the Dartmouth AI Workshop in 1956, which is considered by many to be the seminal event in the formation of AI as a discipline (McCarthy, Minsky, Rochester, & Shannon, 1955).


Over time, as different elements of a learning computer were developed, it was still believed that a computer could only follow instructions and could not compete in arenas believed to require the type of mental capacities that rely on complex strategies and intuition, such as playing chess. That notion was dispelled in 1997 when IBM’s Deep Blue computer bested the world chess champion, Garry Kasparov (Weber, 1997). Even so, that system was essentially a rules-based system, albeit one that could evaluate virtually all known strategies previously used in a given situation and select the best one in a fraction of a second, but it did not create novel strategies on its own.


One of the more intriguing recent developments in 2017 was the decision by a group of engineers at Google DeepMind to build a machine that could play the game Go, not by training it on all the famous strategies used by grandmasters of the game, but by teaching it the rules of the game, training it with relatively low-level matches, and then allowing it to play against another computer and learn on its own. This machine, AlphaGo, was able to defeat human grandmasters using strategies that completely confounded established theories of the game, clearly demonstrating that they could not have been pre-programmed into the machine (Sheldon, 2017; Silver et al., 2017).


The rules of chess and Go are well known, even if the strategies for playing both games are extraordinarily complex and varied. In thinking about applying similar AI strategies to healthcare, it is important to remember that we have not yet uncovered all of the rules that govern how our bodies and minds function—and malfunction. Even so, as we continue to learn, we can certainly build on these principles moving forward.


FOUNDATIONAL CONCEPTS IN AI



The following are 10 of the foundational concepts and methodologies that form the basis of AI applications in healthcare:


1.   Natural Language Processing (NLP): Automated language analysis intended to parse unstructured text to respond to queries or otherwise extract data in analyzable form (Sager, Lyman, Bucknall, Nhan, & Tick, 1994). In healthcare, this typically refers to the process of extracting salient clinical concepts such as symptoms, diagnoses, and treatments from narrative text, such as clinical notes.


2.   Medical Language Processing: A term used to describe the application of NLP to address issues specific to medical data (Friedman, 1997; Sager, Lyman, Nhan, & Tick, 1995).


3.   Classifiers: Processes that map input data into categories or classes, also referred to as predictions. Classifiers are trained on a data set for which the proper classification is known, i.e., labeled, so that new and unlabeled data can be correctly categorized (Weiss & Kulikowski, 1991). For example, a diagnosis or relevant clinical outcome would be viewed as a classification or prediction. A set of data on a group of patients with a range of symptoms and other relevant characteristics that also includes a clinician’s or expert’s diagnosis would be considered “labeled” data that could then train a system to predict which diagnosis would apply to patients in another data set with those same characteristics, but without a specified diagnosis.


4.   Artificial Neural Networks: Computer systems loosely modeled on biological nervous systems that are a type of classifier, but in which no assumption is made about the underlying distribution of the population to which they are applied (Weiss & Kulikowski, 1991, pp. 81–82), allowing them to be applied to more complex data distributions.


5.   Machine Learning (ML): An automated system able to process large volumes of data and extract meaningful information from it (data mining) as well as to use this information to address practical problems (decision support) (Weiss & Kulikowski, 1991, pp. 113–138). Machine learning is generally divided into supervised learning, which uses expert knowledge to guide its decision-making processes and which requires labeled data, and unsupervised learning, which is more oriented toward discovery of previously undefined patterns derived directly from the data itself.


6.   Deep Learning: The process of employing multi-layered deep neural networks (DNNs), allowing integration of multiple data types. Deep learning can utilize supervised or unsupervised learning of feature representations in each layer (Yu & Deng, 2011).


7.   Cognitive Computing: A term applied to computing that can involve multiple AI methodologies applied in such a way as to replicate human cognitive performance. Such systems can be capable of examining both questions and possible solutions from new perspectives, allowing potentially novel, data-driven solutions as opposed to expert-driven ones (Marshall, Champagne-Langabeer, Castelli, & Hoelscher, 2017).


8.   Augmented Intelligence: Technology that is intended to assist humans in utilizing or extending their capabilities, i.e., assistive technologies (Information Week, 2018).


9.   Image Analysis: The process of extracting meaningful information specifically from images as opposed to numeric, categorical, or text data. In healthcare, it is typically associated with the analysis of highly complex digital diagnostic data, such as MRI images and ultrasound scan data.


10.   Speech Analysis: Similar to image analysis, but focused on extracting meaningful diagnostic and prognostic insights from patterns discernible in recorded speech (Corcoran et al., 2018).
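The relationship among several of these concepts, particularly NLP (concept 1), classifiers (concept 3), and supervised machine learning (concept 5), can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the toy clinical notes, the labels, and the helper names (`tokens`, `train`, `classify`) are invented for this sketch, not drawn from the chapter, and a real clinical NLP pipeline would be far more sophisticated.

```python
import re
from collections import Counter

# Hypothetical "labeled" training data: short notes paired with an
# expert-assigned diagnosis, as described under concept 3 (Classifiers).
TRAIN = [
    ("patient reports cough and fever", "flu"),
    ("fever with chills and cough", "flu"),
    ("itchy rash on the arm", "allergy"),
    ("sneezing and itchy eyes", "allergy"),
]

def tokens(text):
    """Crude NLP step: lower-case a note and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def train(examples):
    """Supervised learning step: count token frequencies per label."""
    counts = {}
    for text, label in examples:
        for tok in tokens(text):
            counts.setdefault(label, Counter())[tok] += 1
    return counts

def classify(model, text):
    """Classification step: score each label by token overlap, pick the best."""
    toks = tokens(text)
    scores = {label: sum(c[t] for t in toks) for label, c in model.items()}
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify(model, "high fever and a persistent cough"))  # flu
```

The point of the sketch is structural rather than clinical: unstructured text is first reduced to analyzable features (the NLP step), a model is fit to labeled examples (the supervised learning step), and new, unlabeled data is then mapped to a category (the classification step).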


HEALTHCARE VALUE PROPOSITIONS



There are numerous challenges in healthcare today that can potentially benefit from AI and others that may render AI virtually indispensable. While the science and practice of medicine have continually advanced, the reality is that our healthcare systems do not function as well as is needed, populations are not served equitably, and the cost of healthcare is spiraling out of control; yet, by many standards, the health of our population is not what it should be. AI cannot address all of the societal, political, and environmental issues at play. However, there are many ways that AI can contribute to increasing efficiency, raising standards of care, delivering on the promise of precision medicine, and supporting research.


While they are all deeply interrelated and overlapping, conceptually there are three major classes of activities in particular that are driving the need for AI solutions:


1.   Information synthesis


2.   Augmenting human performance


3.   Surveillance


Information Synthesis


On a very basic level, due to the dramatic increase in the amount of medically relevant data that is generated each year, there is too much information to be handled without computational help. Much of this increase falls into three categories:


1.   Patient Data: Data specifically generated on or by patients, as individuals and as populations, has dramatically increased. Data on patients and groups of individuals include data both intentionally and traditionally generated for medical purposes, such as test results, diagnoses, treatments, and medical histories, as well as lifestyle data on behaviors such as diet and exercise that can have medical utility or implications. Patient data may now include such data as continuous biological measurements, self-reported data, sensor data, images, audio recordings, etc.


2.   Data Complexity: The complexity of the individual data elements now being stored electronically has expanded exponentially. The introduction of such data-rich testing methodologies as gene sequencing and MRIs represents both a qualitative and quantitative increase in the complexity of what could theoretically be viewed as single data elements, i.e., test results. As lifespans have increased and populations have aged, the number of people with multiple comorbidities has increased, yielding complex treatment regimens with more significant risks of interaction effects.


3.   Literature: The amount of medical literature published each year continues to rise. Well over 1 million citations are added to MEDLINE/PubMed each year based on statistics published by the U.S. National Library of Medicine (NLM) (National Library of Medicine, 2017). The annual number of citations increased over 70% in the 10-year period from 2006 (688,444) to 2016 (1,178,360). While no researcher or clinician would need to read all literature published in any given year, a 2004 study estimated that a physician trained in epidemiology would need to spend 627.5 h per month to keep up with new professional insights that should be incorporated into their clinical knowledge base (Alper et al., 2004). Given the 70% increase in annual citations since that study was done, this would undoubtedly require over 1000 h per month by now, far exceeding the total number of hours in a month.
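The arithmetic behind the last point can be checked directly from the figures cited above. The extrapolation step is an assumption of this sketch (that reading time scales linearly with citation volume), not a claim made by the Alper et al. study.

```python
# Figures cited in the text: annual MEDLINE/PubMed citations in 2006 and
# 2016, and the Alper et al. (2004) estimate of reading-hours per month.
citations_2006 = 688_444
citations_2016 = 1_178_360
hours_2004 = 627.5

# Growth in annual citations over the 10-year period.
growth = (citations_2016 - citations_2006) / citations_2006
print(f"growth 2006-2016: {growth:.0%}")  # ~71%

# Naive linear extrapolation (an assumption for illustration): reading
# time grows in proportion to citation volume.
hours_now = hours_2004 * citations_2016 / citations_2006
hours_in_longest_month = 24 * 31  # 744 hours, the most any month contains
print(f"estimated {hours_now:.0f} h/month vs {hours_in_longest_month} h available")
```

Even under this simple proportional assumption, the estimate lands above 1000 hours per month, which is consistent with the text's observation that keeping up unaided is no longer physically possible.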


Augmenting Human Performance


Augmenting human performance is perhaps the oldest and most straightforward application of AI in healthcare.


In clinical settings, given the amount and complexity of data generated in healthcare today, it is not possible for even the most skilled clinicians to successfully digest all available information. Even if they could, not all clinicians have the same level of experience to inform them in this process. In cases of rare diseases or unusual presentations of common diseases, in particular, a patient’s presentation may fall well outside of most clinicians’ experience, often leading to delays or errors in diagnosis and treatment. The problem can be exacerbated in the case of emergency departments or underserved areas where clinicians may need to treat a wide array of problems. Treatment decisions are further complicated by the increasing number of patients with multiple comorbidities, increasing the likelihood of interaction effects that must be considered.


The impact of medical errors was well documented in the landmark 2000 Institute of Medicine publication To Err Is Human (Institute of Medicine, 2000), and a call to improve quality was issued in their follow-up publication, Crossing the Quality Chasm (Institute of Medicine, 2001). These publications focused on addressing deficiencies in the quality of care, but the 2015 publication of Improving Diagnosis in Healthcare (National Academy of Sciences, Engineering, and Medicine, 2015) emphasized the continuing problems that arise specifically from an initial missed or delayed diagnosis. Such a delay typically leads to delays in correct treatment, which can, in turn, lead to negative outcomes, psychological stress, and excess costs potentially due to both wasted, incorrect treatments as well as the costs of treating a condition at a later stage than might have been possible with an earlier diagnosis.


In research settings, many of the current issues relate to the challenge of accelerating progress toward the application of personalized and precision medicine against the backdrop of the costly and time-consuming process of running clinical trials.


Surveillance
