Exploring Human Potential

Artificial Intelligence (AI) and The Future of Medicine

Mike Magee MD

Address at the Presidents’ College, University of Hartford (May 17, 2024)

The history of Medicine has always involved a clash between the human need for compassion, understanding, and partnership, and the rigors of scientific discovery and advancing technology. At the interface of these two forces are human societies that struggle to remain forward-looking and hopeful while managing complex human relations. Holding fear and worry at bay while imagining better futures for individuals, families, communities, and societies is, in many ways, the conflict that challenges leaders pursuing peace and prosperity.

The question has always been “How can science and technology improve health without undermining humans’ freedom of choice and right to self-determination?” The rapid rise of Artificial Intelligence (AI) feels especially destabilizing because it carries, on the one hand, great promise, and on the other, great risk. The human imagination runs wild, conjuring up images of robots taking over the world and forcing humankind into submission. In response, over the next 90 minutes, we will take a “deep breath” and place science’s technologic progress in perspective.

Homo sapiens’ capacity to develop tools of every size and shape, to expand our reach and control over other species and planetary resources, has allowed our kind to not only survive but thrive. AI is only the latest example. This is not a story of humanoid machines draped in health professional costuming with stethoscopes hanging from mechanical necks. And it is not the wonder of virtual images floating in thin air, surviving in some alternate reality or alternate plane, threatening to ultimately “come alive” and manage us.

At its core, AI begins very simply with language, which we’ll get to in a moment. Starting with the history of language, before we are done we’ll introduce the vocabulary and principles of machine learning; its potential to accelerate industrialization and transcend geographic barriers; the paradox that technologic breakthroughs often under-perform when it comes to human productivity; the “dark side” of AI, including its known capacity to “hallucinate”; and some projections of what our immediate future may look like as Medicine, which accounts for roughly one-fifth of our GDP, incorporates AI into its daily life.

Language and speech are not a simple topic in the academic world. Theirs is a complex field that goes well beyond paleoanthropology and primatology. Experts in the field require a working knowledge of “Phonetics, Anatomy, Acoustics and Human Development, Syntax, Lexicon, Gesture, Phonological Representations, Syllabic Organization, Speech Perception, and Neuromuscular Control.”

Until 2019, it was generally accepted dogma that “Humans’ unique capacity for speech was the result of a voice box, or larynx, that is lower in the throat than other primates.” This human exceptionalism, the theory went, allowed the production of vowels some 300,000 years ago. From this anatomic fortune came our capacity for utterances, which over time became words and languages. Whether language enlarged the human brain, or an enlarging brain allowed for the development of language, didn’t really matter. What mattered more, most agreed, was that the ability to communicate with each other was the key to the universe.

Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.” But in 2019, a study in Science Advances titled “Which way to the dawn of speech?: Reanalyzing half a century of debates and data in light of speech science” definitively established that primate vocalization, the precursor of human speech, appeared at least three million years ago. That paper made three major points:

1. Among primates, laryngeal descent is not uniquely human.

2. Laryngeal descent is not required to produce contrasting patterns in vocalizations.

3. Living nonhuman primates produce vocalizations with contrasting formant patterns.

Translation: We’re not so special after all.

Along with these insights, experts in ancient communications imagery traced a new theoretical route “From babble to concordance to inclusivity…” One of the leaders of that movement, paleolithic archeologist Paul Pettitt, PhD, put a place and a time on this human progress when he wrote in 2021, “There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa.”

Without knowing it, Dr. Pettitt had provided a perfect intro for Google CEO Sundar Pichai, who two years later, in an introduction of Google’s new AI product, Gemini, described the offering as “our largest and most capable AI model, with natural image, audio, and video understanding and reasoning.” This was, by way of introduction, a new AI term: “multimodal.”

Google found itself in the same creative space as rival OpenAI, which had released its Large Language Model (LLM) marvel, ChatGPT, to rave reviews in late 2022.

What we call AI or “artificial intelligence” is actually a 70-year-old concept that used to be called “deep learning.” It was the brainchild of University of Chicago research scientists Warren McCulloch and Walter Pitts, who developed the concept of “neural nets” in 1944. They modeled the theoretical machine learner after human brains: multiple overlapping transit fibers, joined at synaptic nodes which, with adequate stimulus, allow gathered information to pass on to the next fiber down the line.

On the strength of that concept, the two moved to MIT in 1952 and launched the Cognitive Science Department, uniting computer scientists and neuroscientists. In the meantime, Frank Rosenblatt, a Cornell psychologist, invented the “first trainable neural network” in 1957, futuristically termed the “Perceptron.” It included a data input layer; a sandwich layer that could adjust information packets with “weights” and “firing thresholds”; and a third output layer that allowed data meeting the threshold criteria to pass down the line.
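
Rosenblatt’s scheme is simple enough to sketch in a few lines of code. The fragment below is an illustrative single-node version, with invented training data, learning rate, and epoch count: inputs are multiplied by adjustable weights, a firing threshold decides the output, and each error nudges the weights.

```python
# A minimal sketch of Rosenblatt's Perceptron idea: weighted inputs,
# a firing threshold, and a training rule that adjusts the weights.
def fire(weights, bias, inputs):
    # Weighted sum of the inputs; the node "fires" (outputs 1)
    # only if the total clears the threshold.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - fire(weights, bias, inputs)
            # Nudge each weight in proportion to its input and the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function, a linearly separable task the
# single-layer Perceptron can handle.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([fire(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

Minsky and Papert’s famous objection, touched on below, was that a single such layer cannot learn tasks that are not linearly separable (XOR being the classic example).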

Back at MIT, the Cognitive Science Department was hijacked in 1969 by mathematicians Marvin Minsky and Seymour Papert and became the MIT Artificial Intelligence Laboratory. They summarily trashed Rosenblatt’s Perceptron machine, believing it to be underpowered and inefficient at delivering even the most basic computations. By 1980, the department was ready to deliver a “never mind,” as computing power grew and algorithms for encoding thresholds and weights at neural nodes became efficient and practical.

The computing leap, experts now agree, came “courtesy of the computer-game industry,” whose “graphics processing unit” (GPU), which housed thousands of processing cores on a single chip, was effectively the neural net that McCulloch and Pitts had envisioned. By 1977, Atari had developed game cartridges and microprocessor-based hardware, with a successful television interface.

Experts say that the modern day beneficiary of the GPU is Nvidia, “founded in 1993 by a Taiwanese-American electrical engineer named Jensen Huang, initially focused on computer graphics. Driving high-resolution graphics for PC games requires particular mathematical calculations, which are more efficiently run using a ‘parallel’ system. In such a system, multiple processors simultaneously run smaller calculations that derive from a larger, more complicated problem.

In the 21st century, along came machine learning: a subset of AI that involves training algorithms to learn from data and to extrapolate from it. By chance, machine learning too is the kind of computation that requires many quick and simultaneous calculations, making it amenable to the type of parallel architecture Nvidia’s chips provide.”
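
The “parallel” idea is easy to illustrate: a large calculation is split into smaller, independent pieces that can run simultaneously, and the partial results are combined at the end. The toy sketch below uses Python threads merely to show that decomposition; a GPU performs the same trick across thousands of hardware cores at once.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Each worker handles one independent slice of the larger problem.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the big calculation into smaller, independent calculations...
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...run them simultaneously, then combine the partial results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))

print(parallel_sum_of_squares(list(range(10))))  # 285
```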

With the launch of the Internet and the commercial explosion of desktop computing, language – the fuel for human interactions worldwide – grew exponentially in importance. More specifically, the greatest demand was for language that could link humans to machines in a natural way.

With the explosive growth of text data, the focus initially was on Natural Language Processing (NLP), “an interdisciplinary subfield of computer science and linguistics primarily concerned with giving computers the ability to support and manipulate human language.” Training software initially used annotated or referenced texts to address specific questions or tasks precisely. But the usefulness and accuracy of these systems outside of their pre-determined training were limited, and inefficiency undermined their usage.

But computing power had now advanced far beyond what Warren McCulloch and Walter Pitts could have possibly imagined in 1944, while the concept of “neural nets” couldn’t be more relevant. IBM describes the modern-day version this way:

“Neural networks …are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another… Artificial neural networks are comprised of node layers, containing an input layer, one or more hidden layers, and an output layer…Once an input layer is determined, weights are assigned. These weights help determine the importance of any given variable, with larger ones contributing more significantly to the output compared to other inputs. All inputs are then multiplied by their respective weights and then summed. Afterward, the output is passed through an activation function, which determines the output. If that output exceeds a given threshold, it “fires” (or activates) the node, passing data to the next layer in the network… it’s worth noting that the “deep” in deep learning is just referring to the depth of layers in a neural network. A neural network that consists of more than three layers—which would be inclusive of the inputs and the output—can be considered a deep learning algorithm. A neural network that only has two or three layers is just a basic neural network.”
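
The forward pass IBM describes (inputs multiplied by weights, summed with a bias, passed through an activation function, handed to the next layer) can be sketched in a few lines. The numbers below are arbitrary illustrations; with one hidden layer, the network shown is a “basic” three-layer network by IBM’s counting, and adding more hidden layers is what would make it “deep.”

```python
import math

def layer(inputs, weights, biases):
    # One layer: each node multiplies the inputs by its weights, sums them,
    # adds a bias, and passes the total through an activation function.
    outputs = []
    for node_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(node_weights, inputs)) + bias
        outputs.append(1 / (1 + math.exp(-total)))  # sigmoid activation
    return outputs

# Input layer (3 values) -> hidden layer (2 nodes) -> output layer (1 node).
x = [0.5, -1.0, 2.0]  # arbitrary input values
hidden = layer(x, [[0.2, -0.4, 0.1], [0.7, 0.3, -0.5]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(round(output[0], 3))
```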

The bottom line is that the automated system responds to an internal logic. The computer’s “next choice” is determined by how well it fits in with the prior choices. And it doesn’t matter where the words or “coins” come from. Feed it data, and it will “train” itself; and by following the rules or algorithms embedded in the middle decision layers or screens, it will “transform” the acquired knowledge into “generated” language that both human and machine understand.

In 2015, a group of tech entrepreneurs including Elon Musk and Reid Hoffman, believing AI could go astray if restricted or weaponized, formed the non-profit called OpenAI. Three years later they released the first deep learning product in their GPT line, the forerunner of ChatGPT. This solution was born out of the marriage of Natural Language Processing and deep learning neural networks, with a stated goal of “enabling humans to interact with machines in a more natural way.”

The GPT stood for “Generative Pre-trained Transformer.” Built into the software was the ability to “consider the context of the entire sentence when generating the next word” – a tactic known as “auto-regressive.” As a “self-supervised learning model,” GPT is able to learn by itself from ingesting huge amounts of anonymous text; transform it by passing it through a variety of intermediary weighted screens that jury the content; and allow passage (and survival) of data that is validated. The resultant output? High-quality language that mimics human text.
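
The “generate the next word from what came before” idea can be illustrated with a deliberately tiny stand-in. A real GPT weighs the entire context with billions of learned parameters; this sketch only counts which word followed which in one made-up training sentence, then generates greedily, one word at a time.

```python
from collections import Counter, defaultdict

# A toy illustration of auto-regressive generation: each next word is
# chosen from counts of what followed the current word in training text.
corpus = "the patient felt better after the patient took the medicine".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # no known successor; stop generating
        # Greedy choice: the most frequent successor seen in training.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```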

Leadership at Microsoft was impressed, and in 2019 ponied up $1 billion to jointly participate in development of the product and serve as OpenAI’s exclusive cloud provider, taking a stake in the company’s newly created for-profit arm.

The first GPT released by OpenAI was GPT-1 in 2018. It was trained on the enormous BooksCorpus dataset. Its design included an input and output layer, with 12 successive transformer layers sandwiched in between. It was so effective in Natural Language Processing that minimal fine-tuning was required on the back end.

One year later, OpenAI released version two, called GPT-2, which was 10 times the size of its predecessor, with 1.5 billion parameters and the capacity to translate and summarize. GPT-3 followed in 2020. It had now grown to 175 billion parameters, 100 times the size of GPT-2, and was trained on a corpus of roughly 500 billion tokens of content (including that of my own book – CODE BLUE). It could now generate long passages on verbal demand, do basic math, write code, and do (what the inventors describe as) “clever tasks.” An intermediate GPT-3.5 absorbed Wikipedia entries, social media posts, and news releases.

On March 14, 2023, GPT-4 arrived, now with multimodal outputs including text, speech, images, and physical interactions with the environment. This represents a convergence of multiple technologies including databases, AI, Cloud Computing, 5G networks, personal Edge Computing, and more.

The New York Times headline announced it as “Exciting and Scary.” Their technology columnist wrote, “What we see emerging are machines that know how to reason, are adept at all human languages, and are able to perceive and interact with the physical environment.” He was not alone in his concerns. The Atlantic, at about the same time, ran an editorial titled “AI is about to make social media (much) more toxic.”

Leonid Zhukov, PhD, director of the Boston Consulting Group’s (BCG) Global AI Institute, believes offerings like ChatGPT-4 and Gemini “have the potential to become the brains of autonomous agents—which don’t just sense but also act on their environment—in the next 3 to 5 years. This could pave the way for fully automated workflows.”

Were he alive, Leonardo da Vinci would likely be unconcerned. Five hundred years ago, he wrote nonchalantly, “It had long come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things.”

Atul Butte, MD, PhD, Chief Data Scientist at UCSF, is clearly on the same wavelength. His relatively small hospital system is still a treasure trove of data: over 9 million patients, 10 hospitals, 500 million patient visits, 40,000 DNA genomes, 1.5 billion prescriptions, and 100 million outpatient visits. He is excited about the AI future, and says: “Why is AI so exciting right now? Because of the notes. As you know, so much clinical care is documented in text notes. Now we’ve got these magical tools like GPT and large language models that understand notes. So that’s why it’s so exciting right now. It’s because we can get to that last mile of understanding patients digitally that we never could unless we hired people to read these notes. So that’s why it’s super exciting right now.”

Atul is looking for answers, answers derived from AI driven data analysis. What could he do? He answers:

“I could do research without setting up an experiment.”

“I could scale that privilege to everyone else in the world.”

“I could measure health impact while addressing fairness, ethics, and equity.”

His message in three words: “You are data!” If this feels somewhat threatening, it is because it conjures up the main plot line from Frank Oz’s classic, Little Shop of Horrors, where Rick Moranis is literally bled dry by “Audrey II,” an out-of-control Venus flytrap with an insatiable desire for human blood.

Data, in the form of the patient chart, has been a feature of America’s complex health system since the 1960s. But it took a business-minded entrepreneur from Hennepin County, Minnesota, to realize it was a diamond in the rough. The founder of Charter Med Inc., a small physician-practice start-up launched in 1974, declared with enthusiasm that in the near future “Data will be king!”

Electronic data would soon link patient “episodes” to insurance payment plans, pharmaceutical and medical device products, and closed networks of physicians and other “health providers.” His name was Richard Burke, and three years after its founding, Charter Med changed its name to United Health Group. Forty-five years later, when Burke retired as head of United Healthcare, the company was #10 on the Fortune 500 with a market cap of $477.4 billion.

Not that it was easy. It was a remarkably rocky road, especially through the 1980s and 1990s. When he began empire building, Burke was dependent on massive, expensive mainframe computers, finicky electronics, limited storage capacity, and resistant physicians. But by 1992, the Medical establishment and Federal Government decided that Burke was right, and data was the future.

The effort received a giant boost that year from the Institute of Medicine, which formally recommended a conversion over time from a paper-based to an electronic data system. While the sparks of that dream flickered, fanned by “true believers” who gathered for the launch of the International Medical Informatics Association (IMIA), hospital administrators dampened the flames, citing conversion costs, unruly physicians, demands for customization, liability, and fears of generalized workplace disruption.

True believers and tinkerers chipped away on a local level. The personal computer, increasing Internet speed, local area networks, and niceties like an electronic “mouse” to negotiate new drop-down menus, alert buttons, pop-up lists, and scrolling from one list to another, slowly began to convert physicians and nurses who were not “fixed” in their opposition.

On the administrative side, obvious advantages in claims processing and document capture fueled investment behind closed doors. If you could eliminate filing and retrieval of charts, photocopying, and delays in care, there had to be savings to fuel future investments.

“What if physicians had a workstation?” movement leaders asked in 1992. While many resisted, most physicians couldn’t deny that the data load (results, orders, consults, daily notes, vital signs, article searches) was only going to increase. Shouldn’t we at least begin to explore better ways of managing data flow? Might it even be possible in the future to access a patient’s hospital record from your own private office and post an order without getting a busy floor nurse on the phone?

By the early 1990s, individual specialty locations in the hospital didn’t wait for general consensus. Administrative computing began to give ground to clinical experimentation using off-the-shelf and hybrid systems in infection control, radiology, pathology, pharmacy, and laboratory. The movement then began to consider more dynamic nursing unit systems.

By now, hospitals’ legal teams were engaged. State laws required that physicians and nurses be held accountable for the accuracy of their chart entries through signature authentication. Electronic signatures began to appear, and this was occurring before regulatory and accrediting agencies had OK’d the practice.

By now medical and public health researchers realized that electronic access to medical records could be extremely useful, but only if the data entry was accurate and timely. Already misinformation was becoming a problem.

Whether for research or clinical decision making, partial accuracy was clearly not good enough. Add to this a sudden explosion of offerings of clinical decision support tools which began to appear, initially focused on prescribing safety featuring flags for drug-drug interactions, and drug allergies. Interpretation of lab specimens and flags for abnormal lab results quickly followed.

As local experiments expanded, the need for standardization became obvious to commercial suppliers of Electronic Health Records (EHRs). In 1992, suppliers and purchasers embraced Health Level Seven (HL7) as “the most practical solution to aggregate ancillary systems like laboratory, microbiology, electrocardiogram, echocardiography, and other results.” At the same time, the National Library of Medicine engaged in the development of a Universal Medical Language System (UMLS).
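
Part of HL7 v2’s practical appeal was that lab and ancillary results travel as plain, pipe-delimited text segments that any system can split apart. The fragment below is a hypothetical, radically simplified HL7 v2-style lab result (the field values are invented, and a real conformant message carries many more required fields), with a minimal parser.

```python
# A minimal sketch of why HL7 v2 made aggregation practical: results
# travel as pipe-delimited text segments that any system can split apart.
# This fragment is illustrative only, not a complete conformant message.
hl7_message = "\r".join([
    "MSH|^~\\&|LAB|GENERAL_HOSPITAL|EHR|GENERAL_HOSPITAL|202405170830||ORU^R01|MSG0001|P|2.3",
    "PID|1||123456||DOE^JANE",
    "OBX|1|NM|GLU^Glucose||98|mg/dL|70-99|N",
])

def parse_segments(message):
    # Each segment is one line; fields within a segment are "|"-separated.
    # Keyed by segment name here; a real parser must handle repeated segments.
    return {fields[0]: fields for fields in
            (line.split("|") for line in message.split("\r"))}

segments = parse_segments(hl7_message)
obx = segments["OBX"]
print(obx[3], obx[5], obx[6])  # observation id, value, units
```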

As health care organizations struggled along with financing and implementation of EHRs, issues of data ownership, privacy, informed consent, general liability, and security began to crop up. Uneven progress also shed light on inequities in access and coverage, as well as racially discriminatory algorithms.

In 1996, the government instituted HIPAA, the Health Insurance Portability and Accountability Act, which focused protections on your “personally identifiable information” and required health organizations to ensure its safety and privacy.

All of these programmatic challenges, as well as continued resistance by physicians jealously guarding “professional privilege,” meant that by 2004 only 13% of health care institutions had a fully functioning EHR, and roughly 10% were still wholly dependent on paper records. As laggards struggled to catch up, mental and behavioral records were incorporated in 2008.

A year later, the federal government weighed in with the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act. It incentivized organizations to invest in and document EHRs that support “meaningful use.” Importantly, it also included a “stick” – failure to comply reduced an institution’s rate of Medicare reimbursement.

By 2016, EHRs were rapidly becoming ubiquitous in most communities, not only in hospitals, but also in insurance companies, pharmacies, outpatient offices, long-term care facilities, and diagnostic and treatment centers. Order sets, decision trees, direct access to online research data, barcode tracing, voice recognition, and more steadily ate away at weaknesses and justified investment in further refinements. The health consumer, in the meantime, was rapidly catching up. By 2014, the Personal Health Record was a familiar term. A decade later, personal health records are a common offering in most integrated health care systems.

All of which brings us back to generative AI. New multimodal AI entrants, like ChatGPT-4 and Gemini, are now driving our collective future. They will not be starting from scratch, but are building on all the hard-fought successes above. Multimodal, large language, self-learning mAI is limited by only one thing – data. And we are literally the source of that data. Access to us – each of us and all of us – is what is missing.

Health Policy experts in Washington are beginning to quietly ask, “What would you, as one of the 333 million citizens of the U.S., expect to offer in return for universal health insurance and reliable access to high quality basic health care services?”

Would you be willing to provide full and complete de-identified access to all of your vital signs, lab results, diagnoses, external and internal images, treatment schedules, follow-up exams, clinical notes, and genomics?  An answer of “yes” could easily trigger the creation of universal health coverage and access in America.

The Mayo Clinic is not waiting around for an answer. They recently announced a $5 billion “tech-heavy” AI transformation of their Minnesota campus. Where’s the money for the conversion coming from? Their chief partner is Google, with its new Gemini multimodal AI system. Cris Ross, the Chief Information Officer at the Mayo Clinic, says, “I think it’s really wonderful that Google will have access and be able to walk the halls with some of our clinicians, to meet with them and discuss what we can do together in the medical context.” Cooperation like that, he predicts, will generate “an assembly line of AI breakthroughs…”

So AI progress is here. But medical ethicists are already asking about the impact on culture and values. They wonder who exactly is leading this revolution. Is it, as David Brooks asked in a recent New York Times editorial, “the brain of the scientist, the drive of the capitalist, or the cautious heart of the regulatory agency?” De Kai, PhD, a leader in the field, writes, “We are on the verge of breaking all our social, cultural, and governmental norms. Our social norms were not designed to handle this level of stress.” Elon Musk added in 2018, “Mark my words, AI is far more dangerous than nukes. I am really quite close to the cutting edge in AI, and it scares the hell out of me.”

Still, few deny the potential benefits of this new technology, especially when it comes to Medicine. What could AI do for healthcare?

  1. “Parse through vast amounts of data.”
  2. “Glean critical insights.”
  3. “Build predictive models.”
  4. “Improve diagnosis and treatment of diseases.”
  5. “Optimize care delivery.”
  6. “Streamline tasks and workflows.”

So most experts have settled on full-speed ahead – but with caution. This is being voiced at the highest levels. Consider this exchange during a 2023 podcast hosted by MIT scientist Lex Fridman, whose guest was OpenAI CEO Sam Altman. Fridman reflected, “You sit back, both proud, like a parent, but almost like proud and scared that this thing will be much smarter than me. Like both pride and sadness, almost like a melancholy feeling, but ultimately joy. . .” And Altman responded, “. . . and you get more nervous the more you use it, not less?” Fridman’s simple reply: “…Yes.”

And yet, both would agree, literally and figuratively, “Let the chips fall where they may.” “Money will always win out,” said The Atlantic in 2023. As the 2024 STAT Pulse Check (“A Snapshot of Artificial Intelligence in Health Care”) revealed, over 89% of health care execs say “AI is shaping the strategic direction of our institution.” The greatest level of activity is currently administrative: activities like Scheduling, Prior Authorization, Billing, Coding, and Service Calls. But clinical activities are well represented as well. These include Screening Results, X-rays/Scans, Pathology, Safety Alerts, Clinical Protocols, and Robotics. Do the doctors trust what they’re doing? Most peg themselves as “cautiously moderate.”

A recent JAMA article described it this way: “The relationship between humans and machines is evolving. Traditionally humans have been the coach while machines were players on the field. However, now more than ever, humans and machines are becoming akin to teammates.” The AMA did its own “Physician Sentiment Analysis” in late 2023, capturing the major pluses and minuses in doctors’ views. Practice efficiency and diagnostic gains headed the list of “pros.” 65% envisioned less paperwork and fewer phone calls. 72% felt AI could aid in accurate diagnosis, and 69% saw improvements in “workflow,” including Screening Results, X-rays/Scans, Pathology, Safety Alerts, and Clinical Protocols. As for the “cons,” the main concerns centered around privacy, liability, patient trust, and the potential to further distance them from their patients.

What most everyone understands by now, however, is that it will happen. It is happening. So let’s have a look at four case studies that reinforce the fact that the future has arrived in health care.

Case I: Sickle Cell Anemia. This month, a 12-year-old boy became the first patient with sickle cell anemia to enter potentially curative gene-altering therapy. This advance was made possible by AI. As the Mayo Clinic describes, “Sickle cell anemia is one of a group of inherited disorders known as sickle cell disease. It affects the shape of red blood cells, which carry oxygen to all parts of the body. Red blood cells are usually round and flexible, so they move easily through blood vessels. In sickle cell anemia, some red blood cells are shaped like sickles or crescent moons. These sickle cells also become rigid and sticky, which can slow or block blood flow. The current approach to treatment is to relieve pain and help prevent complications of the disease. However, newer treatments may cure people of the disease.”

The problem is tied to a single protein, hemoglobin. AI was employed to figure out exactly what causes the protein to degrade. That mystery goes back to the DNA that forms the backbone of our chromosomes and their coding instructions. Put the DNA double helix under a microscope and you uncover a series of chemical compounds called nucleotides. Each of the four different nucleotides is constructed of 1 of 4 different nucleobases, plus a sugar and a phosphate compound. The 4 nucleobases are cytosine, guanine, thymine, and adenine. The coupling capacity of the two strands of DNA results from a horizontal linking of cytosine to guanine, and thymine to adenine. By “reaching across,” the double helix is established. The lengthening of each strand, and the creation of the backbone that supports these nucleobase cross-connects, rely on the phosphate links to the sugar molecules.

Embedded in this arrangement is a “secret code” for the creation of all human proteins, essential molecules built out of a collection of 20 different amino acids. What scientists discovered in the last century was that those four nucleobases, in links of three, created 64 unique groupings they called “codons.” 61 of the 64 possibilities direct the addition of one of the 20 amino acids to a future protein chain (so some amino acids have duplicate codons). The other three are “stop codons,” which end a protein chain.
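
The arithmetic in this paragraph is easy to verify in code: four bases taken three at a time give 64 codons, and removing the three stop codons leaves the 61 coding combinations.

```python
from itertools import product

# The combinatorics behind the "secret code": 4 bases taken 3 at a time
# yield 64 possible codons; 3 are stop signals, leaving 61 that code
# for the 20 amino acids (so many amino acids have duplicate codons).
bases = "ACGT"
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
stop_codons = {"TAA", "TAG", "TGA"}  # stop codons, written as DNA

print(len(codons))                                        # 64
print(len([c for c in codons if c not in stop_codons]))   # 61
```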

To give just one example, a DNA “codon” of adenine to thymine to guanine (ATG) directs the addition of the amino acid methionine to a protein chain. Now, to make our hemoglobin mystery just a bit more complicated, the hemoglobin molecule is made of four protein chains, which in total contain 574 amino acids. But these four chains are not simply laid out in parallel with military precision. No, their chemical structure folds spontaneously, creating a 3-dimensional shape that affects their biological functions.

The very complex work of discovering new drugs has traditionally required that scientists first identify the relevant proteins and how they function; then define, by laborious experimentation, the chemical and physical structure of each protein; and finally design a new medicine to alter or augment the protein’s function. At least, that was the process before AI.

In 2020, Google’s AI effort, DeepMind, announced that its generative AI engine, when fed a DNA codon database from human genomes, had taught itself how to predict or derive the physical folding structure of individual human proteins, including hemoglobin. The product, considered a breakthrough in biology, was titled “AlphaFold 2.”

Not to be outdone, their Microsoft-supported competitor, OpenAI, announced a few years later that their latest ChatGPT-3 could now “speak protein.” Using this ability, they were able to say with confidence that the collective human genome harbored 71 million potential codon mistakes. As for you, the average human: your personal genome includes some 9,000 codon mistakes, and thankfully, most of these prove harmless.

But in the case of Sickle Cell, that is not the case. And amazingly, ChatGPT-3 confirmed that this devastating condition was the result of a single codon mutation or mistake – the substitution of GTG for GAG, altering 1 of hemoglobin’s 574 amino acids. Investigators and clinicians, with several years of experience under their belts using a gene editing technology called CRISPR (“Clustered Regularly Interspaced Short Palindromic Repeats”), were quick to realize that a cure for Sickle Cell might be on the horizon. On December 8, 2023, the FDA approved the first CRISPR gene editing treatment for Sickle Cell Disease. Five months later, the first patient entered treatment.
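
The single-letter nature of the sickle cell mutation can be shown directly: GAG and GTG differ by one base, yet swap glutamic acid for valine in the beta-globin chain. The two-entry codon table below is an illustrative fragment of the full 64-codon genetic code.

```python
# The single-letter mistake behind sickle cell disease: in the affected
# beta-globin codon, GAG (glutamic acid) becomes GTG (valine).
# Only the two codons involved appear in this illustrative table.
codon_to_amino_acid = {"GAG": "glutamic acid", "GTG": "valine"}

normal_codon, sickle_codon = "GAG", "GTG"
diffs = [i for i, (a, b) in enumerate(zip(normal_codon, sickle_codon)) if a != b]

print(f"{len(diffs)} base changed, at position {diffs[0] + 1} (A -> T)")
print(codon_to_amino_acid[normal_codon], "->", codon_to_amino_acid[sickle_codon])
```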

Case II: Facial Recognition Technology

Facial Recognition Technology (FRT) dates back to the work of American mathematician and computer scientist Woodrow Wilson Bledsoe in 1960. His now-primitive algorithms measured the distance between coordinates on the face, enriched by adjustments for light exposure, tilts of the head, and three-dimensional rotation. That work triggered an unexpectedly intense commercial interest in potential applications, primarily by law enforcement, security, and military clients.
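
Bledsoe’s distance-between-coordinates idea can be sketched as follows. The landmark coordinates are invented for illustration, and modern FRT replaces such hand-built measurements with deep neural networks; the principle of comparing faces by a profile of measurements is the same.

```python
import math

# A sketch of Bledsoe-era facial measurement: reduce a face to the
# distances between marked landmark coordinates, then compare faces by
# how similar their distance profiles are. Coordinates here are made up.
def distance_profile(landmarks):
    points = list(landmarks.values())
    # Pairwise distances between every pair of landmarks.
    return [math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]]

face_a = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60), "mouth": (50, 80)}
face_b = {"left_eye": (31, 41), "right_eye": (69, 40), "nose": (50, 61), "mouth": (51, 82)}

# A small total difference between profiles suggests the same face.
difference = sum(abs(a - b) for a, b in
                 zip(distance_profile(face_a), distance_profile(face_b)))
print(round(difference, 2))
```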

The world of FRT has always been big business, but the emergence of large language models and sophisticated neural networks (like GPT-4 and Gemini) has widened its audience well beyond security, with health care applications now competing for human and financial resources.

Whether you are aware of it or not, you have been a target of FRT. The US has the highest density of closed-circuit cameras in the world, at 15.28 per 100 people. On average, every American is caught on a closed-circuit camera 238 times a week, and experts say that’s nothing compared to where our “surveillance society” will be in a few years.

They are everywhere – security, e-commerce, automobile licensing, banking, immigration, airport security, media, entertainment, traffic cameras – and now health care with diagnostic, therapeutic, and logistical applications leading the way.

Machine learning and AI have allowed FRT to displace voice recognition, iris scanning, and fingerprinting.

Part of this goes back to Covid – and universal masking. FRT allowed “contactless” identity confirmation at a time when global societies were understandably hesitant to engage in any flesh-to-flesh contact. Might it be possible, even from a distance, to identify you from just a fragment of a facial image, even with most of your face covered by a mask?

That final question is the one DARPA, the Defense Advanced Research Projects Agency, was attempting to answer in the spring of 2020 when it funded researchers at Wuhan University. If that all sounds familiar, it is because the very same DARPA, a few years earlier, had quietly funded controversial “gain of function” viral re-engineering research by U.S.-trained Chinese researchers at the very same university.

The pandemic explosion a few months later converted the entire local population to 100% mask-wearing, making it an ideal laboratory to test whether the FRT of the day could identify a specific human from partial periorbital images alone. It couldn’t – at least not well enough. The studies revealed successful identification only 39.55% of the time, compared with a 99.77% success rate for full-face images.

The field of FRT is on fire. Emergen Research projects an annual market of nearly $14 billion USD by 2028, with a Compound Annual Growth Rate of almost 16%. Detection, analysis, and recognition are all potential winners. There are now 277 unique organizational investor groups offering “breakthroughs” in FRT, with an average decade of experience at their backs.
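The compounding arithmetic behind such projections is easy to sanity-check. The base-year figure below is back-calculated from the two numbers given in the text (the 2028 market size and the growth rate), not taken from the report itself:

```python
# Sanity check of a compound-annual-growth-rate (CAGR) projection.
# end_value = start_value * (1 + rate) ** years

def value_after(start, cagr, years):
    """Project a value forward at a constant compound annual growth rate."""
    return start * (1 + cagr) ** years

def implied_start(end, cagr, years):
    """Work backward: the base value implied by an end value and a CAGR."""
    return end / (1 + cagr) ** years

# What market size, seven years earlier, grows to ~$14B by 2028 at 16%/yr?
print(round(implied_start(14.0, 0.16, 7), 2))  # -> 4.95 (in $B)
```

In other words, a 16% CAGR roughly triples the market over seven years – a useful reality check on any “on fire” growth claim.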

Company names may not yet be familiar to all – like Megvii, Clear Secure, Any Vision, Clarify, Sensory, Cognitec, iProov, TrueFace, CareCom, Kairos – but they soon will be.

The medical research community has already expanded way beyond “contactless” patient verification. According to HIMSS Media, 86% of health care and life science organizations use some version of AI.

How comfortable are the FDA and the medical ethics community with new, super-charged, medical Facial Recognition Technology (mFRT) that claims it can “identify the early stages of autism in infants as young as 12 months”? That test already has a name – the RightEye GeoPref Autism Test. Its UC San Diego designer says it was 86% accurate in testing 400 infants and toddlers.

Or how about Face2Gene, which claims its mFRT tool has already linked half of the known human genetic syndromes to “facial patterns”?

Or how about employers using mFRT facial and speech patterns to identify employees likely to develop early dementia, and adjusting career trajectories for those individuals? Are we OK with that?

What about your doctor requiring AiCure’s video mFRT to confirm that you really are taking the medications you say you are – or, maybe in the future, to monitor any abuse of alcohol?

How do we feel about the use of mFRT to diagnose genetic diseases, disabilities, depression, or Alzheimer’s, using systems that are loosely regulated or unregulated by the FDA?

The sudden explosion of research into the use of mFRT to “diagnose genetic, medical and behavioral conditions” is especially troubling to Medical Ethicists who see this adventure as “having been there before,” and not ending well.

In 1872, it all began innocently enough with Charles Darwin’s publication of “The Expression of the Emotions in Man and Animals.” He became the first scientist to use photographic images in a publication to “document the expressive spectrum of the face.” Typing individuals through their images and appearance “was a striking development for clinicians.”

Darwin’s cousin, Francis Galton, a statistician, took his cousin’s data, synthesized “identity deviation,” and “reverse-engineered” what he considered the “ideal type” of human – “an insidious form of human scrutiny” that would become Eugenics (from the Greek “eugenes,” meaning “well born”). Expansion throughout academia rapidly followed, and validation by our legal system helped spread and cement the movement to all kinds of “imperfection,” with sanitized labels like “mental disability” and “moral delinquency.” Justice and sanity did catch up eventually, but it took decades – and that was before AI and neural networks. What if Galton had had a Gemini Ultra “explicitly designed for facial recognition”?

Complicating our future further, say experts, is the fact that generative AI with its “deep neural networks is currently a self-training, opaque ‘black box’…incapable of explaining the reasoning that led to its conclusion…Becoming more autonomous with each improvement, the algorithms by which the technology operates become less intelligible to users and even the developers who originally programmed the technology.”

Laissez-faire as a social policy doesn’t seem to work well at the crossroads of medicine and technology. Useful, even groundbreaking, discoveries are likely on the horizon. But profit-seeking mFRT entrepreneurs, in total, will likely add cost while further complicating an already beleaguered patient-physician relationship.

Case III: AI Assisted Surgery

If you talk to consultants about AI in Medicine, it’s full speed ahead: GenAI assistants, “upskilling” the workforce, reshaping customer service, new roles supported by reallocated budgets – and always with one eye on “the dark side.”

But one area that has been relatively silent is surgery. What’s happening there? In June 2023, the American College of Surgeons (ACS) weighed in with a report that largely stated the obvious: “The daily barrage of news stories about artificial intelligence (AI) shows that this disruptive technology is here to stay and on the verge of revolutionizing surgical care.”

Their summary self-analysis was cautious, stating: “By highlighting tools, monitoring operations, and sending alerts, AI-based surgical systems can map out an approach to each patient’s surgical needs and guide and streamline surgical procedures. AI is particularly effective in laparoscopic and robotic surgery, where a video screen can display information or guidance from AI during the operation.”

It is increasingly obvious that the ACS is not anticipating an invasion of robots. In many ways, this is understandable. The operating theater does not reward hyperbole or flashy performances. In an environment where risk is palpable, and a simple tremor at the wrong time and in the wrong place can be deadly, surgical players are well rehearsed and trained to remain calm, conservative, and alert members of the “surgical team.”

Johnson &amp; Johnson’s AI surgery arm, MedTech, brands surgeons as “high-performance athletes” who are continuous trainers and learners – but also time-constrained “busy surgeons.” The heads of its AI business unit say they want “to make healthcare smarter, less invasive, more personalized and more connected.” As a business unit, they decided to focus heavily on surgical education: “By combining a wealth of data stemming from surgical procedures and increasingly sophisticated AI technologies, we can transform the experience of patients, doctors and hospitals alike. . . When we use AI, it is always with a purpose.”

The surgical suite is no stranger to technology. Over the past few decades, lasers, laparoscopic equipment, microscopes, embedded imaging, all manner of alarms and alerts, and stretcher-side robotic workstations have become commonplace. It’s not as if medical AI is the ACS’s first tech rodeo.

Mass General surgeon Jennifer Eckhoff, MD, sees the movement in broad strokes: “Not surprisingly, the technology’s biggest impact has been in the diagnostic specialties, such as radiology, pathology, and dermatology.” University of Kentucky surgeon Danielle Walsh, MD, also looks to other departments: “AI is not intended to replace radiologists – it is there to help them find a needle in a haystack.” But make no mistake, surgeons are aware that change is on the way. For University of Minnesota surgeon Christopher Tignanelli, MD, the future is now: “AI will analyze surgeries as they’re being done and potentially provide decision support to surgeons as they’re operating.”

AI robotics as a challenger to their surgical roles, most believe, is pure science fiction. But as a companion and team member, most see the role of AI increasing, and increasing rapidly, in the O.R. The greater the complexity, the greater the need. As Mass General’s Eckhoff says, “Simultaneously processing vast amounts of multimodal data, particularly imaging data, and incorporating diverse surgical expertise will be the number one benefit that AI brings to medicine. . . Based on its review of millions of surgical videos, AI has the ability to anticipate the next 15 to 30 seconds of an operation and provide additional oversight during the surgery.”

Because surgery is the powerful profit center of most hospitals, dollars are likely to keep up with visioning as long as the “dark side of AI” is kept at bay. That includes “guidelines and guardrails” as outlined by new, rapidly forming elite academic AI collaboratives like the Coalition for Health AI. Quality control, acceptance of liability and personal responsibility, and patient confidence and trust are all prerequisites. But the rewards – in the form of diagnostics, real-time safety feedback, precise and tremor-less technique, speedy and efficient execution, and improved outcomes – will likely more than make up for the investment in time, training, and dollars.

Case IV: Scalable Privilege

As we have seen, Generative AI is a consumer of data. From that data, it draws conclusions, which yield more data. One of the troubling questions that has recently emerged is “What makes AI stop?” An answer to that question awaits further research. But what we know already is that AI output, while capable of “hallucinations” and missteps, more often exposes uncomfortable truths. And in doing so, it may instigate needed change in societies like ours, where self-interest and extreme profit-seeking have dampened our thirst for community and population health.

While the designers and funders of AI clearly have some level of altruism, it is important to remember that the primary motive, at least in the social network space, is dollars – specifically dollars coming from advertising. And one of the primary sources of those dollars over the past century – first in the form of newspapers, then radio, followed by television and internet social networks – has been pharmaceutical marketing.

The monetization of social networks begins (but does not end) with search, which is now AI-driven and multimodal. As Google said in 2024 with the launch of “Gemini,” “In the near term, Gemini is set to transform search, advertising, and productivity software like Gmail.” It was already chasing its competitor, OpenAI, funded with a $1 billion investment from Microsoft. Back in 2009, then-Microsoft CEO Steve Ballmer had announced its search engine, Bing, with some fanfare. But Bing stuttered and stalled under withering competition as “Google it” became a household phrase. Beginning with the emergence of GPT-1 in 2018, however, the search world changed. In March 2023, Bing, powered by GPT-4, exceeded 100 million active daily users.

Business consultants like BCG (Boston Consulting Group) took notice. Their 2024 AI analysis stated, “Success in the age of AI will largely depend on an organization’s ability to learn and change faster than it ever has before. . . Deploy, Reshape, Invent.” That report said the future for AI – and especially in health care – had arrived. It predicted “massive upskilling” supported with dollars and manning tables; “GenAI-enabled assistants”; a “reshape play” incorporating customer service, marketing, and software engineering; and efficiency gains of at least 30%.

What Academic Medicine has already noticed is AI’s exposure of racial bias built into clinical algorithms that have driven decision making and patient care over the past half century. With recognition has come correction. For example, the American Academy of Pediatrics’ clinical protocol algorithm for the evaluation and treatment of children who appear in an emergency room with signs and symptoms of a urinary tract infection included a separate grading system for Black children that led to systematic under-treatment compared with White children. Once the algorithm was updated, the treatment level of Black children rose from 17% to 65%.

In the 2022 paper announcing the AAP policy change in their journal, Pediatrics, the authors recounted that the racial bias was deeply seated and dated back to the birth of our nation. Thomas Jefferson, in his 1781 treatise “Notes on the State of Virginia,” claimed that Black people had less kidney output, more heat tolerance, and poorer lung function than White individuals. As the AAP wrote, “Flawed science was used to solidify the permanence of race [and] reinforce the notions of racial superiority…. The roots of the false idea that race is a biological construct can be traced to efforts to draw distinctions between black and white people to justify slavery.”

AI data analysis has also found flaws in other areas. For example, algorithms for the treatment of heart failure added three weighting points for non-Black patients, ensuring higher levels of therapeutic intervention. And algorithms predicting success in vaginal birth after a prior C-section scored lower if the mother was African American, ensuring a higher rate of repeat C-sections in Black patients.
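The mechanics of such bias are mundane, which is why they went unnoticed for so long. A short, purely hypothetical sketch shows how a single race-based adjustment shifts otherwise identical patients across a treatment threshold; the variables, weights, and cutoff are invented for illustration and are not the actual algorithms discussed above:

```python
# Hypothetical point-based risk score illustrating how a race-based
# adjustment changes a treatment decision. All weights, variables, and
# the threshold are invented for illustration only.
def risk_score(age_points, symptom_points, race, race_adjustment=3):
    score = age_points + symptom_points
    if race != "Black":
        score += race_adjustment  # the kind of weighting AI audits exposed
    return score

THRESHOLD = 10  # treat if score >= THRESHOLD (invented cutoff)

same_clinical_picture = dict(age_points=4, symptom_points=4)
print(risk_score(**same_clinical_picture, race="White") >= THRESHOLD)  # True
print(risk_score(**same_clinical_picture, race="Black") >= THRESHOLD)  # False
# Identical clinical findings, different treatment decision.
```

Because the adjustment is buried inside a scoring function rather than stated as policy, only a systematic audit of inputs and outputs – exactly what AI-driven analysis makes cheap – reveals the disparity.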

These and other policy weaknesses brought to light by AI have raised institutional awareness that data from the best medical systems could be applied nationally and result in better care for all. This consensual activity – offering de-identified medical data for generalized analysis and benefit – has been termed “scalable privilege.”

What might we learn in the short term from AI policy instruction? We would likely learn how to create a national system that is less expensive, less biased, less impacted by local geographic and social determinants, less adversely affected by pharmaceutical DTC ads, and less subject to widespread fraud and abuse by PBMs (Pharmacy Benefit Managers). Equally, AI could highlight new approaches to medical education, more effective and efficient drug discovery, real-time outcome measurement, and methods of targeting and individualizing health and wellness plans.

So we end where we began. The history of Medicine has always involved a clash between the human need for compassion, understanding, and partnership, and the rigors of scientific discovery and advancing technology. At the interface of these two forces are human societies that struggle to remain forward looking and hopeful while managing complex human relations. It is a conflict in many ways to hold fear and worry at bay while imagining better futures for individuals, families, communities and societies.

The key question remains, “How do we make America and all Americans healthy?” AI clearly has an important role to play in finally answering that question.
