HealthCommentary

Exploring Human Potential

Will AI Revolutionize Surgical Care? Yes, But Maybe Not How You Think.

Posted on | May 2, 2024 | 2 Comments

Mike Magee

If you talk to consultants about AI in Medicine, it’s full speed ahead: GenAI assistants, “upskilling” the workforce, reshaping customer service, new roles supported by reallocated budgets, and always with one eye on “the dark side.”

But one area that has been relatively silent is surgery. What’s happening there? In June 2023, the American College of Surgeons (ACS) weighed in with a report that largely stated the obvious. They wrote, “The daily barrage of news stories about artificial intelligence (AI) shows that this disruptive technology is here to stay and on the verge of revolutionizing surgical care.”

Their summary self-analysis was cautious, stating: “By highlighting tools, monitoring operations, and sending alerts, AI-based surgical systems can map out an approach to each patient’s surgical needs and guide and streamline surgical procedures. AI is particularly effective in laparoscopic and robotic surgery, where a video screen can display information or guidance from AI during the operation.”

It is increasingly obvious that the ACS is not anticipating an invasion of robots. In many ways, this is understandable. The operating theater does not reward hyperbole or flashy performances. In an environment where risk is palpable, and a simple tremor at the wrong time and place can be deadly, surgical players are well-rehearsed and trained to remain calm, conservative, and alert members of the “surgical team.”

Johnson & Johnson’s AI surgery arm, MedTech, brands surgeons as “high-performance athletes” who are continuous trainers and learners…but also time-constrained “busy surgeons.” The heads of their AI business unit say that they want “to make healthcare smarter, less invasive, more personalized and more connected.” As a business unit, they decided to focus heavily on surgical education. “By combining a wealth of data stemming from surgical procedures and increasingly sophisticated AI technologies, we can transform the experience of patients, doctors and hospitals alike. . . When we use AI, it is always with a purpose.”

The surgical suite is no stranger to technology. Over the past few decades, lasers, laparoscopic equipment, microscopes, embedded imaging, all manner of alarms and alerts, and stretcher-side robotic work stations have become commonplace. It’s not like mAI is ACS’s first tech rodeo.

Mass General surgeon Jennifer Eckhoff, MD, sees the movement in broad strokes. “Not surprisingly, the technology’s biggest impact has been in the diagnostic specialties, such as radiology, pathology, and dermatology.” University of Kentucky surgeon Danielle Walsh, MD, also chose to look at other departments: “AI is not intended to replace radiologists – it is there to help them find a needle in a haystack.” But make no mistake, surgeons are aware that change is on the way. For University of Minnesota surgeon Christopher Tignanelli, MD, the future is now. He says, “AI will analyze surgeries as they’re being done and potentially provide decision support to surgeons as they’re operating.”

Most believe that AI robotics as a challenger to their surgical roles is pure science fiction. But as a companion and team member, most see the role of AI increasing, and increasing rapidly, in the O.R. The greater the complexity, the more the need. As Mass General’s Eckhoff says, “Simultaneously processing vast amounts of multimodal data, particularly imaging data, and incorporating diverse surgical expertise will be the number one benefit that AI brings to medicine. . . Based on its review of millions of surgical videos, AI has the ability to anticipate the next 15 to 30 seconds of an operation and provide additional oversight during the surgery.”

Since surgery is the powerful profit center for most hospitals, dollars are likely to keep up with visioning as long as the “dark side of AI” is kept at bay. That includes “guidelines and guardrails” as outlined by new, rapidly forming elite academic AI collaboratives, like the Coalition for Health AI. Quality control, acceptance of liability and personal responsibility, and patient confidence and trust are all prerequisites. But the rewards, in the form of diagnostics, real-time safety feedback, precise and tremor-less technique, speed and efficient execution, and improved outcomes, will likely more than make up for the investment in time, training, and dollars.

The AI Enhanced Personal Health Record – The Key That Unlocks The Door To Universal Health Care.

Posted on | April 23, 2024 | No Comments

Mike Magee

In a system that controls 1/5 of the U.S. GDP; one that in 2017 employed 16 non-clinical workers for every physician; and one that under-performs at every turn (most notably for women and children, the poor, and people of color); one would be hard pressed to identify a better target for AI-driven national reform.

Pessimists say we’ve been down this road before, and that the various arms of the Medical Industrial Complex will place enough roadblocks in the way to slow down this transforming steamroller.

But optimists suggest that this time is different, and that the entry of generative Artificial Intelligence (or “Augmented Intelligence” – the AMA’s preferred term for AI) is, in fact, a real game changer –  and that your Personal Health Record is the key that unlocks the door.

Surveys show that 8 in 10 health care execs already use generative AI in some form, and that 2/3 of physicians see advantages for themselves and their patients. From clerical to clinical to discovery, opportunity abounds. A technology that can self-correct its own errors and is easy enough to use that health professionals and the people they care for start on an even playing field sounds like a “safe bet.”

But seasoned health reformers increasingly point to a third factor – the infrastructure already in place with Electronic Health Records (EHRs), and the knowledge and connectivity we’ve built as we’ve overcome obstacles over the past three decades. Consider, they say, where we have been, and how far we have come.

In my father’s day, and throughout much of my own training, paper “patient charts” ruled the day. As I began my surgical training in 1973, the value of electronic health records (EHRs) was still largely theoretical, and their usefulness was largely defined as the capacity to finally ensure that physician handwriting was legible.

In the first two decades of experimentation with various hybrid forms of EHRs, the focus was on hospital billing and scheduling systems supported by large mainframe computers with wired terminals and limited storage. The notion of physician data entry was seen as largely impractical on both behavioral and financial grounds. By 1990, early medical IT dreamers were imagining a conversion as personal computing (“affordable, powerful, and compact”) emerged, fed by data flowing over the Internet.

In 1992, the effort received a giant boost from the Institute of Medicine, which formally recommended a conversion over time from a paper-based to an electronic data system. While the sparks of that dream flickered, fanned by “true believers” who gathered for the launch of the International Medical Informatics Association (IMIA), hospital administrators dampened the flames, citing conversion costs, unruly physicians, demands for customization, liability, and fears of generalized workplace disruption.

True believers and tinkerers chipped away on a local level. The personal computer, increasing Internet speed, local area networks, and niceties like an electronic “mouse” to negotiate new drop-down menus, alert buttons, pop-up lists, and scrolling from one list to another, slowly began to convert physicians and nurses who were not “fixed” in their opposition.

On the administrative side, obvious advantages in claims processing and document capture fueled investment behind closed doors. And entrepreneurs were already predicting that “data would be king” in the future. If you could eliminate filing and retrieval of charts, photocopying, and delays in care, there had to be savings to fuel future investments. 

What if physicians had a “workstation,” movement leaders asked in 1992? While many resisted, most physicians couldn’t deny that the data load (results, orders, consults, daily notes, vital signs, article searches) was only going to increase. Shouldn’t we at least begin to explore better ways of managing data flow? Might it even be possible in the future to access a patient’s hospital data in your own private office and post an order without getting a busy floor nurse on the phone?

By the early 1990s, individual specialty locations in the hospital didn’t wait for general consensus. Administrative computing began to give ground to clinical experimentation using off-the-shelf and hybrid systems in infection control, radiology, pathology, pharmacy, and laboratory. The movement then began to consider more dynamic nursing unit systems.

By now, hospitals’ legal teams were engaged. State laws required that physicians and nurses be held accountable for the accuracy of their chart entries through signature authentication. Electronic signatures began to appear, even before regulatory and accrediting agencies had approved the practice.

By now, medical and public health researchers realized that electronic access to medical records could be extremely useful, but only if the data entry was accurate and timely. Already, misinformation was becoming a problem. Whether for research or clinical decision making, partial accuracy was clearly not good enough. Add to this a sudden explosion of clinical decision support tools, initially focused on prescribing safety and featuring flags for drug-drug interactions and drug allergies. Interpretation of lab specimens and flags for abnormal lab results quickly followed.
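The flag logic behind those early prescribing-safety tools can be illustrated with a toy lookup table. This is a sketch only – the drug pairs, warnings, and function names are invented for illustration and are not clinical guidance:

```python
# Illustrative only: a toy interaction table, not a clinical reference.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "myopathy risk",
}

def check_prescription(new_drug, current_meds, allergies):
    """Return alert strings for known drug-drug interactions or allergy matches."""
    alerts = []
    if new_drug in allergies:
        alerts.append(f"ALLERGY ALERT: patient is allergic to {new_drug}")
    for med in current_meds:
        # Order-independent pair lookup: {A, B} matches whether A or B is new
        warning = INTERACTIONS.get(frozenset({new_drug, med}))
        if warning:
            alerts.append(f"INTERACTION with {med}: {warning}")
    return alerts
```

Real systems of the era worked much the same way at heart: a curated interaction knowledge base consulted at order entry, surfacing alerts before the prescription was signed.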

As local experiments expanded, the need for standardization became obvious to commercial suppliers of EHRs. In 1992, suppliers and purchasers embraced Health Level Seven (HL7) as “the most practical solution to aggregate ancillary systems like laboratory, microbiology, electrocardiogram, echocardiography, and other results.” At the same time, the National Library of Medicine engaged in the development of a Universal Medical Language System (UMLS).
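Part of HL7’s practical appeal was its simplicity: version 2 messages are plain pipe-delimited text that any ancillary system could emit and parse. A minimal sketch of reading one lab-result (OBX) segment follows; the sample values are invented, and real HL7 parsing must also handle escape sequences, repeated fields, and component separators:

```python
# A sample HL7 v2 observation/result (OBX) segment; values are invented.
SAMPLE_OBX = "OBX|1|NM|GLU^Glucose||95|mg/dL|70-110|N"

def parse_obx(segment):
    """Split a pipe-delimited OBX segment into its named fields."""
    fields = segment.split("|")
    code, _, name = fields[3].partition("^")  # observation identifier: code^text
    return {
        "test_code": code,
        "test_name": name,
        "value": fields[5],
        "units": fields[6],
        "reference_range": fields[7],
        "abnormal_flag": fields[8],
    }
```

That one shared, human-readable wire format is what let laboratory, microbiology, ECG, and echo systems from different vendors feed a single record.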

As health care organizations struggled with the financing and implementation of EHRs, issues of data ownership, privacy, informed consent, general liability, and security began to crop up. Uneven progress also shed light on inequities in access and coverage, as well as racially discriminatory algorithms.

In 1996, the government instituted HIPAA, the Health Insurance Portability and Accountability Act, which focused protections on your “personally identifiable information” and required health organizations to ensure its safety and privacy.

All of these programmatic challenges, as well as continued resistance by physicians jealously guarding “professional privilege,” meant that by 2004 only 13% of health care institutions had a fully functioning EHR, and roughly 10% were still wholly dependent on paper records. As laggards struggled to catch up, mental and behavioral records were incorporated in 2008.

A year later, the federal government weighed in with the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act. It incentivized organizations to invest in EHRs and document their “meaningful use.” Importantly, it also included a “stick”: failure to comply reduced an institution’s rate of Medicare reimbursement.

By 2016, EHRs were rapidly becoming ubiquitous in most communities, not only in hospitals, but also in insurance companies, pharmacies, outpatient offices, long-term care facilities and diagnostic and treatment centers. Order sets, decision trees, direct access to online research data, barcode tracing, voice recognition and more steadily ate away at weaknesses, and justified investment in further refinements.

The health consumer, in the meantime, was rapidly catching up. By 2014, Personal Health Records were a familiar term. A decade later, they are a common offering in most integrated health care systems.

All of which brings us back to generative AI and new multimodal entrants like ChatGPT-4 and Gemini. They will not be starting from scratch, but are building on all the hard-fought successes above.

Multimodal, large language, self-learning mAI is limited by only one thing – data. And we are literally the source of that data. Access to us – each of us and all of us – is what is missing.

What would you, as one of the 333 million citizens of the U.S., expect to offer in return for universal health insurance and reliable access to high-quality basic health care services?

Would you be willing to provide full and complete de-identified access to all of your vital signs, lab results, diagnoses, external and internal images, treatment schedules, follow-up exams, clinical notes, and genomics?  An answer of “yes” could easily trigger the creation of universal health coverage and access in America.

Two key questions remain:

  1. How will mAI keep up? Answer: Generative AI is self-correcting and self-improving based on data input. Strong regulatory oversight will be essential. But with these protections in place, health coverage in the future will likely require you to provide all your de-identified data in return for access to care and coverage. You are now your data.
  2. How will all that data be stored? Answer: New chips, like those provided by Nvidia and modeled after gamer chips originated by Atari, are better able to manage the load, but at what cost? Dollars for sure, but also extraordinary consumption of energy and water for cooling.

The Medical AI Miracle? Health Data for Health Coverage.

Posted on | April 14, 2024 | 2 Comments

Mike Magee

In his book “The Age of Diminished Expectations” (MIT Press/1994), Nobel Prize winner Paul Krugman famously wrote, “Productivity isn’t everything, but in the long run it is almost everything.”

A year earlier, psychologist Karl E. Weick of the University of Michigan coined the term “sensemaking,” based on his belief that the human mind was in fact the engine of productivity and functioned like a biological computer which “receives input, processes the information, and delivers an output.”

But comparing the human brain to a computer was not exactly a compliment back then. For example, in 1994, Krugman’s MIT colleague, economist Erik Brynjolfsson, coined the term “Productivity Paradox,” stating, “An important question that has been debated for almost a decade is whether computers contribute to productivity growth.”

Now three decades later, both Krugman (via MIT to Princeton to CCNY) and Brynjolfsson (via Harvard to MIT to Stanford Institute for Human-Centered AI) remain in the center of the generative AI debate, as they serve together as research associates at the National Bureau of Economic Research (NBER), and attempt to “make sense” of our most recent scientific and technologic breakthroughs.

Not surprisingly, Medical AI (mAI), has been front and center. In November, 2023, Brynjolfsson teamed up with fellow West Coaster, Robert M. Wachter, on a JAMA Opinion piece titled “Will Generative Artificial Intelligence Deliver on Its Promise in Health Care?”

Dr. Wachter, the Chair of Medicine at UC San Francisco, coined his own ground-breaking term in 1996 – “hospitalist.” Considered the father of the field, he has long had an interest in the interface between computers and institutions of health care.  

In his 2015  New York Times bestseller, “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age” he wrote, “We need to recognize that computers in healthcare don’t simply replace my doctor’s scrawl with Helvetica 12. Instead, they transform the work, the people who do it, and their relationships with each other and with patients.”

What Brynjolfsson and Wachter share in common is a sense of humility and realism when it comes to the history of systemic underperformance at the intersection of technology and health care.

They begin their 2023 JAMA commentary this way, “History has shown that general purpose technologies often fail to deliver their promised benefits for many years (‘the productivity paradox of information technology’). Health care has several attributes that make the successful deployment of new technologies even more difficult than in other industries; these have challenged prior efforts to implement AI and electronic health records.”

And yet, they are optimistic this time around. Why? Primarily because of the speed and self-corrective capabilities of generative AI. As they conclude, “genAI is capable of delivering meaningful improvements in health care more rapidly than was the case with previous technologies.”

Still, the “productivity paradox” is a steep hill to climb. Historically it is a byproduct of flaws in early versions of new technology, and of status quo resistance embedded in the “processes, structure, and culture” of corporate hierarchy. When it comes to preserving both power and profit, change is a threat.

As Brynjolfsson and Wachter put it diplomatically, “Humans, unfortunately, are generally unable to appreciate or implement the profound changes in organizational structure, leadership, workforce, and workflow needed to take full advantage of new technologies…overcoming the productivity paradox requires complementary innovations in the way work is performed, sometimes referred to as ‘reimagining the work.'”

How far and how fast could mAI push health care transformation in America? Three factors that favor rapid transformation this time around are improved readiness, ease of use, and opportunity for out-performance.

Readiness comes in the form of knowledge gained from the mistakes and corrective steps associated with EHR over the past two decades. A scaffolding infrastructure already exists, along with a level of adoption by physicians and nurses and patients, and the institutions where they congregate.

Ease of use is primarily a function of mAI being localized to software rather than requiring expensive, regulation-laden hardware devices. The new tools are “remarkably easy to use,” “require relatively little expertise,” and are “dispassionate and self-correcting” in near real-time when they err.

Opportunity to out-perform in a system that is remarkably inefficient, inequitable, often inaccessible and ineffective, has been obvious for some time. Minorities, women, infants, rural populations, the uninsured and under-insured, and the poor and disabled are all glaringly under-served.

Unlike the power elite of America’s Medical Industrial Complex, mAI is open-minded and not inherently resistant to change.

Multimodal, large language, self-learning mAI is limited by only one thing – data. And we are literally the source of that data. Access to us – each of us and all of us – is what is missing.

What would you, as one of the 333 million citizens of the U.S., expect to offer in return for universal health insurance and reliable access to high-quality basic health care services?

Would you be willing to provide full and complete de-identified access to all of your vital signs, lab results, diagnoses, external and internal images, treatment schedules, follow-up exams, clinical notes, and genomics?

Here’s what mAI might conclude in response to our collective data:

  1. It is far less expensive to pay for universal coverage than pay for the emergent care of the uninsured.
  2. Prior algorithms have been riddled with bias and inequity.
  3. Unacceptable variance in outcomes, especially for women and infants, plague some geographic regions of the nation.
  4. The manning table for non-clinical healthcare workers is unnecessarily large, and could easily be cut in half by simplifying and automating customer service interfaces and billing standards.
  5. Direct to Consumer marketing of pharmaceuticals and medical devices is wasteful, confusing, and no longer necessary or beneficial.
  6. Most health prevention and maintenance may now be personalized, community-based, and home-centered.
  7. Abundant new discoveries, and their value to society, will largely be validated as worthy of investment (or not) in real time.
  8. Fraudulent and ineffective practices and therapies, and opaque profit sharing and kickbacks, are now able to be exposed and addressed.
  9. Medical education will now be continuous and require increasingly curious and nimble leaders comfortable with machine learning techniques.
  10. U.S. performance by multiple measures, against other developed nations, will be visible in real time to all.

The collective impact on the nation’s economy will be positive and measurable. As Paul Krugman wrote thirty years ago, “A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker.” 

As it turns out, health data for health coverage makes “good sense” and would be a pretty good bargain for all Americans.

Ironic: Judge Aileen Cannon’s Law School Has A Special Focus On “Law and Professional Ethics.”

Posted on | April 6, 2024 | 6 Comments

Special Counsel Jack Smith, in laying down the gauntlet last week in Florida, was not engaging in metaphor. That word derives from the French word gantelet, meaning “the heavy, armored gloves worn by medieval knights,” and by transference, a formal challenge to duel with mortal intent.

His human target was Judge Aileen Cannon. Smith claimed her handling of Donald Trump’s White House files case was so “fundamentally flawed” that he was about to request from the 11th US Circuit Court of Appeals a “rare pretrial review.” In short, he was impugning not only the learning, but also the values and ethics, of the University of Michigan Law School graduate.

Smith’s action understandably got the Judge’s attention. The 11th Circuit has already reversed her twice for legal cause, and a third rebuke could knock her off the case. It is pretty clear that the Justice Department believes Cannon is biased. But who knows? Maybe this is just a case of “Naive Realism,” a label popularized by Princeton psychologist Emily Pronin in 2002 suggesting that, in cases such as these, there may not be malignant intent. In an article titled “You Don’t Know Me, But I Know You: The Illusion of Asymmetric Insight,” she explained, “We insist that our ‘outsider perspective’ affords us insights about our peers that they are denied by their defensiveness, egocentricity, or other sources of bias. By contrast, we rarely entertain the notion that others are seeing us more clearly and objectively than we see ourselves.”

Intent notwithstanding, the problem of our divisions is certainly worse now, two decades later, than when it was first labeled. Today’s headlines speak to “political polarization,” “division,” “factual inaccuracy,” and “loss of civility.” And yet, we hold tight to the “rightness” of justice under the law, and set out to demonstrate, with extreme confidence, that our democratic institutions, under assault, have mostly held thus far.

Madison was well aware of extreme labeling of opponents as “unreasonable, biased, or ill-motivated.” He warned on February 8, 1788 in Federalist 51 that “If men were angels, no government would be necessary… In forming a government which is to be administered by men over men, the great difficulty lies in this: you must first enable government to control the governed; and in the next place oblige it to control itself.” His solution? Our legal system, and checks and balances.

Hamilton, in the first paragraph of Federalist 1, teed up the same issue, in the form of an unsettling warning. He wrote, “It has been frequently remarked that it seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.”

The “force” on January 6 was no accident. Hours before the armed insurrection of Congress that morning, USA Today published  “By the numbers: President Trump’s failed efforts to overturn the election.” The article led with, “Trump and allies filed scores of lawsuits, tried to convince state legislatures to take action, organized protests and held hearings. None of it worked…Out of the 62 lawsuits filed challenging the presidential election (in state and federal courts), 61 have failed.”

By all accounts, our nation and her citizens owed our Judicial branch (its judges, lawyers, and legal guideposts) a debt of gratitude. Our Judiciary saved our democracy – for the moment.

Much of the credit goes to attorney Marc Elias (Duke Law School/1993), a voting rights expert, who headed the team that resisted the “Elite Strike Force Legal Team” in the 62 cases above. The six Trump co-conspirators who led the Strike Force were long on credentials and short on ethics and values. They included Rudy Giuliani (NYU/1968), John Eastman (U. Chicago/1995), Sidney Powell (UNC/1978), Jeffrey Clark (Georgetown/1995), Kenneth Chesebro (Harvard/1986), and Boris Epshteyn, alleged #6 (Georgetown/2007).

As Attorney Elias reminds us, “In the intervening years since the 2020 election, many of these lawyers have become objects of ridicule, the punchline in jokes. But mocking the lawyers who facilitated Trump’s criminal conduct risks minimizing their culpability. More importantly, it obscures the deep and problematic culture that appears to pervade the ranks of the Republican legal establishment.” It does not appear, at least at present, to be an overstatement that Judge Aileen Cannon (University of Michigan/2007) is a trusted member of that “establishment.”

Each year, our law schools across the nation graduate around 40,000 new professionals. Many, like the University of Michigan, shine a special light on law and ethics.

The Law and Ethics Program at Judge Cannon’s alma mater is “a collaboration between the Law School and U-M’s Department of Philosophy. It aims to promote advanced research and teaching at the intersection of law and ethics.” Here are some of its regular offerings:

  • An annual Law and Ethics Lecture
  • A law and philosophy reading group, for faculty and students in both departments
  • Support for students seeking joint degrees in law and philosophy
  • Workshops and conferences with leading legal theorists from around the world

If “societies of men are really capable… of establishing good government from reflection and choice,” we need a Judiciary steeped in values and the law. Whether through faulty admissions or failed training, the law schools above appear, at least in these seven cases, to have failed in their mission. As Jack Smith realized last week, lawyers and judges who disgrace their alma maters and dishonor their profession need to be confronted and held accountable. The place for that is not the public square, where “asymmetric insights” might be questioned or challenged as concocted or biased. Rather, it is in a court of law, with cameras and lights aglow.

Judge Cannon, last week, felt the heat of those lights. The other six preceded her. Law, like the profession of Medicine, can’t get selection and training right 100% of the time. When we fall short, these human failures must be addressed. Otherwise, we run the risk of harming the body politic and planting the seeds of our own destruction as a society.

Could Super-Charged “Facial Recognition Technology” Dead End in “Eugenics?”

Posted on | April 2, 2024 | 2 Comments

Mike Magee

How comfortable are the FDA and the Medical Ethics community with new, super-charged, medical Facial Recognition Technology (mFRT) that claims it can “identify the early stages of autism in infants as young as 12 months”? That test already has a name – the RightEye GeoPref Autism Test. Its UC San Diego designer says it was 86% accurate in testing 400 infants and toddlers.

Or how about Face2Gene, which claims its mFRT tool has already linked half of the known human genetic syndromes to “facial patterns”?

Or how about employers using mFRT facial and speech patterns to identify employees likely to contract early dementia, and adjusting career trajectories for those individuals? Are we OK with that?

What about your doctor requiring AiCure’s video mFRT to confirm that you really are taking the medications you say you are, or maybe, in the future, monitoring any abuse of alcohol?

And might it be possible, even from a distance, to identify you from just a fragment of a facial image, even with most of your face covered by a mask?

The answer to that final question is what DARPA, the Defense Advanced Research Projects Agency, was attempting to find in the Spring of 2020 when it funded researchers at Wuhan University. If that all sounds familiar, it is because the very same DARPA, a few years earlier, had quietly funded controversial “Gain of Function” viral re-engineering research by U.S.-trained Chinese researchers at the very same university.

The pandemic explosion a few months later converted the entire local population to 100% mask-wearing, which made it an ideal laboratory to test whether the FRT of the time could identify a specific human through partial periorbital images alone. It couldn’t – at least not well enough. The studies revealed positive results only 39.55% of the time, compared with a full-face success rate of 99.77%.

Facial Recognition Technology (FRT) dates back to the work of American mathematician and computer scientist Woodrow Wilson Bledsoe in 1960. His now-primitive algorithms measured the distances between coordinates on the face, enriched by adjustments for light exposure, head tilt, and three-dimensional rotation. That triggered an unexpectedly intense commercial interest in potential applications, primarily from law enforcement, security, and military clients.
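The core of Bledsoe’s approach – comparing faces by the distances between hand-marked landmarks – can be sketched in a few lines. The landmark names and coordinates here are invented for illustration; his actual system also corrected for head tilt and lighting:

```python
import math

def normalized_distances(landmarks):
    """Pairwise distances between landmark points, scaled by inter-eye distance."""
    names = sorted(landmarks)
    dists = {(a, b): math.dist(landmarks[a], landmarks[b])
             for i, a in enumerate(names) for b in names[i + 1:]}
    scale = dists[("eye_left", "eye_right")]  # normalize out image scale
    return {pair: d / scale for pair, d in dists.items()}

def face_difference(face_a, face_b):
    """Sum of absolute differences of normalized distances (lower = more alike)."""
    da, db = normalized_distances(face_a), normalized_distances(face_b)
    return sum(abs(da[p] - db[p]) for p in da)
```

Normalizing by the inter-eye distance is one simple way to make the comparison insensitive to how large the face appears in the image; modern systems replace hand-marked points and distance tables with learned neural-network embeddings.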

The world of FRT has always been big business, but the emergence of large language models and sophisticated neural networks (like ChatGPT-4 and Gemini) has widened its audience well beyond security, with health care involvement competing for human and financial resources.

Whether you are aware of it or not, you have been a target of FRT. The US has the largest number of closed-circuit cameras in the world, at 15.28 per 100 people. On average, every American is caught on a closed-circuit camera 238 times a week, but experts say that’s nothing compared to where our “surveillance society” will be in a few years.

They are everywhere – security, e-commerce, automobile licensing, banking, immigration, airport security, media, entertainment, traffic cameras – and now health care with diagnostic, therapeutic, and logistical applications leading the way.

Machine learning and AI are allowing FRT to displace voice recognition, iris scanning, and fingerprinting. Part of this goes back to Covid – and not just the Wuhan experiments. FRT allowed “contactless” identity confirmation at a time when global societies were understandably hesitant to engage in any flesh-to-flesh contact.

The field of mFRT is on fire. Emergen Research projects an annual market of nearly $14 billion (USD) by 2028, with a compound annual growth rate of almost 16%. Detection, analysis, and recognition are all potential winners. There are now 277 unique organizational investor groups offering “breakthroughs” in FRT, with an average decade of experience at their backs.
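Those two figures imply a market of very roughly $5 billion around 2021 – a quick compound-growth check (the $5 billion base-year value is my back-of-envelope assumption for illustration, not a figure from the report):

```python
def compound(value, annual_rate, years):
    """Project a value forward at a fixed compound annual growth rate (CAGR)."""
    return value * (1 + annual_rate) ** years

# Growing ~$5B at a 16% CAGR over the 7 years from 2021 to 2028:
projection = compound(5.0, 0.16, 7)  # ≈ 14.1 (billions USD)
```

At a 16% CAGR the market roughly doubles every five years, which is what makes such projections attractive to the 277 investor groups cited above.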

Company names may not yet be familiar to all – like Megvii, Clear Secure, Any Vision, Clarify, Sensory, Cognitec, iProov, TrueFace, CareCom, Kairos – but they soon will be.

The medical research community has already expanded way beyond “contactless” patient verification. According to HIMSS Media, 86% of health care and life science organizations use some version of AI, and AI is expanding FRT in ways “beyond human intelligence” that are not only incredible, but frightening as well. Deep neural networks are already invading physician territory including “predicting patient risk, making accurate diagnoses, selecting drugs, and prioritizing use of limited health resources.”

How do we feel about using mFRT to diagnose genetic diseases, disabilities, depression, or Alzheimer’s with systems that are loosely regulated or unregulated by the FDA?

The sudden explosion of research into the use of mFRT to “diagnose genetic, medical and behavioral conditions” is especially troubling to Medical Ethicists who see this adventure as “having been there before,” and not ending well.

In 1872, it all began innocently enough with Charles Darwin’s publication of “The Expression of the Emotions in Man and Animals.” He became the first scientist to use photographic images to “document the expressive spectrum of the face” in a publication. Typing individuals through their images and appearance “was a striking development for clinicians.” 

Darwin’s cousin, Francis Galton, a statistician, took his cousin’s data, synthesized “identity deviation,” and “reverse-engineered” what he considered the “ideal type” of human, “an insidious form of human scrutiny” that would become Eugenics (from the Greek “eugenes,” meaning “well born”). Expansion throughout academia rapidly followed, and validation by our legal system helped spread and cement the movement to all kinds of “imperfection,” with sanitized human labels like “mental disability” and “moral delinquency.” Justice and sanity did catch up eventually, but it took decades, and that was before AI and neural networks. What if Galton had had Gemini Ultra “explicitly designed for facial recognition?”

Complicating our future further, say experts, is the fact that generative AI with its “deep neural networks is currently a self-training, opaque ‘black box’…incapable of explaining the reasoning that led to its conclusion…Becoming more autonomous with each improvement, the algorithms by which the technology operates become less intelligible to users and even the developers who originally programmed the technology.”

The U.S. National Science Advisory Board on Biosecurity recently recommended restrictions on “Gain of Function” research, belatedly admitting the inherent dangers imposed by scientific and technologic advances that lack rational and effective oversight. Critics of the “Wild West approach” that may have contributed to the Covid deaths of more than 1.1 million Americans, are now raising the “red flags” again.  

Laissez-faire as a social policy doesn’t seem to work well at the crossroads of medicine and technology. Useful, even groundbreaking, discoveries are likely on the horizon. But profit-seeking mFRT entrepreneurs, in total, will likely add cost while further complicating an already beleaguered patient-physician relationship.

The Long Tail of Liability For MAGA Republicans

Posted on | March 19, 2024 | Comments Off on The Long Tail of Liability For MAGA Republicans

Mike Magee

If legal scholars are right that “foreseeability” and “special relationships” are the toxic mix most common in “long liability tails,” then MAGA Republicans should expect a rocky decade ahead. Trump acolytes in federal and state executive, legislative, and judicial branches are massively exposed on at least two fronts.

Led by Christian White Nationalists who succeeded in overturning Roe v. Wade via the Dobbs decision, party elites already know they face trouble at the ballot box. But that’s the least of it. 

As former Supreme Court Justice Stephen G. Breyer recently wrote in his new book, Reading the Constitution: Why I Chose Pragmatism, Not Textualism, the Dobbs decision was “stunningly naïve in saying it was returning the question of abortion to the political process…The Dobbs majority’s hope, that legislatures and not courts will decide the abortion question, will not be realized…There are too many questions…Are they really going to allow women to die on the table because they won’t allow an abortion which would save her life? I mean, really, no one would do that. And they wouldn’t do that. And there’ll be dozens of questions like that.”

Allowing ultra-conservative state legislators to play doctor with women’s lives in the balance not only endangers them, but also captures compliant hospitals and doctors in spiraling, decades-long liability as women predictably suffer and die needlessly. As the initial flurry over IVF and Alabama’s “extra-uterine children” embryos clearly illustrates, “God’s will” is unlikely to hold sway in court once the Puritan fever breaks. And until enough Red state ballot initiatives neuter Dobbs, hospital CEOs and medical staff will find being continuously subpoenaed to be emotionally, physically, and financially demanding.

And that’s not all. If “foreseeability” and “special relationship” are the two linchpins here, one could argue that characters like Bill Barr, Mike Esper, John Bolton, and others in Trump’s inner circle [let alone Jan. 6 co-conspirators like Congressmen Andy Biggs (R-AZ), Matt Gaetz (R-FL), Louie Gohmert (R-TX), Paul Gosar (R-AZ), Jim Jordan (R-OH), and Scott Perry (R-PA)] could easily fall within the orbit of “The Tarasoff Rule” for failing to protect us as citizens, and our democracy, from the criminal behavior of Donald Trump.

“The Tarasoff Rule” is named for Tatiana Tarasoff, a University of California student who was murdered by a graduate student in 1969. The details of the case are quite simple. On June 5, 1969, University of California at Berkeley graduate student Prosenjit Poddar arrived distraught and angry at the office of mental health professional Dr. Lawrence Moore. Poddar had become infatuated with Tarasoff and was enraged when she rejected him. Over the next 14 weeks, Poddar was seen seven times by Dr. Moore, and during the final visit, on August 20, 1969, confided that he intended to kill Tatiana. He was diagnosed then and there as having a “paranoid schizophrenic reaction,” and police were notified to execute a mandatory psychiatric hospital admission.

Poddar was taken into custody but soon released after he promised to avoid further contact with Tatiana. Two months later, on October 27, 1969, he found her “alone in her home, shot her with a pellet gun, chased her into the street with a kitchen knife, and stabbed her seventeen times, causing her death.”

Poddar pleaded not guilty by reason of insanity but was convicted of second-degree murder, which was reduced to manslaughter on appeal. The California Supreme Court concluded even this was too harsh and reversed the conviction. He was freed and quickly returned to India.

Tatiana’s parents sued, and ultimately the California Supreme Court found that Dr. Moore could be held liable. In reaching that decision seven years after the crime, on July 1, 1976, the court considered these five factors among others:

  1. Foreseeability of harm to Plaintiff;
  2. The degree of certainty that Plaintiff suffered injury;
  3. The closeness of the connection between Defendant’s conduct and the injury suffered by Plaintiff;
  4. The moral blame attached to Defendant’s conduct;
  5. The policy of preventing future harm.

Since 1976, and the California Supreme Court ruling in Tarasoff v. Regents of the University of California, the “duty to protect” has been added to “the duty to warn” for professionals in support relationships. These are two subtly different sides of the same coin.

“The duty to warn” requires that “the mental health professional…make a good-faith effort to contact the identified target of a client’s serious threats of harm, and/or notify law enforcement.”

“The duty to protect”, while encouraging notifying law enforcement and “warning a potential victim,” also considers positively active patient interventions such as “intensifying outpatient treatment, hospitalization, and modification of medication treatment.”

It can be difficult, even impossible, to determine the dangerousness of a mentally unstable individual with 100% certainty. Mental health professionals, in making their assessments, consider a patient’s appearance and general behavior, speech, mood and affect, thought process and content, sensorium and cognition, perceptions and motor activity, and general judgment.

But MAGA allies from Trump’s Cabinet, Congressional allies, and Trump appointed Justices on state and federal levels will have a difficult time convincing courts over the next decade that they didn’t know what was going on. Trump has a way of broadcasting the evidence against himself, and entangling his loyalist supporters in the process.

As for Trump’s current Republican enablers, a potentially “foreseeable” troubled future is in the process of rising to greet them. Disentangling at this point, from Dobbs or “Trump Legal,” will be complicated and prohibitively expensive. But, best get on with it.

As Tarasoff v. Regents of the University of California, which extended from 1969 to 1976 so well illustrated, the tail of liability (and ultimately justice), is a long tail indeed.

What is the history behind Trump’s “Racehorse Theory?”

Posted on | March 14, 2024 | 4 Comments

Note: Randy Souders and Mike Magee served together on the Board of Jean Kennedy Smith’s Kennedy Center Arts and Disability Program, Very Special Arts (VSA), from 2002 to 2007.

Randy Souders,
Guest Editorial
Professional Artist /Arts & Disability Advocate / Quadriplegic (since 1972)

When I was injured at the age of 17, the world was still quite closed for people like me. That was a year before passage of Section 504 of the Rehabilitation Act of 1973. As I recall, that law was the first to mandate access to public places that received federal funds. A year later, Jean Kennedy Smith founded VSA (Very Special Arts), which has provided important arts opportunities to literally millions of people with disabilities around the globe. It was a very different world back then, and artistic achievement was an important way people such as myself could prove their worth to a society that still saw little evidence of it.

It’s unbelievable to think there are serious threats to roll back many of those hard-won gains in the name of deregulation and profitability. Disability is costly, and people with disabilities are still woefully underemployed. So when a billionaire presidential candidate repeatedly mocks people with disabilities, how long till the “useless/unworthy” excuses rise again? The old term describing a person with a disability as an “invalid” has another meaning. The adjective is defined as “not valid: not true, correct, acceptable or appropriate.”

Few today are aware that the first victims of the Holocaust were people with mental, physical, and neurological disabilities. They were systematically murdered by several Nazi programs specifically targeting them. The Nazi regime was aided in its crimes by perverted “medical doctors and other experts” who were often seen wearing white lab coats to visually reinforce their propaganda.

Branded as “useless eaters” and existing as “lives not worthy of life,” people with disabilities were declared an unbearable burden both to German society and the state. As Holocaust historians have documented, “From 1939 to 1941 the Nazis carried out a campaign of euthanasia known as the T4 program (an abbreviation of Tiergartenstrasse 4 which itself was a shortened version of Zentral Dienststelle-T4: Central Office T4) the address from which the program was coordinated.”

These most vulnerable of humans were reportedly the first victims of mass extermination by poison gas – bottled carbon monoxide, and later the cheaper carbon monoxide in automobile exhaust fumes. But first “a panel of medical experts were required to give their approval for the euthanasia/ ‘mercy-killing’ of each person.”

In the end, an estimated quarter million people with disabilities were killed in gas chambers disguised as shower rooms. This model for killing disabled people was later applied to the industrialized murder within Nazi concentration and death camps such as Auschwitz-Birkenau.

Much has been written on this topic, but few seem to know the chronology and diabolical history of how these “beneficial cleansings” of undesirables often start. The Nazis enlisted medical doctors to provide them with a veneer of moral justification for their atrocities.

Throughout history, authoritarian political despots have also worked diligently to silence dissent and co-opt religion in order to assist in their mutual quests for total control and dominance of others. And theocrats are convinced their particular splinter of a schism is the ultimate authority on earth, as well as the entire universe. Stonings, beheadings, and the hanging of transgressors and nonbelievers are arbitrarily justified by interpretations of their particular holy book.

There is much to fear when politicians exploit the religious beliefs of medical professionals in order to pass laws denying the rights of others to control their own bodies. This blatant pandering for votes by promising to deliver on religious wedge issues creates a positive feedback loop resulting in politicians being deified by their religious influencers. This is aided by a campaign of rationalization absolving them of their obvious failings. Such a campaign of apologetics by religious leaders is active and widespread in America as I type. 

Examples include “God doesn’t call the qualified…He qualifies the called” (Exodus Chapter 4) and “God calls imperfect men to do His perfect will.” Is there even a red line where such “imperfect men” becomes an existential threat? Apparently not. I’m sure most citizens of the Third Reich didn’t think so until everything imploded.

The current Republican candidate for President is on the record as being a believer in the “racehorse theory” – the idea that selective breeding can improve a country’s performance, which American eugenicists and German Nazis used in the last century to buttress their goals of racial purity. On September 18, 2020 he told a mostly white crowd of supporters in Bemidji, Minn. “You have good genes. A lot of it is about the genes, isn’t it? Don’t you believe? The racehorse theory. You think we’re so different? You have good genes in Minnesota.”

This is one of many such statements he has made crediting genetics for his personal superiority and that of his family. The New York Times reports, “Mr. Trump was talking publicly about his belief that genetics determined a person’s success in life as early as 1988, when he told Oprah Winfrey that a person had ‘to have the right genes’ in order to achieve great fortune.”

These statements, combined with those about undocumented immigrants “poisoning the blood” of America, should equate to a 100-alarm fire.
