Using Artificial Intelligence Tools and Health Insurance Coverage Decisions


Stanford Health Policy’s Michelle Mello and Sherri Rose write in this JAMA Forum article that artificial intelligence (AI) would seem a logical tool for relieving the drudgery of evaluating insurance coverage and claims. But the results so far have been sobering, prompting class-action lawsuits and demands for answers from congressional committees.

“Utilization review by health insurers is the type of problem that seems, on the surface, to cry out for solutions using artificial intelligence (AI),” the two health policy professors write. “The staggering volume and complexity, inefficiency, and decision-making that requires sophisticated evaluation (yet feels like administrative drudgery for insurance plan staff) make insurance reviews an attractive target for using AI. The market has responded robustly; however, the results illustrate that seemingly perfect opportunities for using AI can become clear examples of how algorithms can go awry when humans do not provide the expected bulwark against error.

“Medicare Advantage plans have become emblematic of such problems. Investigative journalists brought to light health plans’ use of algorithms to curtail postacute care with scant human oversight. These reports stoked the ire of congressional committees already provoked by other evidence of insurers’ wrongful denials of prior authorization requests.”


Read the Full JAMA Forum Commentary


Read More

News

Michelle Mello Testifies Before U.S. Senate on AI in Health Care

In her testimony before the U.S. Senate Finance Committee, Mello emphasized the need for federal guardrails and standards regarding the use of artificial intelligence in health care.
Q&As

Exploring Liability Risks of Using AI Tools in Patient Care

Research led by SHP’s Michelle Mello provides some clarity regarding liability for AI technologies that are rapidly being introduced into health care. She and her co-author analyzed more than 800 tort cases involving AI and conventional software, in both health care and non-health-care contexts, to see how courts might decide questions of AI-related liability.
Q&As

The Safe Inclusion of Pediatric Data in AI-Driven Medical Research

AI algorithms are often trained on adult data, which can skew results when the tools are applied to children. A new perspective piece by SHP's Sherri Rose and several Stanford Medicine colleagues lays out an approach for safely including pediatric data in AI-driven medical research.