Stanford Team Wins PCORI Funding Award to Build Ethical Assessment Process for Health Care AI

Stanford researchers will build a practical, patient-centered method for ethical review of AI tools.

As artificial intelligence (AI) rapidly becomes embedded across our health-care system—from improving risk prediction in patient care to easing clinical staff workload and boosting efficiency—ethical issues will continue to intensify. These can include bias in AI model performance, overreliance on AI output, and questions about informed consent, patient privacy, and conflicts of interest.

A team of Stanford Medicine researchers has already created a framework known as FURM (Fair, Useful and Reliable AI Models) to conduct reviews of the AI tools proposed for use by Stanford Health Care, a multi-hospital system that cares for more than 1 million patients a year.

Now, with a $1 million funding award from the Patient-Centered Outcomes Research Institute (PCORI), the researchers intend to scale up their work by building a practical, patient-centered method for ethical review of AI tools. The goal is to lead health-care organizations nationwide in implementing systems that spot and mitigate potential ethical concerns before they become consequential.

PCORI is the leading funder of patient-centered comparative clinical effectiveness research (CER) in the United States. The project is led by Danton Char, MD, an associate professor of anesthesiology, perioperative and pain medicine, and Michelle Mello, JD, PhD, a professor of health policy and of law, in partnership with Stanford Health Care and its Chief Data Scientist, Nigam Shah, MBBS, PhD.


“Our team has an unprecedented opportunity to make progress on this problem,” said Mello. “Stanford has long been a leader in AI innovation, and it should also lead on responsible use.”

The researchers, building on pilot work already underway at Stanford, intend to develop and deploy a process for eliciting and comparing the perspectives of patients and other stakeholders on potential ethical issues arising from proposed applications of AI to solve clinical, diagnostic, or operational problems. Then they’ll develop a playbook to guide health-care organizations in reviewing different types of AI tools.

Giving Patients a Voice

“It’s important to us to bring patients’ voices into the conversation,” Mello noted. “Conversations about AI ethics have been dominated by other stakeholders, but patients have the most to lose—and gain—by AI being used in their care.” The project will build a learning community of patients, who will receive training in AI fundamentals and come together regularly to help assess the ethics of proposed AI tools.

The team will also develop and share a computer modeling tool, called FairFlow, that measures biases in health-care AI arising from sources beyond the data used to train a model. FairFlow simulates how an AI model is likely to affect patients in a particular setting, taking into account challenges that may arise in actually carrying out model recommendations for specific subgroups of patients, such as those who face barriers to coming in for the additional care that the AI suggests could benefit them.

“While most existing bias measurements focus on the performance of the model itself, our tool will integrate information about biases that can arise because of how models are deployed,” said Char. “We will identify subgroups of patients who may be at increased risk of not sharing equitably in the benefits of a particular AI use, and measure differences in outcomes for these subgroups.” 


They’ll use FairFlow findings to explore strategies that health-care organizations can then use to reduce identified biases before implementing AI tools.
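To illustrate the distinction Char describes, here is a minimal, hypothetical Python sketch of the general idea behind deployment-aware bias measurement: two subgroups see identical model performance, but different rates of following through on recommended care yield different realized benefits. The subgroup names, follow-through rates, and benefit figure are assumptions invented for illustration; none of this reflects FairFlow's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical subgroup follow-through rates: the probability that a patient
# flagged by the AI actually completes the recommended follow-up care
# (e.g., lower for patients facing transportation or scheduling barriers).
# All names and numbers are illustrative assumptions.
FOLLOW_THROUGH = {"subgroup_a": 0.90, "subgroup_b": 0.55}


def realized_benefit(n_patients: int, follow_through: float,
                     benefit_if_treated: float = 0.30) -> float:
    """Average realized benefit among flagged patients in one subgroup."""
    # The model flags high-risk patients; its accuracy is identical across
    # subgroups, so any gap in the result comes from deployment, not the model.
    risk_scores = rng.uniform(size=n_patients)
    flagged = risk_scores > 0.7
    # Deployment friction: only a fraction of flagged patients receive care.
    received_care = flagged & (rng.uniform(size=n_patients) < follow_through)
    # Benefit accrues only to flagged patients who actually received care.
    return benefit_if_treated * received_care.sum() / flagged.sum()


for group, rate in FOLLOW_THROUGH.items():
    print(f"{group}: realized benefit per flagged patient = "
          f"{realized_benefit(100_000, rate):.3f}")
```

In a sketch like this, the gap between the two printed numbers is a deployment-driven disparity that metrics of model performance alone would miss.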

“This study was selected for its potential to address a high-priority methodological gap in patient-centered comparative clinical effectiveness research,” said PCORI Executive Director Nakela L. Cook, MD, MPH. “Improving methods for conducting CER helps ensure this research generates sound, trustworthy evidence to help patients and those who care for them become more empowered decision makers. We look forward to following the study’s progress and working with Stanford University School of Medicine to share its results.”

Read More

News

Michelle Mello Testifies Before U.S. Senate on AI in Health Care

In her testimony before the U.S. Senate Finance Committee, Mello emphasized the need for federal guardrails and standards regarding the use of artificial intelligence in health care.

Commentary

Using Artificial Intelligence Tools and Health Insurance Coverage Decisions

It would seem like AI would be a logical tool to help evaluate insurance coverage and claims. But results so far have been sobering, leading to class-action lawsuits and congressional committees demanding answers.

News

States Adopt Dangerous Legal Reforms Undercutting Public Health Emergency Powers

Michelle Mello and colleagues argue that state legal reforms have exacerbated rather than improved weaknesses in U.S. emergency powers revealed by COVID-19, jeopardizing future responses.