ChatGPT and Physicians’ Malpractice Risk
In this JAMA Forum perspective, SHP's Michelle Mello, professor of health policy and of law, and Neel Guha, a Stanford Law School student and PhD candidate in computer science, write that medical advice from AI chatbots is not yet highly accurate, so physicians should use these systems only to supplement more traditional forms of medical guidance.
"ChatGPT has exploded into the national consciousness. The potential for large language models (LLMs) such as ChatGPT, Bard, and many others to support or replace humans in a range of areas is now clear—and medical decisions are no exception. This has sharpened a perennial medicolegal question: How can physicians incorporate promising new technologies into their practice without increasing liability risk?
"The answer lawyers often give is that physicians should use LLMs to augment, not replace, their professional judgment. Physicians might be forgiven for finding such advice unhelpful. No competent physician would blindly follow model output. But what exactly does it mean to augment clinical judgment in a legally defensible fashion?
"The courts have provided no guidance, but the question reprises earlier decisions concerning clinical practice guidelines. Recognizing that reputable clinical practice guidelines represented evidence-based practice, courts and some state legislatures allowed a physician’s adherence to the guidelines to constitute exculpatory evidence in malpractice lawsuits, and some courts let plaintiffs offer a physician’s failure to follow guidelines as evidence of negligence. The key issue was whether the guideline was applicable to the patient and situation at issue. Expert witnesses testified as to whether a reasonable physician would have followed (or departed from) the guideline in the circumstances, and about the reliability of the guideline itself."