The Safe Inclusion of Pediatric Data in AI-Driven Medical Research
AI algorithms are often trained on adult data, which can skew results when evaluating children. A new perspective piece by SHP's Sherri Rose and several Stanford Medicine colleagues lays out an approach for pediatric populations.
Ethical frameworks in medicine date back at least to the Hippocratic Oath, circa 400 BCE. With artificial intelligence (AI) now rapidly expanding in health care settings, as attested by the more than 500 AI devices approved by the Food and Drug Administration (most in just the past two years), novel frameworks are needed to ensure appropriate use of this new modality.
To that end, the international SPIRIT-AI and CONSORT-AI initiatives have recently established guidelines for AI and machine learning in medical research. These frameworks, however, do not outline specific considerations for pediatric populations. Children present uniquely complex data quandaries for AI, especially regarding consent and equity.
To address this gap, Stanford University's Vijaytha Murali and Alyssa Burgart led a perspective policy piece, with Stanford biomedical data science instructor Roxana Daneshjou and professor of health policy Sherri Rose, in the journal npj Digital Medicine. Murali is a postdoctoral research affiliate in dermatology at the Stanford University School of Medicine; Burgart is a clinical associate professor in anesthesiology, with a joint appointment in the Stanford Center for Biomedical Ethics, and the medical director of ethics for Lucile Packard Children's Hospital.
Murali, Burgart, and colleagues propose a new framework called ACCEPT-AI. In this Q&A, Murali and Burgart discuss the motivation behind ACCEPT-AI and how it can help ethically advance AI medical research involving pediatric patients.
Read the full Q&A by Adam Hadhazy for HAI