Scientists Argue for More FDA Oversight of Healthcare AI Tools

An agile, transparent, and ethics-driven oversight system is needed for the U.S. Food and Drug Administration (FDA) to balance innovation with patient safety when it comes to artificial intelligence-driven medical technologies. That is the takeaway from a new report issued to the FDA, published in the open-access journal PLOS Digital Health by Leo Celi of the Massachusetts Institute of Technology and colleagues.

Artificial intelligence (AI) is becoming a powerful force in healthcare, helping doctors diagnose diseases, monitor patients, and even recommend treatments. Unlike traditional medical devices, many AI tools continue to learn and change after they’ve been approved, meaning their behavior can shift in unpredictable ways once they’re in use.

In the new paper, Celi and his colleagues argue that the FDA's current system is not set up to keep tabs on these post-approval changes. Their analysis calls for stronger rules around transparency and bias, especially to protect vulnerable populations. If an algorithm is trained mostly on data from one group of people, it may make mistakes when used with others. The authors recommend that developers be required to share information about how their AI models were trained and tested, and that the FDA involve patients and community advocates more directly in decision-making. They also suggest practical fixes, including creating public data repositories to track how AI performs in the real world, offering tax incentives for companies that follow ethical practices, and training medical students to critically evaluate AI tools.

"This work has the potential to drive real-world impact by prompting the FDA to rethink existing oversight mechanisms for AI-enabled medical technologies. We advocate for a patient-centered, risk-aware, and continuously adaptive regulatory approach - one that ensures AI remains an asset to clinical practice without compromising safety or exacerbating healthcare disparities," the authors say.

Abulibdeh R, Celi LA, Sejdić E.
The illusion of safety: A report to the FDA on AI healthcare product approvals.
PLOS Digit Health. 2025 Jun 5;4(6):e0000866. doi: 10.1371/journal.pdig.0000866.
