Health AI Exposes Regulatory Rot at the FDA
The rollout of ChatGPT Health and Amazon’s AI Health Assistant this month has highlighted a longstanding truth: the FDA’s paternalistic grip on AI is strangling American innovation in healthcare.
ChatGPT Health and Amazon’s AI Health Assistant are each powerful tools in their own right. But they are limited to promoting a healthy lifestyle and managing chronic disease so that they fit within the FDA’s “general wellness” regulatory exception. Even though AI models consistently outperform most doctors on diagnostic tasks and medical licensing exams, current FDA regulation bars these products from the “diagnosis, prevention, monitoring, treatment, or alleviation of disease or injury.” In other words, Americans have been given a good but imperfect health product, and directly improving it would be a legal risk for AI companies. This enforced stagnation reflects a fundamental rot in the FDA’s regulatory approach.
We know this because the FDA regulates according to intent. When general-purpose chatbots like ChatGPT or Claude were released, they weren’t intended for use in a healthcare context. This put them outside the FDA’s regulatory purview and inadvertently gave millions of Americans access to history’s most powerful tool for understanding and managing their own health.
If these AIs had been intended for medical use, they would have faced years of regulatory review by an FDA that has grown increasingly hostile to medical innovation. They may never have been released to the public.
AI companies now face a dilemma: any attempt to improve their products for the “diagnosis, prevention, monitoring, treatment, or alleviation of disease or injury” may bring them under FDA regulation, at which point the agency could require them to spend years and millions of dollars awaiting approval.
The result is AI health tools like ChatGPT Health or Amazon’s AI Health Assistant: powerful tools shackled to fit into the FDA’s “general wellness” exception.
The modern FDA operates under a simple political reality: any change to the status quo, such as approving a new drug that turns out to be ineffective, risks drawing political attention to the agency. To protect itself, the FDA approves next to nothing and requires companies to spend millions to get a drug through review. The result is that countless Americans die each year awaiting cures stuck in the FDA approval process. And because inaction makes it appear that these patients died of their diseases rather than of delay, the agency escapes adverse attention.
AI has effectively drawn back the curtain on FDA regulation. If the FDA applies its existing regulatory philosophy, it will soon require AI companies to strip any medical functionality from their consumer products, and because of AI’s widespread adoption, the harms of that approach will be laid bare to the American public. If instead the status quo continues, AI companies will remain barred from resolving flaws and improving their products for medical use.
The best way to protect Americans from harm is to allow AI companies to compete freely to create the safest product for patients in the medical context. If the FDA won’t deregulate and let AI companies innovate, Congress should. Deregulating medical AI is a necessary first step, but as American health care costs continue to balloon while our life expectancies stagnate, it is clear we cannot continue to let the FDA bar medical innovation.
Free the People publishes opinion-based articles from contributing writers. The opinions and ideas expressed do not always reflect the opinions and ideas that Free the People endorses. We believe in free speech, and in providing a platform for open dialogue.