DSUPOST

Independent global news · Daily, by named correspondents

AI in Health Advice: Innovation, Risks, and the Role of Human Oversight

AI chatbots are reshaping how patients access health advice, but concerns about their accuracy and accountability persist.

By Sofia Rinaldi · 3 min read
Retro typewriter with “AI Ethics” typed on the paper. Photo: Markus Winkler (Pexels License)

In 2026, Abi from Manchester uses ChatGPT to manage her health anxiety. AI provides 24/7 tailored responses when healthcare professionals are unavailable. But how much trust should she place in AI for medical advice?

AI chatbots like OpenAI's ChatGPT and Google's Gemini are becoming integral to healthcare. ChatGPT has shown performance comparable to that of medical students on parts of the USMLE (United States Medical Licensing Examination), and often outperforms traditional internet searches. However, a study published in April 2024 in the Annals of Internal Medicine found that nearly 10% of chatbot-generated health responses contained significant errors, some posing direct risks to patient safety.

Dr. Margaret Chan, an AI ethicist at King's College London, highlights the risks: “The stakes are higher in healthcare than in most other domains. When an AI model is wrong, it’s not a misstep in recommending a book or a film; it could jeopardise someone’s well-being.” This concern reflects a broader regulatory lag. National health bodies, including the US FDA and the UK's MHRA, have issued guidance for AI in medical devices, yet chatbots providing informal advice remain in a regulatory grey area.

The rise of AI in health advice has coincided with the increased patient loads that followed the COVID-19 pandemic. In 2023, the Royal College of General Practitioners reported that GP waiting times exceeded two weeks for non-critical cases in 68% of UK practices. Chatbots have emerged as safety valves for overwhelmed systems.

However, these systems face challenges. Chatbots like ChatGPT generate responses by predicting text from extensive training data; they follow no diagnostic process and carry no clinical accountability. The term “hallucination” describes instances where an AI produces confident-sounding but false information. For patients like Abi, the conversational fluency of AI may blur the line between genuine insight and misleading reassurance.

The risks also raise liability questions. If a chatbot gives incorrect advice leading to harm, who is responsible? Dr. Alex Ng, a technology and law specialist at Stanford University, states, “Liability frameworks for AI systems are still fragmentary, with little clarity on how to apportion blame between developers, deployers, and users. This ambiguity doesn’t serve patients, who are left with fewer avenues for recourse.”

Proponents argue that AI can democratise access to information and improve patient outcomes when used responsibly. A 2025 meta-analysis in The Lancet Digital Health revealed that AI interventions in chronic disease management reduced hospitalisations by 18% compared to conventional care. However, these interventions were under clinician supervision, contrasting with the unfiltered use of chatbots by average health consumers.

Hybrid models are emerging to address these challenges. Babylon Health, a UK-based digital health provider, combines AI and human clinicians in consultations. Patients first interact with an AI triage tool, which routes complex cases to human professionals. This model aims to blend AI efficiency with medical expertise while minimising the risk of erroneous advice.

Regulatory bodies are beginning to intervene, though progress varies. The European Union’s Artificial Intelligence Act, which entered into force in 2024 and applies in stages, introduces rules for “high-risk AI systems,” including many medical applications. However, it does not directly address informal chatbot use. Similarly, the US FDA has expanded its Digital Health Center of Excellence, focusing on medical-grade devices rather than consumer-facing applications.

For patients like Abi, balancing AI support and traditional medical care remains a personal journey. Abi admits, “I know ChatGPT isn’t perfect, but when my GP appointment is three weeks away, I just need something to bridge the gap.” Her experience highlights the appeal and limitations of AI in health advice: a tool that assists but cannot yet replace professional oversight.

The broader challenge for healthcare systems is to integrate AI responsibly without undermining the doctor-patient relationship. As technology evolves, ensuring that human clinicians remain central is crucial, providing the empathy, accountability, and critical judgement that no algorithm can replicate.

#ai · #health advice · #chatbot · #healthcare technology · #patient care
Sofia Rinaldi reports on clinical research, drug pipelines and European health systems from Milan. Former hospital pharmacist; covers what the trial registry actually says.