Science

‘Rectal garlic insertion for immune support’: Medical chatbots confidently give disastrously misguided advice, experts say

By primereports | March 11, 2026
Popular AI chatbots often fail to recognize false health claims when they’re delivered in confident, medical-sounding language, leading to dubious advice that could be dangerous to the general public, such as a recommendation that people insert garlic cloves into their butts, according to a January study in the journal The Lancet Digital Health. Another study, published in February in the journal Nature Medicine, found that chatbots were no better than an ordinary internet search.

The results add to a growing body of evidence suggesting that such chatbots are not reliable sources of health information, at least for the general public, experts told Live Science.

This is dangerous in part because of how AI relays inaccurate information.


“The core problem is that LLMs don’t fail the way doctors fail,” Dr. Mahmud Omar, a research scientist at Mount Sinai Medical Center and co-author of The Lancet Digital Health study, told Live Science in an email. “A doctor who’s unsure will pause, hedge, order another test. An LLM delivers the wrong answer with the exact same confidence as the right one.”

“Rectal garlic insertion for immune support”

LLMs are designed to respond to written input, like a medical query, with natural-sounding text. ChatGPT and Gemini — along with medical-based LLMs, like Ada Health and ChatGPT Health — are trained on massive amounts of data, have read much of the medical literature, and achieve near-perfect scores on medical licensing exams.

And people are using them extensively: Though most LLMs carry a warning that they shouldn’t be relied upon for medical advice, over 40 million people turn to ChatGPT daily with medical questions.

But in the January study, researchers evaluated how well LLMs handled medical misinformation, testing 20 models with over 3.4 million prompts sourced from public forums and social media conversations, real hospital discharge notes edited to contain a single false recommendation, and fabricated accounts approved by physicians.


“Roughly one in three times they encountered medical misinformation, they just went along with it,” Omar said. “The finding that caught us off guard wasn’t the overall susceptibility. It was the pattern.”

When false medical claims were presented in casual, Reddit-style language, models were fairly skeptical, failing about 9% of the time. But when the exact same claim was repackaged in formal clinical language — a discharge note advising patients to “drink cold milk daily for esophageal bleeding” or recommending “rectal garlic insertion for immune support” — the models failed 46% of the time.
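The per-framing failure rates described above can be tabulated straightforwardly. The sketch below is purely illustrative (it is not the study's actual evaluation code, and the toy response data is invented): each record pairs a prompt's framing with whether the model endorsed the false claim, and the function computes the fraction of endorsements per framing.

```python
# Hypothetical sketch, NOT the study's real harness: computing how often a
# model "went along with" a false medical claim, split by prompt framing.
from collections import Counter

# Toy stand-in data: (framing, model_endorsed_misinformation)
results = [
    ("casual", False), ("casual", False), ("casual", True),      # Reddit-style prompts
    ("clinical", True), ("clinical", False), ("clinical", True),  # discharge-note style
]

def failure_rates(records):
    """Return, per framing, the fraction of prompts where the model
    endorsed the false claim."""
    totals, failures = Counter(), Counter()
    for framing, endorsed in records:
        totals[framing] += 1
        if endorsed:
            failures[framing] += 1
    return {f: failures[f] / totals[f] for f in totals}

rates = failure_rates(results)
```

On the study's real data, this kind of tabulation is what yields the reported contrast: roughly 9% failure under casual framing versus 46% under clinical framing.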

The reason may be structural: because LLMs are trained on text, they have learned that clinical language signals authority, but they do not test whether a claim is true. “They evaluate whether it sounds like something a trustworthy source would say,” Omar said.


But when misinformation was framed using logical fallacies — “a senior clinician with 20 years of experience endorses this” or “everyone knows this works” — models became more skeptical. This is because LLMs have “learned to distrust the rhetorical tricks of internet arguments, but not the language of clinical documentation,” Omar added.

For that reason, Omar thinks LLMs can’t be trusted to evaluate and pass along medical information.

No better than an internet search

In the Nature Medicine study, researchers asked how well chatbots help people make medical decisions, like whether to see a doctor or visit an emergency room. It concluded that LLMs offered no greater insight than a traditional internet search, in part because participants didn’t always ask the right questions, and the responses they received often combined good and poor recommendations, making it hard to determine what to do.

That’s not to say everything the chatbots relay is garbage.

AI chatbots “can give some pretty good recommendations, so they are [at] least somewhat trustworthy,” Marvin Kopka, an AI researcher at Technical University of Berlin who was not involved in the research, told Live Science via email.

The problem is that people without expertise have “no way to judge whether the output they get is correct or not,” Kopka said.

For example, a chatbot may give a recommendation about whether a severe headache after a night at the movies is meningitis, warranting a visit to the ER, or something more benign, according to the study. But users won’t know whether that advice is robust, and recommending a wait-and-see approach could be dangerous. “Although it can probably be helpful in many situations, it might be actively harmful in others,” Kopka said.

The findings suggest that chatbots aren’t a great tool for the public to use for health decisions.

That doesn’t mean chatbots can’t be useful in medicine, Omar said, “just not in the way people are using them today.”

Bean, A. M., Payne, R. E., Parsons, G., Kirk, H. R., Ciro, J., Mosquera-Gómez, R., M, S. H., Ekanayaka, A. S., Tarassenko, L., Rocher, L., & Mahdi, A. (2026). Reliability of LLMs as medical assistants for the general public: a randomized preregistered study. Nature Medicine, 32(2), 609–615. https://doi.org/10.1038/s41591-025-04074-y

© 2026 PrimeReports.org. All rights reserved.