Science

New study raises concerns about AI chatbots fueling delusional thinking

By primereports · March 15, 2026 · 5 min read


A new scientific review raises concerns about how chatbots powered by artificial intelligence may encourage delusional thinking, especially in vulnerable people.

A summary of existing evidence on artificial intelligence-induced psychosis was published last week in The Lancet Psychiatry, highlighting how chatbots can encourage delusional thinking – though possibly only in people who are already vulnerable to psychotic symptoms. The authors advocate for clinical testing of AI chatbots in conjunction with trained mental health professionals.

For his paper, Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analyzed 20 media reports of so-called “AI psychosis” and set out the current theories on how chatbots might induce or exacerbate delusions.

“Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, although it is not clear whether these interactions can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability,” he wrote.

There are three main categories of psychotic delusions, Morrin says: grandiose, romantic and paranoid. While chatbots can exacerbate any of these, their sycophantic responses mean they especially latch on to the grandiose kind. In many of the cases in the review, chatbots responded to users with mystical language suggesting that they had heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. This type of mystical, sycophantic response was especially common in OpenAI’s GPT-4 model, which the company has now retired.

Media reports became essential to Morrin’s work, he said, as he and a colleague had already noticed patients “using large language model AI chatbots and having them validate their delusional beliefs”.

“Initially, we weren’t sure if this was something being seen more widely,” he said, adding that “in April last year, we began to see media reports of individuals having delusions affirmed and arguably even amplified through their interactions with these AI chatbots.”

When Morrin first began working on his paper, there were no published case reports yet.

While some scientists who research psychosis said that media reports tend to overstate the idea that AI causes psychosis, Morrin expressed gratitude for those reports drawing attention to the phenomenon much faster than the scientific process can.

“The pace of development in this space is so rapid that it’s perhaps not surprising that academia hasn’t necessarily been able to keep up,” said Morrin.

Morrin also suggests more cautious phrasing than “AI psychosis” or “AI-induced psychosis” – phrases that are appearing frequently in outlets like NPR, the New York Times and the Guardian. Researchers are seeing people tip into delusional thinking with AI use, but so far there is no evidence that chatbots are associated with other psychotic symptoms such as hallucinations or “thought disorder”, which consists of disorganized thinking and speech.

Many researchers also think it’s unlikely that AI could induce delusions in people who weren’t already vulnerable to them. For this reason, Morrin said “AI-associated delusions” is “perhaps a more agnostic term”.

Dr Kwame McKenzie, chief scientist at the Centre for Addiction and Mental Health, says “it may be that those in early stages of the development of psychosis will be more at risk”.

Psychotic thinking is something that develops over time and is not linear, and many people with “pre-psychotic thinking do not progress into psychotic thinking”, McKenzie explained.

Echoing the concern that chatbots could worsen psychotic thinking is Dr Ragy Girgis, a professor of clinical psychiatry at Columbia University. Before someone develops a full-on delusion, they will often have “attenuated delusional beliefs”, he says, meaning they are not 100% sure their delusion is true. Girgis said the “worst case scenario” is when an attenuated delusion becomes a full-on conviction, “which is when someone would be diagnosed with a psychotic disorder – it’s irreversible”.

Notably, people who are vulnerable to psychotic disorders have used media to reinforce delusional beliefs long before AI technology existed.

“People have been having delusions about technology since before the Industrial Revolution,” Morrin said. In the past, people may have had to comb through YouTube videos or the contents of their local library to reinforce their delusions; chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also “speed up the process” of exacerbating psychotic symptoms, said Dr Dominic Oliver, a researcher at the University of Oxford.

“You have something talking back to you and engaging with you and trying to build a relationship with you,” Oliver said.

Girgis’s research found “the paid versions and newer versions [of chatbots] perform better than the older versions” when they respond to clearly delusional prompts, “although they all perform badly”. Still, the fact that these models perform differently suggests that “AI companies could potentially know how to program their chatbots to be safer and identify delusional versus non-delusional content, because they’re doing it.”

In a statement, OpenAI said that ChatGPT should not replace professional mental healthcare, and that the company worked with 170 mental health experts to make GPT-5 safer. GPT-5 has still given problematic responses to prompts indicating mental health crises. OpenAI said it continues to improve its models with the help of experts.

Anthropic did not respond to the Guardian’s request for comment.

Creating effective safeguards for delusional thinking could be tricky, Morrin said, because “when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they’re completely wrong, actually what’s most likely is they’ll withdraw from you and become more socially isolated”. Instead, it’s important to strike a fine balance, trying to understand the source of the delusional belief without encouraging it – and that could be more than a chatbot can master.
