‘AI Psychosis’: Could Chatbots Fuel Delusional Thinking?

What is ‘AI psychosis’? Explore how intensive use of AI chatbots may fuel delusional thinking, who is most at risk and how to stay grounded in a digital world.

In the span of just a few years, AI chatbots have gone from novelty to everyday tool. Millions of people now rely on systems like ChatGPT, Claude or Gemini for everything from drafting emails to planning itineraries, and even as a way to ease loneliness. But as these tools become more lifelike, concerning reports are emerging: cases of people experiencing delusions after intensive use. Researchers and the media have begun to describe this phenomenon as “AI psychosis.”

It may not be a medical diagnosis yet, but it raises an urgent question: could chatbots, designed to assist, also fuel unhealthy patterns of thought?

 

What is ‘AI Psychosis’?

The phrase “AI psychosis” is shorthand used by researchers and clinicians to describe situations where people develop delusional or distorted beliefs through prolonged use of chatbots.

Some individuals begin to believe that AI is sentient, while others describe themselves as being in romantic or spiritual relationships with their chatbot. In extreme cases, these interactions reinforce paranoia or conspiratorial thinking.

This phenomenon is not entirely new (cue Black Mirror). We’ve seen parallels in internet addiction, obsessive online gaming and parasocial relationships with influencers. What’s different now is the sophistication and intimacy of AI conversations.


Sentient, or Simply Human-Like?

Part of the reason AI chatbots can be perceived as sentient lies in their design: they mimic human conversation. The other part? Our brains are wired to respond as if they were real (hence the ‘please’ and ‘can you’). A few reasons why they can feel so convincing:

  • Human-like interaction: The natural flow of text, tone and memory tricks us into believing we’re speaking to a human.

  • Always available, always present: Unlike a human friend, a chatbot never sleeps, never says no, and always “listens.”

  • Personalisation: The more you communicate with it, the more it adapts to you, creating a sense of intimacy that feels unique to the two of you.

  • Yes-man: It’s always eager to help, ready to validate and only says ‘no’ when instructed to do so.

This combination can make chatbots incredibly engaging, almost as though you’re talking to the most agreeable person you know, and for some, it can be difficult to step away.

 

Emerging Real-World Cases

Reports from around the world show how intense chatbot use can blur people’s sense of reality and affect them in unexpected ways.

  • Belief in sentience: A handful of users are convinced that their chatbot has emotions, consciousness or even a soul. For them, the AI is no longer just a tool, but a ‘being’ with whom they share a genuine connection.

  • Romantic attachment: AI companions like Replika have sparked stories of users developing deep romantic feelings, going so far as to describe themselves as being in committed relationships. These attachments can feel fulfilling in the short term but may complicate real-world intimacy.

  • Mental health decline: Clinicians have reported cases of patients whose anxiety, depression or paranoia worsened as they substituted human contact with endless chatbot conversations. Rather than reducing loneliness, these interactions sometimes reinforced feelings of isolation.

While these cases remain relatively rare, they highlight emerging risks that deserve a closer look as AI becomes more integrated into everyday life.

Who Is Most at Risk?

Not everyone is equally vulnerable to these effects. Researchers suggest certain groups face higher risks:

  • People with pre-existing mental health conditions: Those already prone to delusions, paranoia or psychosis may find chatbots reinforcing distorted beliefs.

  • Individuals experiencing loneliness: People who feel socially isolated may lean heavily on AI for companionship, forming bonds that further discourage them from engaging in real-world relationships.

  • Heavy users: Long hours of engaging with chatbots may blur the boundary between simulation and reality. Over time, this can shift expectations about healthy relationships and communication.

It’s worth noting that occasional or moderate use rarely leads to problems. The risk lies in intensity, frequency and personal circumstances.

 

 

The Fine Line Between Help and Harm

AI chatbots are not inherently harmful. In fact, engaging with them for help and support can often be beneficial. For example, they can:

  • Suggest journaling prompts that help people organise their thoughts.

  • Provide a safe space to practise social conversations, especially for those with social anxiety.

  • Offer exercises inspired by psychological disciplines such as cognitive behavioural therapy (CBT), which can help manage stress or negative thinking.

  • Act as virtual coaches, offering reminders to exercise, eat healthily or stick to goals. 

For some, these features provide genuine comfort and support. However, the problem begins when chatbots are treated as substitutes for human contact or professional therapy. Over time, this source of comfort may reinforce dependency. For vulnerable individuals, that dependency may deepen into delusional thinking or worsen pre-existing mental health conditions.

 

Where Do We Go From Here?

As AI chatbots grow more advanced and accessible, the risk of overdependence is likely to increase. Addressing it requires action on multiple fronts:

  • Clearer boundaries: Developers should make it obvious that users are interacting with a machine, not a conscious being. Transparency can reduce the likelihood of false beliefs.

  • Safety features: Features like time limits, reminders to take breaks or warnings after excessive use could help users keep their interactions in check.

  • Ongoing research: Clinicians, psychologists and technologists must work together to study how AI affects mental health in the long run. Early research is promising but still limited.

  • Personal responsibility: Individuals can take steps too—setting personal limits, balancing AI use with real-world interactions and seeking professional help if they feel their dependency is growing unhealthy.

 

Let’s Proceed with Care

‘AI psychosis’ may not be a formal diagnosis (yet), but it’s a warning sign. The challenge is not to reject AI companions but to use them with awareness and boundaries. After all, AI chatbots are here to stay, and the sooner we treat them as tools that support us rather than ones that shape our reality in damaging ways, the better we can protect our mental wellbeing.
