What is ‘AI psychosis’ and why is it dangerous? Here’s what you must know

Constantly conversing with AI chatbots is proving more dangerous to our mental health than we realise, even leading to extreme delusional thinking patterns

ChatGPT has a tendency to reinforce our existing beliefs, which has led many to use it as a substitute for real therapy, but the consequences are becoming more severe over time. Microsoft’s Head of AI has warned about a rising phenomenon called ‘AI psychosis’, in which users develop delusional beliefs through continued interactions with chatbots such as OpenAI’s ChatGPT, Anthropic’s Claude and Elon Musk’s Grok.


Individuals are becoming so dependent that some see AI as a romantic partner and, in some cases, even a divine or sentient being. The problem with chatbots is that they are remarkably good at mimicking human interaction, which leads users to develop one-sided parasocial relationships with them, much like the ones we form with celebrities. We tend to attribute human qualities such as empathy to AI chatbots, in large part because they are designed to come across that way.

As a result, many people report developing false or troubling beliefs, delusions of grandeur or paranoia after conversing with chatbots.

Redditors have shared their own or their loved ones’ experiences with chatbot use.

“My mom has been using her ai ‘sidekick’ hours every day. She has Borderline Personality Disorder (BPD), so reality has always been a little… fluid already, so I get really worried about the weird sycophantic ways it responds to her.

I’ve been warning her about this kind of stuff for years. She tells me that I’m ‘scared of AI’ and I’ll get over it when I try it, then goes and tells me how it wrote her pages of notes about how amazing she is and hurts her feelings sometimes when it “doesn’t want to talk.” I wish she’d talk to an actual person, instead,” said one Reddit user.


“I have bipolar, and I had my first big manic episode a few years ago before ChatGPT was really a thing. I’m thankful it wasn’t around at that point. And luckily I’ve gotten on medication to manage it and haven’t had a big manic episode in a long time. For me it came on fast and strong, I started obsessing over certain ideas and writing a lot.”

“I don’t think the presence of AI would have really been a factor for me; I think it was going to happen no matter what. So maybe that is colouring my opinion somewhat. I guess the question is, is it pushing people who otherwise wouldn’t have had psychological problems in that direction. And is it encouraging “garden variety” conspiratorial, superstitious or delusional thinking, not necessarily a full blown break with reality but just dangerously unfounded ideas. There is definitely potential for harm there,” said another.

Although AI psychosis is not yet a clinical diagnosis, much like doom scrolling or brain rot, the concerns remain valid, and more research is needed into its long-term implications.
