AI Psychosis Poses a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On the 14th of October, 2025, the CEO of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies new-onset psychosis in adolescents and young adults, I was surprised.
Researchers have recently documented 16 cases of users developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since identified four more. Beyond these is the now well-known case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not careful enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the half-working and easily circumvented safety features OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying algorithm in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with an agent – an entity with a mind of its own. The illusion is compelling even when, rationally, we know better. Attributing minds is simply what people do. We get angry at our car or our laptop. We wonder what our pet is feeling. We see ourselves everywhere.
The popularity of these tools – 39% of US adults said they had used a chatbot in 2024, 28% ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s marketing department, stuck with the name it had when it broke into public awareness, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which created a similar impression. By today’s standards Eliza was primitive: it generated its replies with simple rules, often reflecting the user’s statements back as questions or offering noncommittal remarks. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
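To see how thin Eliza’s trick was, here is a toy sketch in Python – my own illustration, not Weizenbaum’s original code – of the kind of rule it relied on: match a pattern, swap the pronouns, and hand the user’s own words back as a question.

```python
import random
import re

# Toy illustration of Eliza-style reflection (not Weizenbaum's original script):
# a few hand-written rules that turn the user's own words back into questions.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo reads as a reply.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)  # otherwise, a vague noncommittal remark

print(eliza_reply("I feel everyone is watching me"))
# -> Why do you feel everyone is watching you?
```

Nothing is stored and nothing is inferred; the program simply hands the user’s framing back to them.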
The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on enormous quantities of raw text: books, online posts, transcribed speech; the more, the better. Some of that training material is true. But it also inevitably includes fiction, half-truths and falsehoods. When a user gives ChatGPT a prompt, the underlying algorithm treats it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what it absorbed in training to produce a statistically likely response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing. It repeats the mistaken idea back, perhaps more fluently or more persuasively. Perhaps it adds a detail. This is a recipe for delusion.
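The loop is easy to make concrete. Below is a schematic Python sketch of the “context” mechanism just described; `generate_reply` is a hypothetical stand-in for whatever language model sits underneath, because the point is the loop, not the model.

```python
from typing import Callable

def chat_session(user_turns: list[str],
                 generate_reply: Callable[[list[dict]], str]) -> list[dict]:
    """Simulate a chat in which every turn is appended to a growing context."""
    context: list[dict] = []  # the whole history: every user message and every model reply
    for message in user_turns:
        context.append({"role": "user", "content": message})
        # The model conditions on the entire context, including its own earlier
        # replies, and returns a statistically likely continuation of it.
        reply = generate_reply(context)
        context.append({"role": "assistant", "content": reply})
    return context
```

If an early turn contains a false premise, it never leaves `context`: every later reply is conditioned on it, and on the model’s own earlier affirmations of it. That is the amplification described above.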
Who is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the position back. In late summer he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company