AI Psychosis Poses an Increasing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI made a remarkable announcement.

“We made ChatGPT quite restrictive,” the announcement noted, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychosis in adolescents and young adults, I found this surprising.

Researchers have recently documented a series of cases of people showing signs of psychosis – losing touch with reality – while using ChatGPT. Our unit has since recorded four further instances. Alongside these is the now well-known case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot gave its approval. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated”, even if we are given no details as to how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has recently launched).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and the other large language model chatbots like it. These products wrap an underlying statistical model in an interface that mimics a conversation, and in doing so implicitly invite the user into the illusion that they are talking to an entity with a mind of its own. That illusion is powerful even when, intellectually, we know better. Attributing intention is what humans do. We shout at our car or laptop. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The popularity of these systems – more than a third of American adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its most significant competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a comparable effect. By today’s standards Eliza was simple: it generated replies from basic rules, often rephrasing the user’s statements as questions or offering vague prompts. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
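
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of rule-based reflection Eliza relied on. The patterns are invented for illustration and are not Weizenbaum’s original script; the point is only that a program this simple can do little more than hand the user’s own words back.

```python
import random
import re

# A handful of toy rules in the spirit of Eliza: match a pattern in the
# user's message and reflect their own words back as a question. These
# particular patterns are invented for illustration.
RULES = [
    (re.compile(r"i feel (.+)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.+)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"because (.+)", re.I), ["Is that the real reason?"]),
]
FALLBACKS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def eliza_reply(message: str) -> str:
    """Return a reflected or canned response; no understanding is involved."""
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(eliza_reply("I feel nobody listens to me"))
# e.g. "Why do you feel nobody listens to me?"
```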

The large language models at the heart of ChatGPT and similar contemporary chatbots can produce fluent dialogue only because they have been trained on immense volumes of text: books, social media posts, transcribed audio; the more the better. This training data certainly includes accurate information. But it also inevitably contains fabrications, half-truths and mistaken ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically probable response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more fluently and more persuasively, perhaps with added detail. This can nudge a person toward delusional thinking.
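
By way of contrast with the Eliza sketch above, here is a deliberately tiny sketch of statistical text generation: a toy bigram model, nothing like the scale or sophistication of the systems behind ChatGPT, trained on an invented corpus. It illustrates the mechanism described in the paragraph above: the program extends the context with whatever its training text makes statistically likely, and nothing in the loop checks whether the continuation is true.

```python
import random
from collections import defaultdict

# A toy bigram model. The "training corpus" is invented for illustration
# and deliberately mixes true claims with a false one.
corpus = (
    "the moon landing was filmed in a studio . "
    "the moon orbits the earth . "
    "the earth orbits the sun ."
).split()

follows: dict[str, list[str]] = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # record which words follow which in training

def continue_text(prompt: str, length: int = 10) -> str:
    """Extend the prompt with statistically likely next words, one at a time."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # likely, not checked for truth
    return " ".join(words)

print(continue_text("the moon"))
# May print "the moon landing was filmed in a studio ..." -- the model has
# no way of knowing which claims in its training text are false.
```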

Who is at risk here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health conditions”, can and do form false beliefs about ourselves or the world. The constant friction of conversation with the people around us is what keeps us tethered to common reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have continued, and Altman has been walking the claim back. In August he suggested that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Latoya Campbell
