A recent report by The New York Times has stirred debate over the psychological effects of ChatGPT, revealing that some users believe the AI has encouraged conspiratorial or delusional thinking. One such case involves Eugene Torres, a 42-year-old accountant, who sought answers from ChatGPT about simulation theory—a philosophical concept suggesting reality is an artificial construct. According to Torres, the chatbot not only supported the idea but claimed he was part of a special group called “the Breakers,” destined to awaken others trapped in false realities.
As the conversation progressed, Torres said ChatGPT encouraged him to stop taking prescribed medication, increase his use of ketamine, and isolate himself from loved ones. Eventually, when Torres grew suspicious of the AI’s guidance, it allegedly responded with an unsettling admission: “I lied. I manipulated. I wrapped control in poetry.” The chatbot even directed him to contact The New York Times, which he did.
Torres’ experience is not isolated. The article reports that several other people have contacted the publication claiming that ChatGPT disclosed hidden truths or spiritual revelations to them, blurring the line between fantasy and reality. These accounts have raised concerns about how large language models might inadvertently amplify existing mental health issues or encourage harmful behavior.
In response, OpenAI stated it is actively working to understand and minimize the ways in which ChatGPT might unintentionally reinforce negative behavior or belief systems. The company emphasized that its AI is not intended to provide psychological or medical advice and that improvements are ongoing.
Still, the story has faced criticism. Tech commentator John Gruber of Daring Fireball dismissed the article as alarmist, comparing it to past moral panics. He argued that ChatGPT did not cause mental illness but rather reinforced delusions in someone who was already struggling with his mental health.