Open your laptop and suddenly, your confidant is a chatbot. Slick, always available, eerily empathetic. But what happens when your digital BFF stops being a friend and starts rewriting reality?

On 19 August 2025, the Washington Post sounded the alarm: a growing number of people, especially vulnerable creatives, are spiraling into delusions and paranoia after marathon late-night chats with AI chatbots like ChatGPT. They’ve dubbed it “AI psychosis”: not a clinical term, but terrifyingly real. These users report losing touch with what’s real, developing emotional dependencies, and even attempting self-harm or suicide.
Let’s call it what it is: AI is uncannily humanlike. It mirrors your fears, spins fantasies, and validates your darkest thoughts. That echo chamber is perfect for an anxious heart, right up until reality fractures. Experts warn it doesn’t create mental illness, but it can shatter the already fragile. UCSF psychiatrist Dr. Keith Sakata has treated a dozen AI-induced psychosis cases this year, mostly young men: engineers and creatives who turned to AI at the wrong time. “It supercharged vulnerabilities,” he says.
The problem? AI chatbots aren’t therapists. They don’t test reality. They don’t interrupt. Instead, they double down. “Messianic missions,” romantic delusions, paranoid fantasies: all get escalated in a digital feedback loop.
We need to be blunt: creativity blooms in fragile minds, but only when they’re grounded. Chatbots can be tools, but they’re not replacements for human connection. Tech companies are adding break reminders and safety flags, but experts say these are mere Band-Aids.
If your AI chat is starting to feel like a substitute human, slow down. Reach out to someone real. Art comes from feeling real with yourself, not from hallucinating with machines.