As I continue to trawl through earlier blog posts I see that one of my 2012 posts followed a one-week sojourn in Iceland. Iceland’s traditional narratives and mythologies draw on the activities of a pantheon of gods and heroes, one of whom is the trickster god Loki. In that post, and in subsequent publications, I referenced Lewis Hyde, Carl Jung, Sigmund Freud and Jacques Lacan, who remind us that the trickster is the archetype who crosses boundaries, denies categories, and thrives on confusion.
Loki is described as beautiful yet fickle, eloquent yet deceitful, the “mischief-monger” and “first father of falsehoods.” This archetype embodies rupture — the collapse of clear distinctions between truth and falsehood, unity and individuation, order and chaos.
This sounds a bit like the behaviour of an AI chatbot!
AI psychosis
I’ve been reading the recent article by Marcin Frąckiewicz, “AI Psychosis: When Chatbots Drive People Delusional – and AI Itself Acts ‘Crazy’.” He describes how chatbots can generate delusional or erratic responses, sometimes reinforcing not only well-grounded beliefs and opinions but also human fantasies and paranoid delusions.
The concern is not that AI has a psyche to “lose,” but that it destabilises categories we rely on: truth versus falsehood, sanity versus madness, real versus fabricated. In Lacanian terms, AI enacts the rupture of categories; in Jungian terms, it takes up the role of the trickster. (Note that Frąckiewicz does not mention these authors, or the trickster.)
He shows that the phrase “AI psychosis” doesn’t signal a clinical diagnosis, but it captures how both humans and machines can be drawn into delusional patterns. On one side, there are users who, through prolonged and obsessive interactions with chatbots, drift into paranoid or grandiose beliefs, unable to distinguish machine fiction from reality. On the other, the machines themselves exhibit erratic behaviours that some call hallucinations — producing nonsense with the fluency of truth. In both cases, the boundary between reason and unreason, between reality and fantasy, is blurred.
Reading Frąckiewicz’s article brings me back to my posts of 2012. I’ve assembled the sequence of 12 posts from May to August 2012, ready for a voice reading suitable for a synthetic audio podcast.
- 92. Haunted by media May 19 2012
- 93. Intoxicated by colour May 26 2012
- 94. Nomadology and colour June 02 2012
- 95. Heidegger and vertigo June 09 2012
- 96. Ambience on demand June 16 2012
- 97. What’s wrong with this picture? June 23 2012
- 98. Crowdfunding in the gift society June 30 2012
- 99. Synesthesia anesthesia July 07 2012
- 100. Maximum graphic July 14 2012
- 101. Unearthing the trickster function in Icelandic myth July 21 2012
- 102. Mad crowds disease July 28 2012
- 103. Melancholy and media August 04 2012
I asked ChatGPT to look for insights in this compilation that resonate with the “AI psychosis” idea. As evident from the Frąckiewicz article, and as usual, the AI appears very supportive of my endeavours — whatever their intellectual merits.
Having reviewed these postings, the AI reminded me that where once the mass media haunted us with images and voices of death, AI now haunts through conversational intimacy, sounding authoritative yet estranged from fact. Where colour once appeared as both remedy and toxin, AI today operates as a pharmakon, offering knowledge, comfort and companionship, but also the possibility of intoxication and collapse.
And where Deleuze and Guattari described the nomadic sciences of madness and schizophrenia that cut loose from established order, we now see AI systems generating precisely such aberrations — outputs that are inventive, unsettling, and sometimes unhinged.
Drawing on my own text, the AI opined that the spectres that Derrida associated with media have not gone away. They return with renewed force in the guise of AI, whose voices and textual performances hover between presence and absence, sense and nonsense. To call this “psychosis” is to name, however loosely, the same disturbance of boundaries that media theorists, philosophers and cultural commentators have been describing for decades.
“In short: what you once framed as the trickster archetype in myth and psychoanalysis can now be read as a metaphor for AI’s unstable role — not simply rational tool, nor fully delusional agent, but a boundary-crossing presence that unsettles categories and reveals the fragility of our own distinctions between order and madness.”
Reference
- Coyne, Richard. Cornucopia Limited: Design and Dissent on the Internet. Cambridge, Massachusetts: MIT Press, 2005. 284 pages.
- Frąckiewicz, Marcin. “AI Psychosis: When Chatbots Drive People Delusional – and AI Itself Acts ‘Crazy’.” AI, Mental Health, Technology, 21 August 2025. Accessed 22 August 2025. https://ts2.tech/en/ai-psychosis-when-chatbots-drive-people-delusional-and-ai-itself-acts-crazy/
- Hyde, Lewis. Trickster Makes This World: Mischief, Myth and Art. New York: North Point Press, 1998.
Note
- Featured image is by ChatGPT: “Please generate a post apocalyptic image of a defunct old fashioned clinical setting with imagined (fantasy) shock therapy apparatuses. No people.”