AI eschatophobia

It seems that everyone is talking about (or with) ChatGPT. The platform’s convincing conversational acuity and its ability to synthesise disparate conceptual threads provide a vivid demonstration of AI’s potential. I’ve now read several accounts online where scholars, programmers, writers, musicians and artists use ChatGPT or similar tools as a creative companion for exploring ideas — comparable to using search in a web browser, but cleverer. My own foray into the area over several blog posts provides a demonstration, e.g. Chatting with an AI about urban inexistence.

In many social conversations about ChatGPT, I’ve so far detected two main groups: (1) those like me who are fascinated by the technology, its procedures, applications, risks and possibilities (I include within that group informed critics such as Noam Chomsky and Erik Larson); and (2) those preoccupied with a concern that the tech signals the end of something — mostly meaningful work, and the autonomy and independence that such work provides.

AI eschatology

Eschatology is the study of narratives about an ending, usually of an era, or perhaps of life on earth as we know it. I turned to ChatGPT for some definitions:

“The term ‘eschatophobia’ is derived from the Greek word ‘eschaton,’ meaning ‘last,’ ‘final,’ or ‘end.’ It typically refers to the fear of the end of the world or the end of time, but it can also refer to a fear of personal extinction or the extinction of one’s species.”

Diminished capacity to learn well

For the AI eschatophobe in the (my) world of learning and teaching (i.e. schools, colleges and universities), impressive performance in NLP (natural language processing) augurs serious revision to methods of assessment. AIs can write student essays and reports, or at least correct and improve them. There’s the alluring prospect of an indefatigable AI personal tutor, but the fear is that writing and thinking skills will diminish among those who grow to depend on such support.

Add to that anxiety the prospect that fake content generated and circulated by AI programs will proliferate. There’s enough misleading, fatuous and incendiary content out there already.

Deskilling and job loss

For the AI eschatophobe, platforms such as ChatGPT amplify the threats to the jobs and livelihoods of white-collar workers: those who write, review and interpret texts, emails, memos, reports, summaries, Q&As, and who deliver their expertise in the form of words, speech, scripts and writing in general. The few who stay in those jobs will have to re-skill, to work with, train, monitor and fine-tune the next generation of NLP systems.

Dehumanised interaction

AI eschatophobia is bolstered further by the unfavourable reception of much automation. Think of inefficient and annoying automated service providers, phone answering services, chatbots and checkouts, where our inquiries are triaged through AI-style interfaces before we get to a human being, if ever. There are ample cases from fiction and actual experience that rehearse how badly things can go at the mercy of a machine. (For a recent parody of the challenge see Bigbug (2022), a French science fiction black comedy film by Jean-Pierre Jeunet.)

Humans in decline

Eschatophobia may also take on the anxiety of a putative “singularity,” where AI takes over from human intelligence. Systems become interwoven, complicated and unaccountable — a condition that turns critical when things go wrong.

The AI eschatophobe may be anxious that the specialness of being human is diminished. AI that functions as well as ChatGPT seems to continue the trajectory towards the human being becoming one of a series of cogs in a machine — the global capitalist machine, rife with inequalities and exploitation, and where labour is disposable and replaceable. See post: What’s wrong with accelerationism.

From an eschato-philosophical perspective it may turn out that the Cartesian rationalists were right. There’s a kind of philosophical anxiety here, at least for the phenomenologist. AI that works suggests that the world is made of numbers after all. It’s reducible. Perhaps the ability of humans to communicate and come up with smart ideas is not so special.

With pervasive AI, our interactions with one another and with the natural world will be increasingly mediated by technology, diminishing sociability, care and meaningful interaction with the environment. Already, AI permeates our communication channels, consumption practices and social media usage as it monitors our desires and influences our choices. See post: Surveillance capitalism and its discontents. AI eschatophobia builds on such anxieties.

Concerns about generative AI’s carbon footprint — the computational and hence energy cost of training a network, plus yet more Internet traffic — further feed these anxieties about global warming and planetary catastrophe.
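The scale of that energy cost can be made concrete with a back-of-envelope calculation. The figures below are purely illustrative assumptions for a hypothetical training run, not measurements of any actual model:

```python
# Back-of-envelope estimate of training energy and emissions.
# All input figures are illustrative assumptions, not real measurements.

def training_footprint(gpu_count, gpu_power_kw, hours, carbon_kg_per_kwh):
    """Return (energy in kWh, emissions in kg CO2) for a training run."""
    energy_kwh = gpu_count * gpu_power_kw * hours
    emissions_kg = energy_kwh * carbon_kg_per_kwh
    return energy_kwh, emissions_kg

# Hypothetical run: 1,000 GPUs drawing 0.4 kW each for 30 days,
# on a grid emitting 0.4 kg CO2 per kWh.
energy, co2 = training_footprint(1000, 0.4, 30 * 24, 0.4)
print(f"{energy:,.0f} kWh, {co2 / 1000:,.0f} tonnes CO2")
```

Even with these modest assumed numbers the run consumes hundreds of megawatt-hours — and that is before counting inference, which runs continuously once a model is deployed.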

Generative AI may prove to be less impressive over time, as we see its limits. It may be overhyped. But that’s for another discussion.

I am not of this camp, but eschatophobia is potent. See post: Vitruvius does steampunk. AI eschatology also speaks to urban apocalypse, the subject of another post.


  • Chomsky, Noam, Ian Roberts, and Jeffrey Watumull. “The False Promise of ChatGPT.” New York Times: Opinion, 8 March 2023. Accessed 12 March 2023.
  • Larson, Erik J. The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Cambridge, MA: Harvard University Press, 2021. 


  • Featured image is from MidJourney prompted with “eschatophobia.”


  1. Jon Awbrey says:

    I think you are perhaps missing a much larger group of observers who are quite well acquainted with the peculiar species of AI being ballyhooed du jour, have lived through more cycles of hype boom and hype bust than they can even enumerate, and are just plain too exasperated by Pop AI and human gullibility, just like in so many other scenes today, to raise more than the occasional desultory philippic.

    Artificial Intelligence Research (AIR) has never been monolithic — HAL notwithstanding — and this new spate of Industrial Sweatshop Intellectual Property Stripmines (ISIPS) is a far outlying splinter of the academic labs we used to know in the Before Times. May it turn out to be just another flash of fool’s gold in the pan.

      1. As ever, thanks Jon for your sobering insight. There’s a lot of chatter about it all at the moment.
