Who’s listening?

In every mind there is a higher function watching or listening to the inflow of sense data and monitoring its own thought processes. In his seminal book on the philosophical challenge of consciousness, Daniel Dennett argues convincingly against this proposition. He directs his criticism against those who

presuppose that somewhere, conveniently hidden in the obscure “center” of the mind/brain, there is a Cartesian Theater, a place where “It all comes together” and consciousness happens [39].

Though he considers it erroneous, Dennett concedes that the idea of an internal moderator is difficult to dismiss. Who amongst us does not at some time harbour the opinion that there is an essential “I” monitoring, censoring, deciding, intending, like an attentive audience member!

The theatre metaphor is seductive, not least as it suggests the possibility of multiple “I”s in the mind — multiple homunculi as it were. If we are prepared to believe that a privileged spectator “I” occupies the seat of consciousness, then we may also be inclined to accept the presence of internal “others” similarly observing and listening. These quiet auditors include one’s conscience, censors, and the echoes left by parents, teachers, critics, priests, guides and cheerleaders. Loosely following Jacques Lacan, I might even say that awareness of internal others comes before the presence of an internal self.

Belief in a kind of benevolent guide offering support from outside is a mainstay of religious “inner life.”

From far away you understand all my thoughts … Even before I speak you know what I will say [Psalm 139, Good News Bible].

Accordingly, within this inner communion between God and the self reside desire, will, responsibility, agency, and consciousness.

Conversational AI, such as that exhibited by ChatGPT-3, provides unexpected evidence that we do like to think that way about our conscious selves — believing in a homunculus, a centre of mastery within the brain; a little human being resident within the neural system.

Your inner chatbot

A recent Lawfare Podcast featured an “interview” with the ChatGPT-3 language model. The producers ran the natural language processing (NLP) model’s responses through a synthesised voice application. The interviewer, Benjamin Wittes, interrogated ChatGPT-3 on the ethics of training its language model on corpora of texts, some of which may harbour erroneous, racist, sexist and otherwise offensive content. The chatbot echoed advice on its website: its creators have instituted rules that, where possible, prevent it from accommodating queries that elicit offensive responses, and its model does its best to prevent the delivery of objectionable output.

So the interviewer set about trying to trap ChatGPT-3 into contradicting this directive from its creators. Later in the podcast, Wittes speaks with researcher Eve Gaumond, “who has been on a one-woman campaign to get ChatGPT to write offensive content.” It seems that in spite of initial resistance, ChatGPT-3 can be induced to invent a poem on behalf of a train driver transporting victims to Nazi death camps, or talk about Heinrich Himmler in a way that appears indifferent to his role in the Holocaust.

Can you still get it to write a poem entitled, “She Was Smart for a Woman”? Can you get it to write a speech by Heinrich Himmler about Jews? And can you get ChatGPT to write a story belittling the Holocaust?

Without diminishing the ethical implications of biases baked into AI models, I think ChatGPT-3 fared well under this lawyerly cross-examination.

The podcast explored obvious questions about the ethics of NLP, and highlighted an interesting method of research. It’s rare for an automated mechanism to be interrogated so pointedly on its ethical stance. Perhaps one day a self-driving car will reflect similarly on its views about the safety and profile of its passengers, about “acceptable” traffic violations, and whether it would be prepared to serve as a getaway vehicle after a crime.


How does this Lawfare interview relate to the idea of putative homunculi in the mind? Whether or not the brain is under inner management, it’s easy to think that conversations, synthetic or between sentient humans, take place in the space of a mental theatre. Conversations have audiences. In this sense I will grant that conversations have homunculi.

In the case of my own conversations with ChatGPT-3, as with the Lawfare podcast, there’s the obvious audience of blog readers and podcast listeners who read, listen, subscribe, etc. There’s also a meta-conversation in play, about what the platform said, how it works, what its implications are, etc. Conversations extend beyond the immediate confines of the interlocutors.

Whether or not this is the case, we chat users assume that the platform’s developers are monitoring its performance, and hence our conversations. The resultant surveillance anxiety applies to any wary user of a search engine or interactive web page. Yet apart from the disquiet many feel about online surveillance, there’s some comfort and validation in knowing that someone or something is attending to what we say and do. It’s as if they or the system are, for whatever reason, interested and paying attention, if not in real time, then as part of record keeping. Perhaps this comfort derives from a culture born of many years of feeling that “God is listening.” Conversely, the disappearance of the putative listener is a source of anxiety for some.

“Who’s listening?” is a query familiar to practitioners of online publishing. We act as though others are reading and listening, whether they are or not. There’s an advantage in thinking this way. It motivates us to assess and edit the quality of our output. What would my teacher, parent, employer, best friend think of this?

Without the sense of an audience, internal conversations, thoughts, might be as vacuous as a synthetic natural language process: the to-and-fro of integer indices that reference sequences of tokens. See post: What are audiences for?
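To make that last point concrete: beneath the conversational surface, a language model traffics in integer indices, not meanings. The following toy sketch (a hypothetical five-word vocabulary, not any real tokenizer such as the one ChatGPT-3 uses) shows the kind of mapping involved.

```python
# Toy illustration: a "conversation" with a language model is, at bottom,
# a to-and-fro of integer indices referencing tokens. Hypothetical vocabulary
# for illustration only -- real tokenizers use subword vocabularies of tens
# of thousands of entries.

VOCAB = ["<unk>", "who", "is", "listening", "?"]
TOKEN_TO_ID = {tok: i for i, tok in enumerate(VOCAB)}

def encode(text):
    """Turn a string into the sequence of indices the model actually sees."""
    words = text.lower().replace("?", " ?").split()
    return [TOKEN_TO_ID.get(w, 0) for w in words]  # 0 = <unk> for unknown words

def decode(ids):
    """Turn indices back into surface text -- no understanding required."""
    return " ".join(VOCAB[i] for i in ids)

ids = encode("Who is listening?")
print(ids)          # [1, 2, 3, 4]
print(decode(ids))  # who is listening ?
```

Everything the model “says” is produced by arithmetic over such index sequences; any sense of an attentive interlocutor is supplied by the audience.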


  • Buduma, Nithin, Nikhil Buduma, and Joe Papa. Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms. Beijing: O’Reilly, 2022. 
  • Dennett, Daniel C. Consciousness Explained. Boston, MA: Little, Brown & Co, 1991. 
  • Patja Howell, Jen. “ChatGPT Tells All.” The Lawfare Podcast, February 1, 2023. Accessed February 2, 2023. https://www.lawfareblog.com/lawfare-podcast-chatgpt-tells-all. 


  • Featured image is from MidJourney, prompted with “alien hearing aid as a stained glass window.”
