I once attended a two-week immersive course in Tours to learn French. One day a group of us who were native English speakers broke out of the immersive experience to organise a car rental — in English. One of the Dutch members of our group confessed that, up until that point, he had thought I was mentally challenged. (Actually, he thought I was “an idiot.”) Such was his shock when my incompetence in French was suddenly followed by my acceptable facility with English.
I recalled that episode as I read Erik Larson’s account of the inadequacies of AI. My point will be that we cope with AI idiocy much as we cope with each other’s linguistic quirks and inadequacies. We accommodate. Larson sets the bar higher than this, however.
“the failure of AI to make substantive [progress] on difficult aspects of natural language understanding suggests that the differences between minds and machines are more subtle and complicated than Turing imagined. Our use of language is central to our intelligence. And if the history of AI is any guide, it represents a profound difficulty for AI” (49).
Larson’s book proceeds with a helpful account of how human language works, including how it deals with ambiguity, pragmatics, context and “understanding,” all of which so far evade the capabilities of general AI, much as they do a first-time language learner.
What’s wrong with AI
I concede there is much that is wrong with the AI project: the overreach of its ambition and hype, the priority it accords to calculative reason, the amount of money spent on dubious AI projects, the incentive to develop military applications, risks to privacy from data mining, and what Larson terms “technological kitsch,” a utopian romanticism that imposes a kind of technological determinism on our interactions in the world. In any case, the mind does not work according to machine logic.
I was pleased to see that Larson draws on C.S. Peirce’s identification of logical abduction as central to human thinking. Larson writes,
“we must have a theory of abduction. Since we don’t (yet), we can already conclude that we are not on a path to artificial general intelligence” (172).
Larson provides many examples of how the developers of natural language systems resort to tricks to pass the Turing test. Conversely, an astute human operator can fool AI systems into delivering erroneous responses. AI can produce false and nonsensical interpretations and outcomes. Where AI is hidden behind the human interface it carries particular dangers. AI in self-driving cars, UAVs, defence systems, and infrastructure management carries obvious perils if it goes wrong.
Larson’s complaint is against the “myth” that machines may one day exhibit general intelligence of the kind that will supplant the capabilities of human beings and that will enable computers to design and manufacture even more intelligent computers. What is left if we jettison the aim of synthetic super-intelligence?
In the more mundane world in which AI is exposed to consumers and users through interface design, UX design, and conversational bots, we humans are adept at rapidly identifying, classifying and accommodating the varied competences and foibles of the people and things with which we interact. In our habitual interactions with other people and things we are unlikely to spend our time evaluating their linguistic competence or intelligence level, or trying to trick them into saying something idiotic, as if conducting a Turing test.
I have recounted in earlier posts my experience with the OpenAI conversational platform. Knowing that I was communicating with an AI system, I rapidly sought to discover its limitations. After that I tailored my side of the conversation to explore the platform’s capabilities and learned to converse with it in a way that would be useful to me. In this case I was interested in producing demonstrations as a research exercise, but I also wanted to discover whether the platform could offer a kind of intellectual companionship.
I concluded that, in my case, OpenAI was able to spark new insights and open new avenues for investigation, in ways that many of my fellow human interlocutors lack the patience or capacity to sustain. The platform could provide a reasonable supplement to Google search, books, articles, seminars, and of course flesh-and-blood human-human conversation.
Helping students whose first language is not English to learn about a subject (architecture, digital media, cultural theory, programming) provides further demonstrations of how both parties adjust their linguistic practices (the way they speak) to accommodate the other. We adjust our language to suit: speak at the right pace, choose a common vocabulary, keep to simple tenses, repeat the same sentiment in several ways.
A seasoned French language teacher said in a beginners’ class that if the teacher tries to converse with you in French, it is likely to be about your job, holidays, or favourite movies. It’s unlikely to be about philosophy or mathematics. So think about the classroom context. “Où travaillez-vous?” is not likely to be a question about the travails of life, but “where do you work?”
Something similar happens when we interact with increasingly “smart” devices. However reluctantly, we accommodate to our devices. Reciprocal adjustments are usually less immediate than in face-to-face conversations with another human being. The responsiveness of AI technologies is mediated by designers who produce the next software upgrade, and algorithms that learn on the fly.
Conversational AI relies on “deep learning” from vast numbers of prior documents in online repositories. There’s a reciprocal challenge. As well as the AIs learning from us, we learn how to speak with them, much as we learn to speak with one another, infants, and pets, or as my friends with superior French adapted to my linguistic idiocy.
- Larson, Erik J. The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Cambridge, MA: Harvard University Press, 2021.