Industry, education and everyday users are making increasing use of large language models (LLMs), driven in part by the prominence of ChatGPT and other AI tools. The technology is developing apace. The analyses of commentators, critics and legislators are also gaining traction as they evaluate the implications of the technology and seek to influence its future.
Here I want to gather some possible scenarios for the future of LLMs, with an eye on education though relevant to all professions.
I assume that LLMs will be integrated into every application and website. In the 1980s many commentators predicted that standalone desktop systems such as word processors, graphics programs and CAD (computer-aided design) packages would be linked seamlessly via the Internet back to servers at the supplier's end for updates, access to product libraries and to perform calculations offsite. We take that interconnection for granted now.
We can expect something similar for LLMs. Photoshop (beta) incorporates a "generative fill" feature built on generative AI capabilities similar to those of Midjourney. Microsoft's Copilot feature uses GPT-4 to draft documents and respond to prompts. There are many reasons to expect that VLEs (virtual learning environments) such as Blackboard will incorporate text-based AI features. The prediction here is that regular users will eventually treat AI integration as they regard networked interconnectivity, location services on a smartphone, and text completion on messaging apps: as ordinary.
If this prediction is realised, conversations about AI will shift to a different register. Instead of speculation about the impact of a standalone LLM service such as ChatGPT, the conversation will be informed by actual use: a recognition of strengths and limitations grounded in practical experience and in the observable social, commercial and political impacts of ubiquitous AI.
Suppliers will develop payment models that will influence the development and use of LLMs. At the moment I pay OpenAI $20 per month for relatively unrestricted use of the ChatGPT platform in conversational mode, though other cost models would kick in were I to use it for something more demanding, such as summarising large databases of interview transcripts.
As with many other services, we might expect LLMs to be accessed via subscriptions, paywalls and free-offer enticements. Apart from having to endure the imposition of rampant commercialisation, the cost of even prosaic uses of AI might become prohibitive for the average consumer. I'm thinking here of students who may have to restrict their use due to cost constraints.
Cost models may include advertising revenue. The general WWW is permeated and surrounded by advertisements, clickbait, pop-ups, paywalls and other paraphernalia of the commercialisation of the Internet. One could imagine all of these devices deployed in the context of conversational AI.
That could range from banners and pop-ups inserted into conversational AI threads to endorsements delivered in a conversational and personalised style within threads: "We are chatting about the architecture of Barcelona. Did you know that JetTwo will fly you there for £50 around the time of your next holiday?" The AI might even have "negotiated" a deal with the airline.
Such commercial interventions might be acceptable were they sourced from an independent broker, but it's also likely that a recommendation would come from a sponsor, including a cost-comparison website. As a consumer I'm thinking here of the annoyance (and necessity) of online advertising.
Of more insidious character is the use of AI in consumer profiling, the monetisation of our clicks and who knows what else might be disclosed or inferred from our interactions with conversational AI!
Content creators require payment and compensation. LLMs are successful now in part due to the ready availability online of diverse, high-quality content able to be deployed as training data.
We know that content from Wikipedia was used in training ChatGPT models. Questions remain as to whether training an AI on material held under Creative Commons licensing constitutes "fair use." In any case, contributors to Wikipedia do so without payment. As OpenAI and other LLM creators monetise their products, we might reasonably suppose that Wikipedia contributors have an interest in how their content is used by others, and may demand payment or bar its use in a commercial context.
Similar issues arise in the case of blog posts, social media posts, educational material, fan fiction and other content delivered for free by online authors. Published books and articles constitute a further challenge, as do books that publishers may distribute as open access, thereby diminishing the royalties payable to authors. Scholars don't typically rely on royalties to make a living, and most are at a stage where the copyright landscape appears "interesting," awaiting a shake-down in various ongoing legal cases.
Presumably my own outputs are amongst the millions of words in ChatGPT's training data. I prompted ChatGPT-4 to "Imagine you are Richard Coyne. Please write a 500 word blog post on the significance of the concept of attunement in understanding the city." No doubt the platform produced a good blog post, but not in my style. I would not write "Cities are incredibly complex structures." Nor would I conclude with advocacy of a particular response to the theme (though perhaps I should!): "The concept of attunement, therefore, holds immense potential in guiding the future of urban design and planning in the digital age."
Such an exercise allays any fear that I am being ripped off, though perhaps such AI forensics replaces it with an anxiety that my output is not worthy, or of insufficient volume, to influence the learning model. For a contrary position see an interesting podcast from the New York Times: The writers' revolt against A.I. companies.
Thinking of the post-digital trend of recent decades, there's the possibility of a reversion to non-AI-generated content: less-than-perfect syntax, quirky and idiosyncratic production that could never have been produced by an AI, a return to the labour-intensive craft of writing. This could include certification that the content had no AI input, or simply a proud assertion or credit carried as a badge of honour. See post: When did we become post-digital.
Messing with AI
Good and bad actors will attempt to infiltrate and corrupt text-based AI. To reassert themselves in the face of AIs that "scrape" their content from the Internet, some fan fiction writers have sought to confuse AI data collection by publishing irreverent stories online to mess up the training corpus, according to a New York Times article.
Whether or not they achieved that goal, one can imagine various ways people might try to corrupt the candidate pool for a training corpus. Other misuses of AI and AI methodologies include overtly or covertly exposing users to an AI trained or tuned on data skewed towards a particular political position, or deploying conversational AIs that masquerade as humans in call centres to mollify customers who phone in with complaints.
The use of AI in espionage, covert operations and subversion has been well-aired in the press and elsewhere. See post: Organic cyberwars.
No-win-no-fee lawyers are already approaching students who wish to claim compensation from universities for requiring them to study online instead of in person during the COVID pandemic. Similar discontent could arise as students are expected to engage one-to-one with an AI for assistance and instruction in lieu of meeting a tutor. After all, students may decide that one-to-one meetings with an AI can be just as good as, if not better than, meeting with a personal tutor. The prospect of using ChatGPT-4 to generate or assist in writing student feedback, and even to assess work, has occurred to many of us, as has the extent to which an AI can inspire, write, edit, correct and improve an essay before it is submitted for marking.
Then there's the prospect of de-skilling or re-skilling: adopting new modes of writing, reading and even thinking. An AI can summarise a long text, and transpose something difficult into everyday language. AI might even teach us to write better, or with greater brevity. What would it be like to write a blog or a book as a series of AI prompts that could then be reconstructed by the reader in an AI chat session? I doubt we'll ever dispense with having to write clear and articulate explanatory prose, but AI is bound to have an influence, as writing and reading in the past were influenced by headlines, advertising slogans and texting.
I expect AI to act as a disruptor across many dimensions of the education space. I offer the scenarios above as possible contexts in which learning and teaching under the influence of AI will take place.
- Frenkel, S. and M. Barbaro (2023). “The writers’ revolt against A.I. companies.” New York Times: The Daily 18 July. Retrieved 19 July 2023, from https://podcasts.apple.com/gb/podcast/the-daily/id1200361736?i=1000621495806.
- Frenkel, S. and S. A. Thompson (2023). “‘Not for Machines to Harvest’: Data Revolts Break Out Against A.I.” New York Times 15 July. Retrieved 19 July 2023, from https://www.nytimes.com/2023/07/15/technology/artificial-intelligence-models-chat-data.html.