AI Armageddon

Should AI research be shut down? A group called the Future of Life Institute warns about recent developments in generative AI platforms, prompted especially by advances in natural language processing, e.g. ChatGPT. The open letter follows the institute's similar warning of 2015. Both letters have several high-profile signatories who are involved in AI development. The most recent open letter is headed “Pause Giant AI Experiments” and asks:

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

The letter offers directions for AI research.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

They define an agenda for AI-related research, covering topics such as:

  • AI regulation
  • oversight of computational capability
  • auditing and certification systems
  • attributing liabilities where AI causes harm
  • funding technical AI safety research
  • resourcing for institutions dealing with the dramatic economic and political disruptions that AI will cause.

If anything, I think this agenda elevates the allure of AI. For some, the suggestion that AI might enter such risky territory (“control of civilization”) is itself a signal that it is worth getting involved with, commercially at least.

Of particular interest for education, the open letter also advocates the development of

  • watermarking systems to help distinguish real from synthetic.

The reference to watermarking is interesting. A key article by Kirchenbauer et al. shows how it would be possible for AI platform developers, e.g. OpenAI, to introduce quirks into AI text generation that are invisible to readers but detectable by statistical analysis. That's similar to the way that speakers and writers exhibit identifiable linguistic quirks: word frequencies, turns-of-phrase, etc. The methods are explained nicely in a video by Mike Pound.
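To make the idea concrete, here is a minimal sketch in Python of the “green list” scheme described by Kirchenbauer et al., assuming a toy vocabulary and uniform base logits; the parameter values and helper names are illustrative, not the paper's or any vendor's actual settings. At each step a hash of the previous token pseudo-randomly marks a fraction of the vocabulary as “green”; generation nudges sampling toward green tokens, and detection simply counts how often green tokens appear, which ordinary writing would match only by chance.

    import hashlib
    import math
    import random

    GAMMA = 0.5  # fraction of the vocabulary marked "green" at each step (assumed)
    DELTA = 2.0  # logit bonus given to green tokens during generation (assumed)

    def green_list(prev_token: str, vocab: list[str]) -> set[str]:
        """Hash the previous token to seed an RNG, then pseudo-randomly
        mark a fixed fraction of the vocabulary as the green list."""
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        return set(random.Random(seed).sample(vocab, int(GAMMA * len(vocab))))

    def sample_next(prev_token: str, logits: dict[str, float], vocab: list[str]) -> str:
        """Soft watermark: add DELTA to the logits of green tokens,
        then sample from the resulting softmax distribution."""
        greens = green_list(prev_token, vocab)
        boosted = {t: v + (DELTA if t in greens else 0.0) for t, v in logits.items()}
        weights = [math.exp(v) for v in boosted.values()]
        return random.choices(list(boosted), weights=weights, k=1)[0]

    def detect(tokens: list[str], vocab: list[str]) -> float:
        """Z-score: how far the observed green-token count sits above
        what unwatermarked text would produce by chance."""
        hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
                   if tok in green_list(prev, vocab))
        n = len(tokens) - 1
        return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

    if __name__ == "__main__":
        vocab = [f"w{i}" for i in range(500)]
        text = ["w0"]
        for _ in range(200):  # generate with uniform base logits plus the bias
            text.append(sample_next(text[-1], {t: 0.0 for t in vocab}, vocab))
        print(f"watermarked z-score:   {detect(text, vocab):.1f}")  # roughly 10
        plain = ["w0"] + [random.choice(vocab) for _ in range(200)]
        print(f"unwatermarked z-score: {detect(plain, vocab):.1f}")  # near 0

Note that detection needs only the hashing key and the tokenizer, not the model itself: anyone holding the key can test a passage, which is what makes the quirks invisible to readers yet detectable by statistical analysis.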

Watermarking raises questions about authorship, authenticity and plagiarism. Pound makes a helpful point about AI in the educational context, with which I concur and which can be extended to attitudes about AI in general:

“In the long term, I think that what we’ll probably end up doing is not worrying quite so much about whether an essay is generated this way. We’ll be asking different things of students — maybe working with the AI or whatever the AI looks like in five months’ time, depending how fast it’s going. You might see it’s a more collaborative thing and it’s just a tool that we use.”

We are at a moment of transition.

“But at the moment we’re in that slightly odd position between it’s become a tool that we use and we know how to use it, and it’s messing around with all our exams. That’s the place we are in.”
