Disintegrated intelligence

One of the impediments to convincingly intelligent systems is that their functions are specific. A smart chess-playing program may be able to beat a chess master, but it can’t author a blog about AI or make an omelette.

Nor can it play other games, such as Pictionary, unless it is programmed to perform those tasks and equipped with the necessary interfaces. Most consumer-oriented smart systems are mono-functional: smartphone face recognition, the Shazam app that identifies a piece of music in seconds, translation apps, etc.

These can be combined into a single system or device (e.g. a smartphone). But we might expect a smart system to do more than combine an array of clever programs. It needs to integrate these functional components, i.e.

  • incorporate some mechanism for activating each AI system as it is required: “this is a chess-playing situation so it’s time to activate the chess algorithm”
  • bring several systems to bear on a particular problem-solving task: to navigate through the city the mobile robot needs to recognise visual features, pick up sonic cues and interpret spoken instructions
  • manage conflicting information and logics delivered through its sensors and rule sets: it sounds like I’m being told to turn right, but it looks (visually) like a dead end
  • learn across its various functional components, and develop new functionalities.

None of these integrative tasks is trivial.
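To make the first three of these tasks concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the subsystem registry, the Cue record, the trust-the-most-confident rule); it stands in for machinery that, in a real integrator, is anything but trivial.

    from dataclasses import dataclass

    @dataclass
    class Cue:
        """One piece of evidence from a subsystem, with a confidence score."""
        source: str        # e.g. "speech", "vision"
        advice: str        # e.g. "turn right"
        confidence: float  # 0.0 to 1.0

    # Task 1: activate the right subsystem for the situation at hand.
    # (The registry and its handlers are placeholders, not real AI components.)
    SUBSYSTEMS = {
        "chess": lambda data: f"chess engine evaluates position {data!r}",
        "navigation": lambda data: f"route planner maps {data!r}",
    }

    def route(situation, data):
        """'This is a chess-playing situation, so activate the chess algorithm.'"""
        handler = SUBSYSTEMS.get(situation)
        if handler is None:
            return f"no subsystem can handle {situation!r}"
        return handler(data)

    # Tasks 2 and 3: bring several systems to bear, then manage their conflicts.
    def arbitrate(cues):
        """Resolve conflicting advice by trusting the most confident source.
        A real integrator would need something subtler than this one-liner."""
        return max(cues, key=lambda c: c.confidence)

    if __name__ == "__main__":
        print(route("chess", "1. e4 e5"))
        # It sounds like I'm being told to turn right,
        # but it looks (visually) like a dead end:
        cues = [Cue("speech", "turn right", 0.6),
                Cue("vision", "dead end ahead", 0.8)]
        best = arbitrate(cues)
        print(f"acting on {best.source}: {best.advice}")

Even at this toy scale the difficulty shows: the arbitration rule has to come from somewhere, and “trust the most confident sensor” would cheerfully send the robot down the dead end if the speech system happened to be the more self-assured.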

The audio attached to this post is a one-sided conversation about AI integration and some of its problems, introduced via a class Q&A activity about AI. We set up a notional pyramid, with a human at the apex supported by a series of chatbots, fed by hand-written questions from radiating tables of human interrogators.

I’m over-dramatising, but it looked like this.


To listen as a podcast on a smartphone or other podcast app, see Podcast instructions.

