AI, like Jon Snow, knows nothing

This is a great illustration of how AIs don’t “know” anything – they generate an answer one word at a time, predicting the most likely next word based on a huge corpus of text and on what they judge to be relevant to the answer.
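
To make that concrete, here is a minimal sketch of the greedy next-token loop at the heart of such systems. The model object and its next_token_probabilities method are hypothetical, not any vendor’s actual API; the point is only that nothing in the loop consults a store of facts – it just keeps appending the most probable continuation.

    # Hypothetical sketch of autoregressive text generation (greedy decoding).
    # "model" is assumed to return a dict mapping each candidate next token
    # to its probability, given the text so far; no real library API is quoted.

    def generate(model, prompt, max_tokens=50):
        text = prompt
        for _ in range(max_tokens):
            # Score every candidate next token given the text so far...
            probabilities = model.next_token_probabilities(text)
            # ...and append the single most likely one. There is no fact lookup:
            # "Johnson" follows "Prime Minister" only because it so often did
            # in the training corpus.
            next_token = max(probabilities, key=probabilities.get)
            text += next_token
        return text
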

Even though Bing “knows” that Sunak is PM, as you can see from the second question, it can’t use that knowledge in an answer about public school-educated members of the cabinet, because the corpus of training data skews towards talking about Johnson’s cabinet (for a good reason – his percentage of public schoolers was much higher than that of Truss, so many people wrote about it).

Google’s Bard is even less accurate:

Almost every fact in this response is wrong. Johnson went to Eton, but is no longer PM; Sunak is no longer chancellor and went to Winchester, not Eton; and Truss is no longer in the cabinet and went to a state school.

The counterpoint to this is the idea that AI is only at the start of its journey, and all this will be ironed out “eventually”. My view is the opposite: I don’t think that, as currently constituted, large language model-based AI is capable of much improvement. Like almost every strand of AI research in the last 30 years, it’s a one-trick pony rather than a generalised system. And the story of AI research since its foundation is littered with one-trick ponies which can’t be grafted onto a more generalised intelligence.

Animal-style intelligence is a set of emergent properties that evolved together, not separately. Our abilities to see and otherwise sense the world, to reason abstractly, and to communicate – which cover most of what we think of as intelligence – continually interacted with and reinforced each other over millions of years. We didn’t evolve any of those capabilities in isolation.

And that’s why all machine learning efforts that solve one thing at a time will fail to produce truly intelligent systems. You can’t just “solve the vision problem” then graft on a large language model, then crowbar in an abstract game-playing system and have something intelligent. It’s like putting together a jigsaw by ignoring the shapes and just cutting off bits of the pieces till they “fit” – you lose the complete picture.

57 Comments

  1. David Emmett: @ianbetteridge Very good. The trouble with the “LLM is AGI” crowd is that they don’t understand the function of intelligence. Intelligence evolved as a tool for navigating the physical world. Wrong decisions about the physical world were literally punished by death. There is no punishment mechanism built into LLMs, so there is, and can be, no intelligence. Boston Dynamics will develop AGI before OpenAI, Google, or Microsoft. via masto.ai


  2. Erik Holten: @ianbetteridge Everything in your piece is absolutely true, and should be emphasized more often to people who overly anthropomorphize things that happen to produce grammatically well-formed and semantically sensible language. However, the key phrase in the post is “as currently constituted”. Academics and industry authorities support my view that NLU tech like the LLMs, when connected to reliable world models of ground truths and rules, may get closer to “AI”: https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
    ChatGPT Gets Its “Wolfram Superpowers”! via mastodon.cloud


  3. Ian Betteridge: @Holten See, that’s the bit that I disagree with profoundly. I don’t think you can just “connect” bits of pseudo-intelligent behaviour to each other and make a thing that’s intelligent. Perception and communication, for example, co-evolved: they formed a feedback loop which delivers a particular model of reality *while developing*. That’s important. via mastodon.me.uk


  4. OddOpinions5: @ianbetteridge 1. For a general audience, the word “emergent” is jargon. 2. Looking at the incredible progress in computers over the last 50 years, predicting the next is futurology, i.e. akin to astrology. You may be right; progress, as with airplanes, will stall. Or you could be totally wrong. But the thing is, neither you nor anyone else has any clue what the future will bring, so stop being so certain. via mas.to


  5. Gavin: @ianbetteridge I see it as the way humans learn language: we may be able to say or write the words, but we don’t yet know what they mean. However, unlike ChatGPT in its current form, we are allowed to learn what they mean over time. Bing in its public form is still apparently “stock”, like Arnie before Linda Hamilton changed the jumper in his head in T2. 😂 via toot.wales

