
This is a great illustration of how AIs don’t “know” anything – they generate an answer one word at a time, drawing on a huge corpus of text and predicting which word is most likely to come next based on what seems relevant to the answer so far.
Even though Bing “knows” that Sunak is PM, as you can see from the second question, it can’t use that fact in an answer about public-school members of the cabinet, because the training data skews towards discussion of Johnson’s cabinet (for a good reason – his proportion of public schoolers was much higher than Truss’s, so many people wrote about it).
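To make that mechanism concrete, here’s a deliberately tiny sketch of next-word prediction. It’s a toy bigram model – nothing like how Bing or Bard actually work, since real LLMs use neural networks over subword tokens and vastly more data – but the basic trick is the same: the answer is assembled one word at a time from whatever the training text happened to say, so a corpus that mostly talks about Johnson’s cabinet produces answers about Johnson’s cabinet. The training text and function names below are invented purely for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy training text: the model can only ever echo what this text talks
# about, just as an LLM can only echo patterns in its training corpus.
corpus = (
    "johnson went to eton . "
    "johnson filled his cabinet with public school ministers . "
    "many people wrote about johnson and his cabinet ."
).split()

# Count which word follows which - a crude bigram table, but the
# principle of next-word prediction is the same.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=10):
    """Emit one word at a time, each chosen in proportion to how often
    it followed the current word in the training text."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

# Prints a "Johnson-flavoured" string assembled word by word.
print(generate("johnson"))
```

There is no fact-checking step anywhere in that loop – and, scaled up enormously, there isn’t one in an LLM either.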
Google’s Bard is even less accurate:

Almost every fact in this response is wrong. Johnson went to Eton, but is no longer PM; Sunak is no longer chancellor and went to Winchester, not Eton; and Truss is no longer in the cabinet and went to a state school.
The counterpoint to this is the idea that AI is only at the start of its journey, and all this will be ironed out “eventually”. My view is the opposite: I don’t think that, as currently constituted, large language model-based AI is capable of much improvement. Like almost every kind of AI research in the last 30 years, it’s a one-trick pony rather than a generalised system. And the story of AI research since its foundation is littered with one-trick ponies which can’t be grafted onto a more generalised intelligence.
Animal-style intelligence is a set of emergent properties that evolved in parallel, not separately. Our capacities for vision and the other senses, abstract reasoning, and communication – which cover most of what we think of as intelligence – continually interacted with and reinforced each other over millions of years. We didn’t evolve any of those capabilities in isolation.
And that’s why all machine learning efforts that solve one thing at a time will fail to produce truly intelligent systems. You can’t just “solve the vision problem” then graft on a large language model, then crowbar in an abstract game-playing system and have something intelligent. It’s like putting together a jigsaw by ignoring the shapes and just cutting off bits of the pieces till they “fit” – you lose the complete picture.
AdeptVeritatis: @ianbetteridge (I don’t read that very often, but it is good that this view is slowly becoming more popular. Thanks.) There is no need for them to survive, because AIs don’t procreate. So there is no evolution and no need to fit a specific ecosystem. The moment you stop feeding them, they stop existing. A fundamental problem. via social.tchncs.de
Prof Dr Richard S.J. Tol MAE: @ianbetteridge I’ve been trying to convince ChatGPT that Taylor Swift secretly is a panda. via mastodon.social
David Emmett: @ianbetteridge Very good. The trouble with the “LLMs are AGI” crowd is that they don’t understand the function of intelligence. Intelligence evolved as a tool for navigating the physical world. Wrong decisions about the physical world were literally punished by death. There is no punishment mechanism built into LLMs, so there is, and can be, no intelligence. Boston Dynamics will develop AGI before OpenAI, Google, or Microsoft. via masto.ai
Erik Holten: @ianbetteridge Everything in your piece is absolutely true, and should be emphasized more often to people who overly anthropomorphize things that happen to produce grammatically well-formed and semantically sensible language. However, the key phrase in the post is “as currently constituted”. Academics and industry authorities support my view that NLU tech like the LLMs, when connected to reliable world models of ground truths and rules, may get closer to “AI”: https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
ChatGPT Gets Its “Wolfram Superpowers”! via mastodon.cloud
Ian Betteridge: @Holten See, that’s the bit that I disagree with profoundly. I don’t think you can just “connect” bits of pseudo-intelligent behaviour to each other and make a thing that’s intelligent. Perception and communication, for example, co-evolved: they formed a feedback loop which delivers a particular model of reality *while developing*. That’s important. via mastodon.me.uk
OddOpinions5: @ianbetteridge 1. For a general audience, the word “emergent” is jargon. 2. Looking at the incredible progress in computers over the last 50 years, predicting the next is futurology, e.g. akin to astrology. You may be right – progress, as with airplanes, will stall – or you could be totally wrong. But the thing is, neither you nor anyone else has any clue what the future will bring, so stop being so certain. via mas.to
Gavin: @ianbetteridge It is quite handy having it built into Skype now. It’s quite good at jazz recommendations… via toot.wales
Ian Betteridge: @richardtol Wait is that not true? via mastodon.me.uk
Gavin: @ianbetteridge I see it as the way humans learn language: we may be able to say or write the words, but we don’t yet know what they mean. However, unlike ChatGPT in its current form, we are allowed to learn what they mean over time. Bing in its public form is still apparently “stock”, like Arnie before Linda Hamilton changed the jumper in his head in T2. 😂 via toot.wales
Ian Betteridge: @gavin57 It’s *fascinating* stuff. I genuinely enjoy talking to it. via mastodon.me.uk