
Why Mastering Language Is So Difficult For AI

by
EditorDavid
from Slashdot on (#64SV7)
Long-time Slashdot reader theodp writes: UNDARK has an interesting interview with NYU professor emeritus Gary Marcus (PhD in brain and cognitive sciences, MIT) about why mastering language is so difficult for AI. Marcus, who has had a front-row seat for many of the developments in AI, says we need to take AI advances with a grain of salt. Starting with GPT-3, Marcus begins:

"I think it's an interesting experiment. But I think that people are led to believe that this system actually understands human language, which it certainly does not. What it really is, is an autocomplete system that predicts next words and sentences. Just like with your phone, where you type in something and it continues. It doesn't really understand the world around it.

"And a lot of people are confused by that. They're confused by that because what these systems are ultimately doing is mimicry. They're mimicking vast databases of text. And I think the average person doesn't understand the difference between mimicking 100 words, 1,000 words, a billion words, a trillion words. When you start approaching a trillion words, almost anything you can think of is already talked about there. And so when you're mimicking something, you can do that to a high degree, but it's still kind of like being a parrot, or a plagiarist, or something like that. A parrot's not a bad metaphor, because we don't think parrots actually understand what they're talking about. And GPT-3 certainly does not understand what it's talking about."
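The "autocomplete" mechanism Marcus describes can be illustrated with a toy next-word predictor. The sketch below is a simple bigram model — a deliberately crude stand-in, not GPT-3's actual architecture — that "mimics" its training text by always emitting the continuation it has seen most often, with no representation of meaning at all:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word -> next-word transitions in a text corpus."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen in training, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny illustrative corpus (hypothetical data, echoing the parrot metaphor).
corpus = (
    "the parrot repeats the phrase it heard "
    "the parrot repeats the sound without understanding"
)
model = train_bigrams(corpus)
print(predict_next(model, "parrot"))  # -> "repeats"
```

The model produces fluent-looking continuations purely by replaying statistics of its training text — Marcus's point is that scaling this idea up to a trillion words makes the mimicry far more convincing, but does not by itself add understanding.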
Marcus also has cautionary words about Google's LaMDA ("It's not sentient, it has no idea of the things that it is talking about"), driverless cars ("Merely memorizing a lot of traffic situations that you've seen doesn't convey what you really need to understand about the world in order to drive well"), OpenAI's DALL-E ("A lot of AI right now leverages the not-necessarily-intended contributions by human beings, who have maybe signed off on a 'terms of service' agreement, but don't recognize where this is all leading to"), and what's motivating the use of AI at corporations ("They want to solve advertisements. That's not the same as understanding natural language for the purpose of improving medicine. So there's an incentive issue."). Still, Marcus says he's heartened by some recent AI developments: "People are finally daring to step out of the deep-learning orthodoxy, and finally willing to consider 'hybrid' models that put deep learning together with more classical approaches to AI. The more the different sides start to throw down their rhetorical arms and start working together, the better."


Read more of this story at Slashdot.

External Content
Source RSS or Atom Feed
Feed Location https://rss.slashdot.org/Slashdot/slashdotMain
Feed Title Slashdot
Feed Link https://slashdot.org/
Feed Copyright Copyright Slashdot Media. All Rights Reserved.