Google’s powerful AI spotlights a human cognitive glitch

by The Conversation

(credit: Getty Images)

When you read a sentence like this one, your past experience tells you that it's written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text.

People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap one's head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural, but potentially misleading, to think that if an AI model can express itself fluently, it must think and feel just as humans do.

Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google's AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking, feeling, and experiencing.
