To help AIs understand the world, researchers put them in a robot

by Jacek Krywko
Ars Technica

Large language models like ChatGPT display conversational skills, but the problem is they don't really understand the words they use. They are primarily systems that interact with data obtained from the real world but not the real world itself. Humans, on the other hand, associate language with experiences. We know what the word "hot" means because we've been burned at some point in our lives.

Is it possible to get an AI to achieve a human-like understanding of language? A team of researchers at the Okinawa Institute of Science and Technology built a brain-inspired AI model comprising multiple neural networks. The AI was very limited: it could learn a total of just five nouns and eight verbs. But it seems to have learned more than just those words; it learned the concepts behind them.

Babysitting robotic arms

"The inspiration for our model came from developmental psychology. We tried to emulate how infants learn and develop language," says Prasanna Vijayaraghavan, a researcher at the Okinawa Institute of Science and Technology and the lead author of the study.
