AI’s progress isn’t the same as creating human intelligence in machines

by Oren Etzioni
from MIT Technology Review

The term "artificial intelligence" really has two meanings. AI refers both to the fundamental scientific quest to build human intelligence into computers and to the engineering work of modeling massive amounts of data. These two endeavors are very different, both in their ambitions and in the amount of progress they have made in recent years.

Scientific AI, the quest to both construct and understand human-level intelligence, is one of the most profound challenges in all of science; it dates back to the 1950s and is likely to continue for many decades.

Data-centric AI, on the other hand, began in earnest in the 1970s with the invention of methods for automatically constructing "decision trees" and has exploded in popularity over the last decade with the resounding success of neural networks (now dubbed "deep learning"). Data-centric artificial intelligence has also been called "narrow AI" or "weak AI," but the rapid progress over the last decade or so has demonstrated its power.

Deep-learning methods, coupled with massive training data sets and unprecedented computational power, have delivered success on a broad range of narrow tasks, from speech recognition to game playing and more. These methods build predictive models that grow increasingly accurate through a compute-intensive, iterative training process. Until recently, the need for human-labeled training data was a major bottleneck. But the focus of research and development has shifted to ways in which the necessary labels can be created automatically, based on the internal structure of the data.
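To make that last idea concrete, here is a minimal sketch (not from the essay) of the kind of self-supervised labeling behind modern language models: every word of every sentence can serve as a "free" label for its surrounding context, so no human annotator is needed. The function name and mask token below are illustrative.

```python
def make_masked_examples(sentence, mask_token="[MASK]"):
    """Turn raw text into (input, label) training pairs with no human
    annotation: each example hides one word, and the hidden word is the label."""
    words = sentence.split()
    examples = []
    for i, word in enumerate(words):
        masked_input = " ".join(words[:i] + [mask_token] + words[i + 1:])
        examples.append((masked_input, word))
    return examples

# Every sentence in a corpus yields several labeled examples automatically.
for masked_input, label in make_masked_examples("the cat sat on the mat"):
    print(f"{masked_input!r} -> {label!r}")
```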

The GPT-3 language model released by OpenAI in 2020 exemplifies both the potential and the challenges of this approach. Trained on billions of sentences, GPT-3 automatically generates highly plausible text and even answers questions on a broad range of topics sensibly, mimicking the language a person might use.
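For readers curious what using such a model looks like in practice, here is a minimal sketch using OpenAI's Python client as it existed around GPT-3's release; the engine name and sampling parameters are illustrative, not prescriptive.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

response = openai.Completion.create(
    engine="davinci",                      # GPT-3 base engine at launch
    prompt="Q: Why is the sky blue?\nA:",  # free-form question prompt
    max_tokens=60,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```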

This essay is part of MIT Technology Review's 2022 Innovators Under 35 package recognizing the most promising young people working in technology today. See the full list here or explore the winners in this category below.

But GPT-3 suffers from several problems that researchers are working to address. First, it's often inconsistent: you can get contradictory answers to the same question. Second, GPT-3 is prone to "hallucinations": when asked who the president of the United States was in 1492, it will happily conjure up an answer. Third, GPT-3 is expensive to train and expensive to run. Fourth, GPT-3 is opaque: it's difficult to understand why it drew a particular conclusion. Finally, since GPT-3 parrots the contents of its training data, which is drawn from the web, it often spews out toxic content, including sexism, racism, xenophobia, and more. In essence, GPT-3 cannot be trusted.

Despite these challenges, researchers are investigating multi-modal versions of GPT-3 (such as DALL-E 2), which create realistic images from natural-language prompts. AI developers are also considering how to apply these insights to robots that interact with the physical world. And AI is increasingly being applied to biology, chemistry, and other scientific disciplines to glean insights from the massive data sets and complexity of those fields.

The bulk of the rapid progress today is in this data-centric AI, and the work of this year's 35 Innovators Under 35 winners is no exception. While data-centric AI is powerful, it has key limitations: the systems are still designed and framed by humans. A few years ago, I wrote an article for MIT Technology Review called "How to know if artificial intelligence is about to destroy civilization." I argued that successfully formulating problems remains a distinctly human capability. Pablo Picasso famously said, "Computers are useless. They only give you answers."

We continue to anticipate the distant day when AI systems can formulate good questions, and thereby shed more light on the fundamental scientific challenge of understanding and constructing human-level intelligence.

Oren Etzioni is CEO of the Allen Institute for AI and a judge for this year's 35 Innovators competition.
