
GPT-3 aces tests of reasoning by analogy

by
John Timmer
from Ars Technica - All content

Large language models are a class of AI algorithm that relies on a large number of computational nodes and an equally large number of connections among them. They can be trained to perform a variety of functions (protein folding, anyone?), but they're mostly recognized for their capabilities with human languages.

LLMs trained simply to predict the next word in a text can produce human-sounding conversations and essays, albeit with some worrying accuracy issues. These systems have demonstrated a variety of behaviors that appear to go well beyond the simple language capabilities they were trained to handle.
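To make the training objective concrete, here is a deliberately tiny sketch of next-word prediction. It is not how GPT-3 works internally (GPT-3 uses a neural network over token probabilities, not raw counts); it is just a bigram-count toy, with a made-up corpus, that illustrates what "predict the next word" means:

```python
from collections import Counter, defaultdict

# Toy illustration only (not GPT-3): predict the next word by counting
# which word most often follows each word in a tiny training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the word that most frequently followed `word` in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

A real LLM replaces the count table with a learned model that assigns a probability to every possible next token given the whole preceding context, but the prediction task itself is the same.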

We can apparently add analogies to the list of skills LLMs have inadvertently mastered. A team from the University of California, Los Angeles has tested the GPT-3 LLM using questions that should be familiar to any American who has spent time on standardized tests like the SAT. In all but one variant of these questions, GPT-3 outperformed undergraduates who had presumably mastered those tests just a few years earlier. The researchers suggest that this indicates LLMs can master reasoning by analogy.
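For readers unfamiliar with the format, SAT-style analogies are four-term questions of the form A : B :: C : ?. A hypothetical helper (the function name and example terms are illustrative, not from the study) shows how such a question can be posed to a text-completion model as a prompt to finish:

```python
def analogy_prompt(a, b, c):
    # Format a four-term verbal analogy (A : B :: C : ?) as a text prompt.
    # A next-word predictor is then asked to complete the missing term D.
    return f"{a} is to {b} as {c} is to"

print(analogy_prompt("hot", "cold", "up"))  # a model completing this should answer "down"
```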

