
OpenAI’s GPT-4 exhibits “human-level performance” on professional benchmarks

by Benj Edwards, Ars Technica
(Image credit: Ars Technica)

On Tuesday, OpenAI announced GPT-4, a large multimodal model that can accept text and image inputs while returning text output that "exhibits human-level performance on various professional and academic benchmarks," according to OpenAI. Also on Tuesday, Microsoft announced that Bing Chat has been running on GPT-4 all along.

If it performs as claimed, GPT-4 represents the opening of a new era in artificial intelligence. "It passes a simulated bar exam with a score around the top 10% of test takers," writes OpenAI in its announcement. "In contrast, GPT-3.5's score was around the bottom 10%."

OpenAI plans to release GPT-4's text capability through ChatGPT and its commercial API, but with a waitlist at first. GPT-4 is currently available to subscribers of ChatGPT Plus. Also, the firm is testing GPT-4's image input capability with a single partner, Be My Eyes, an upcoming smartphone app that can recognize a scene and describe it.
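For developers coming off the waitlist, switching to GPT-4 amounts to a model-name change in the existing chat API. Below is a minimal sketch of what a request body might look like, assuming the field names of OpenAI's chat completions endpoint; the prompt text is purely illustrative, and the actual HTTP call (an authenticated POST) is not performed here.

```python
import json

# Illustrative request body for OpenAI's chat completions API,
# assuming GPT-4 API access has been granted via the waitlist.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4's headline claims."},
    ],
    "max_tokens": 256,
}

# The request itself would be an authenticated POST to
# https://api.openai.com/v1/chat/completions (not executed in this sketch).
print(json.dumps(payload, indent=2))
```

Image input, by contrast, is not yet part of the public API; per OpenAI, that capability is being tested only with Be My Eyes for now.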

