
Which AI Model Provides the 'Best' Answers?

by BeauHD, from Slashdot (#6H4BS)
An anonymous reader quotes a report from Ars Technica: For those looking for a more rigorous way of comparing various models, the folks over at the Large Model Systems Organization (LMSys) have set up Chatbot Arena, a platform for generating Elo-style rankings for LLMs based on a crowdsourced blind-testing website. Chatbot Arena users can enter any prompt they can think of into the site's form to see side-by-side responses from two randomly selected models. The identity of each model is initially hidden, and results are voided if the model reveals its identity in the response itself. The user then gets to pick which model provided what they judge to be the "better" result, with additional options for a "tie" or "both are bad." Only after providing a pairwise ranking does the user get to see which models they were judging, though a separate "side-by-side" section of the site lets users pick two specific models to compare (without the ability to contribute a vote on the result).

Since its public launch back in May, LMSys says it has gathered over 130,000 blind pairwise ratings across 45 different models (as of early December). Those numbers seem poised to increase quickly after a recent positive review from OpenAI's Andrej Karpathy that has already led to what LMSys describes as "a super stress test" for its servers. Chatbot Arena's thousands of pairwise ratings are crunched through a Bradley-Terry model, which uses random sampling to generate an Elo-style rating estimating which model is most likely to win in direct competition against any other. Interested parties can also dig into the raw data of tens of thousands of human prompt/response ratings for themselves or examine more detailed statistics, such as direct pairwise win rates between models and confidence interval ranges for those Elo estimates.

Chatbot Arena's latest public leaderboard update shows a few proprietary models easily beating out a wide range of open-source alternatives. OpenAI's ChatGPT-4 Turbo leads the pack by a wide margin, with only an older GPT-4 model ("0314," which was discontinued in June) coming anywhere close on the ratings scale. But even months-old, defunct versions of GPT-3.5 Turbo outrank the highest-rated open-source models available in Chatbot Arena's testbed. Anthropic's proprietary Claude models also feature highly in Chatbot Arena's top rankings. Oddly enough, though, the site's blind human testing tends to rank the older Claude-1 slightly higher than the subsequent releases of Claude-2.0 and Claude-2.1. Among the tested non-proprietary models, the Llama-based Tulu 2 and 01.ai's Yi get rankings that are comparable to some older GPT-3.5 implementations. Past that, there's a slow but steady decline until you get to models like Dolly and StableLM at the bottom of the pack (amid older versions of many models that have more recent, higher-ranking updates on Chatbot Arena's charts).
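To make the Bradley-Terry ranking step described above more concrete, here is a minimal Python sketch (not LMSys's actual code) of how pairwise "A beat B" votes can be fit to a Bradley-Terry model and mapped onto an Elo-style scale. The battle data, model names, and 1,000-point anchor are made up for illustration, and this sketch uses a deterministic maximum-likelihood fit rather than the sampling-based estimation the article mentions; it also ignores ties and "both are bad" votes.

# Minimal sketch: Bradley-Terry fit over hypothetical pairwise votes,
# converted to an Elo-style rating. Data and anchor point are illustrative.
import math
from collections import defaultdict

battles = [                       # (winner, loser) pairs from blind votes
    ("gpt-4-turbo", "claude-1"),
    ("gpt-4-turbo", "tulu-2"),
    ("claude-1", "tulu-2"),
    ("claude-1", "gpt-4-turbo"),  # an upset
    ("tulu-2", "claude-1"),       # another upset
]

models = sorted({m for pair in battles for m in pair})
wins = defaultdict(int)           # total wins per model
games = defaultdict(int)          # games played per unordered pair
for winner, loser in battles:
    wins[winner] += 1
    games[frozenset((winner, loser))] += 1

# Iterative maximum-likelihood fit (Zermelo's algorithm) for the
# Bradley-Terry model, where P(i beats j) = p_i / (p_i + p_j).
strength = {m: 1.0 for m in models}
for _ in range(200):
    for m in models:
        denom = sum(
            games[frozenset((m, other))] / (strength[m] + strength[other])
            for other in models
            if other != m and games.get(frozenset((m, other)), 0)
        )
        if denom:
            strength[m] = wins[m] / denom
    mean = sum(strength.values()) / len(strength)
    strength = {m: s / mean for m, s in strength.items()}  # keep scale stable

# Convert fitted strengths to an Elo-like scale, anchored at 1000 points.
ratings = {m: 400 * math.log10(s) + 1000 for m, s in strength.items()}
for model, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model:>12}: {rating:.0f}")

Run on this toy data, the script simply prints the three hypothetical models in rating order; the real leaderboard performs the same kind of fit over far more battles and attaches the confidence intervals mentioned above.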


