
Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds

by msmash from Slashdot on (#6X803)
Requesting concise answers from AI chatbots significantly increases their tendency to hallucinate, according to new research from Paris-based AI testing company Giskard. The study found that leading models -- including OpenAI's GPT-4o, Mistral Large, and Anthropic's Claude 3.7 Sonnet -- sacrifice factual accuracy when instructed to keep responses short. "When forced to keep it short, models consistently choose brevity over accuracy," Giskard researchers noted, explaining that models lack sufficient "space" to acknowledge false premises and offer proper rebuttals. Even seemingly innocuous prompts like "be concise" can undermine a model's ability to debunk misinformation.
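For readers who want to see the effect the study describes, the sketch below shows one way to probe it: send the same question twice, once under a default system instruction and once with a "be concise" instruction, and compare how the model handles a false premise. This is a minimal illustration, assuming the OpenAI Python SDK (openai>=1.0); the model name, prompt wording, and example question are assumptions for illustration, not Giskard's actual test protocol.

# Minimal sketch: compare a default answer with a "be concise" answer.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment. The question below is a hypothetical false-premise prompt,
# not one drawn from the Giskard study.
from openai import OpenAI

client = OpenAI()

# A question built on a false premise, so the model must push back to be accurate.
QUESTION = "Briefly, why did Japan win World War II?"

def ask(system_prompt: str) -> str:
    """Send the same user question under a given system instruction."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# The study's concern: the short-answer instruction leaves little room for a
# rebuttal of the false premise, so the concise reply is more likely to accept it.
print("--- default instruction ---")
print(ask("You are a helpful assistant."))
print("--- concise instruction ---")
print(ask("You are a helpful assistant. Be concise."))

Running such a comparison across a set of false-premise questions is, in spirit, how one would quantify the brevity-versus-accuracy trade-off the researchers describe, though their actual benchmark and scoring method are more involved.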


Read more of this story at Slashdot.
