
Why ChatGPT and Bing Chat are so good at making things up

by Benj Edwards, Ars Technica

(credit: Aurich Lawson | Getty Images)

Over the past few months, AI chatbots like ChatGPT have captured the world's attention due to their ability to converse in a human-like way on just about any subject. But they come with a serious drawback: They can present convincing false information easily, making them unreliable sources of factual information and potential sources of defamation.

Why do AI chatbots make things up, and will we ever be able to fully trust their output? We asked several experts and dug into how these AI models work to find the answers.

"Hallucinations": a loaded term in AI

AI chatbots such as OpenAI's ChatGPT rely on a type of AI called a "large language model" (LLM) to generate their responses. An LLM is a computer program trained on millions of text sources that can read and generate "natural language" text, that is, language as humans would naturally write or speak. Unfortunately, they can also make mistakes.
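To see why statistical text generation can produce confident falsehoods, consider a deliberately tiny sketch. This is not how a real LLM is implemented (real models use neural networks over billions of parameters, not lookup tables), but it illustrates the core point the article makes: the model samples the next word from learned probabilities, with no built-in notion of truth. All names and probabilities below are invented for illustration.

```python
import random

# Toy "model": next-word probabilities learned from text. Nothing here
# checks facts; "of Mars" is a fluent but wrong continuation that can
# still be sampled, the same way an LLM can emit a plausible falsehood.
NEXT_WORD_PROBS = {
    "the": [("capital", 0.5), ("moon", 0.3), ("answer", 0.2)],
    "capital": [("of", 1.0)],
    "of": [("France", 0.6), ("Mars", 0.4)],
}

def generate(start: str, steps: int, seed: int = 0) -> list[str]:
    """Sample a continuation word by word from the toy distribution."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no learned continuation: stop
            break
        choices, weights = zip(*options)
        words.append(rng.choices(choices, weights=weights)[0])
    return words

print(" ".join(generate("the", 3)))
```

Every output is grammatical and locally plausible, because that is all the probabilities encode; whether "the capital of France" or "the capital of Mars" comes out depends only on the sampled path, not on the world.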


External Content
Source RSS or Atom Feed
Feed Location http://feeds.arstechnica.com/arstechnica/index
Feed Title Ars Technica - All content
Feed Link https://arstechnica.com/