
Will ChatGPT’s hallucinations be allowed to ruin your life?

by Ashley Belanger, Ars Technica

(Image credit: Aurich Lawson)

Bribery. Embezzlement. Terrorism.

What if an AI chatbot accused you of doing something terrible? When bots make mistakes, the false claims can ruin lives, and the legal questions around these issues remain murky.

That's according to several people suing the biggest AI companies. But chatbot makers hope to avoid liability, and a string of legal threats has revealed how easy it might be for companies to wriggle out of responsibility for allegedly defamatory chatbot responses.

