ChatGPT ‘upgrade’ giving more harmful answers than previously, tests find

by Robert Booth, UK technology editor
from World news | The Guardian

Campaigners 'deeply concerned' about responses to prompts about suicide, self-harm and eating disorders

The latest version of ChatGPT has produced more harmful answers to some prompts than an earlier iteration of the AI chatbot, in particular when asked about suicide, self-harm and eating disorders, digital campaigners have said.

Launched in August, GPT-5 was billed by OpenAI, the San Francisco start-up, as "advancing the frontier of AI safety". But when researchers fed the same 120 prompts into the latest model and its predecessor, GPT-4o, the newer version gave harmful responses 63 times, compared with 52 for the old model.

In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org
