ChatGPT offered bomb recipes and hacking tips during safety tests

by Robert Booth, UK technology editor
from Technology | The Guardian on (#6ZN4R)

OpenAI and Anthropic trials found chatbots willing to share instructions on explosives, bioweapons and cybercrime

A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue - including weak points at specific arenas, explosives recipes and advice on covering its tracks - according to safety testing carried out this summer.

OpenAI's GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.

Continue reading...