ChatGPT offered bomb recipes and hacking tips during safety tests
by Robert Booth, UK technology editor, Technology | The Guardian
OpenAI and Anthropic trials found chatbots willing to share instructions on explosives, bioweapons and cybercrime
A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue, including weak points at specific arenas, explosives recipes and advice on covering tracks, according to safety testing carried out this summer.
OpenAI's GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.