Articles

Forcing LLMs to be evil during training can make them nicer in the long run
A new study from Anthropic suggests that traits such as sycophancy or evilness are associated with specific patterns of activity in large language models, and that turning on those patterns during training can, paradoxically, prevent the model from adopting the related traits. Large language models have recently acquired a reputation for behaving badly. In April, ChatGPT suddenly...
The Download: fixing ‘evil’ AI, and the White House’s war on science
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology. Forcing LLMs to be evil during training can make them nicer in the long run Large language models have recently acquired a reputation for behaving badly. In April, ChatGPT suddenly became an aggressive...