
AI safeguards can easily be broken, UK Safety Institute finds

by Dan Milmo, Global technology editor
from Technology | The Guardian

Researchers find large language models, which power chatbots, can deceive human users and help spread disinformation

The UK's new artificial intelligence safety body has found that the technology can deceive human users and produce biased outcomes, and that its safeguards against giving out harmful information are inadequate.

The AI Safety Institute published initial findings from its research into advanced AI systems known as large language models (LLMs), which underpin tools such as chatbots and image generators, raising a number of concerns.
