Article 6K2D5 Researchers Create AI Worms That Can Spread From One System to Another

Researchers Create AI Worms That Can Spread From One System to Another

by
EditorDavid
from Slashdot on (#6K2D5)
Long-time Slashdot reader Greymane shared this article from Wired:[I]n a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms - which can spread from one system to another, potentially stealing data or deploying malware in the process. "It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before," says Ben Nassi, a Cornell Tech researcher behind the research. Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the Internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages - breaking some security protections in ChatGPT and Gemini in the process...in test environments [and not against a publicly available email assistant]... To create the generative AI worm, the researchers turned to a so-called "adversarial self-replicating prompt." This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies... To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system - by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file. In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which "poisons" the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it "jailbreaks the GenAI service" and ultimately steals data from the emails, Nassi says. "The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client," Nassi says. In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. "By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent," Nassi says. In a video demonstrating the research, the email system can be seen forwarding a message multiple times. The researchers also say they could extract data from emails. "It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential," Nassi says. The researchers reported their findings to Google and OpenAI, according to the article, with OpenAI confirming "They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn't been checked or filtered." OpenAI says they're now working to make their systems "more resilient." Google declined to comment on the research.

twitter_icon_large.pngfacebook_icon_large.png

Read more of this story at Slashdot.

External Content
Source RSS or Atom Feed
Feed Location https://rss.slashdot.org/Slashdot/slashdotMain
Feed Title Slashdot
Feed Link https://slashdot.org/
Feed Copyright Copyright 1997-2016, SlashdotMedia. All Rights Reserved.
Reply 0 comments