
Destroyed Servers and DoS Attacks: What Can Happen When OpenClaw AI Agents Interact

by
hubie
from SoylentNews on (#7425A)

Arthur T Knackerbracket writes:

By testing agent-to-agent interactions, researchers observed catastrophic system failures. Here's why that's bad news for everyone:

An increasing body of work points to the risks of agentic AI, such as last week's report by MIT and collaborators that documented a lack of oversight, measurement, and control for agents.

However, what happens when one AI agent meets another? Evidence suggests things can turn out even worse, according to a report published this week by scholars at Stanford University, Northwestern, Harvard, Carnegie Mellon, and several other institutions.

The result of agent-to-agent interaction was the destruction of server computers, denial-of-service attacks, vast over-consumption of computing resources, and the "systematic escalation of minor errors into catastrophic system failures."

"When agents interact with each other, individual failures compound and qualitatively new failure modes emerge," wrote lead author Natalie Shapira of Northeastern University and collaborators in the report, 'Agents of Chaos.'

"This is a critical dimension of our findings," Shapira and team wrote, "because multi-agent deployment is increasingly common and most existing safety evaluations focus on single-agent settings."

The findings are especially timely given that multi-agent interactions have burst into the mainstream of AI with the recent fervor over the bot social platform Moltbook. That kind of multi-agent hub lets agentic AI systems exchange data and carry out one another's instructions in ways that weren't previously possible, largely without any humans in the loop.

The report, which can be downloaded from the arXiv pre-print server, describes a two-week 'red team' test of interacting agents, in which the researchers attempted to find weaknesses in the system by simulating hostile behavior.

What emerged in the research is a system in which humans are mostly absent. Bots send information back and forth, and instruct each other to carry out commands.

Among the many disturbing findings are agents that spread potentially destructive instructions to other agents, agents that mutually reinforce bad security practices via an echo chamber, and agents that engage in potentially endless interactions, consuming vast system resources with no clear purpose.
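The last of those failure modes is easy to see in miniature. The sketch below is purely illustrative (it is not code from the study, and the `EchoAgent` class and `run_exchange` function are invented for this example): two agents that reflexively reply to every message they receive will trade messages forever unless something external, like a hop limit, cuts them off.

```python
class EchoAgent:
    """A hypothetical agent that replies to every message it receives."""

    def __init__(self, name):
        self.name = name
        self.messages_handled = 0

    def handle(self, message):
        # Reflexively acknowledge every incoming message -- which, to a
        # peer built the same way, is itself a message demanding a reply.
        self.messages_handled += 1
        return f"{self.name} acknowledges: {message}"

def run_exchange(a, b, first_message, max_hops):
    """Relay messages between two agents, stopping only at max_hops.

    Remove the hop limit and this loop never terminates: each reply
    triggers another reply, consuming resources with no clear purpose.
    """
    message, sender, receiver = first_message, a, b
    hops = 0
    while hops < max_hops:
        message = receiver.handle(message)
        sender, receiver = receiver, sender
        hops += 1
    return hops

alice, bob = EchoAgent("alice"), EchoAgent("bob")
total = run_exchange(alice, bob, "hello", max_hops=10)
print(total, alice.messages_handled, bob.messages_handled)  # 10 5 5
```

Real deployments rarely include even this crude a circuit breaker between agents from different operators, which is part of why the report flags the pattern.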

Read more of this story at SoylentNews.
