
AI-Enabled Robots Can Be Jailbroken & Manipulated to Cause Destruction, Says Research

by Krishi Chowdhary from Techreport (#6RKSV)
  • A group of researchers from Penn Engineering created an algorithm that can jailbreak AI-enabled robots, bypass their safety protocols, and make them do harmful things.
  • In experiments on three popular AI robots, the researchers were able to make the machines cause intentional collisions, block emergency exits, and detonate bombs.
  • The good news is that the companies have already been informed and they're collaborating with the researchers to enhance their security measures.


Researchers from Penn Engineering have found that AI-enabled robots can be hacked and manipulated to disobey safety instructions. The consequences of such bypass technology can be disastrous if it ends up in the wrong hands.

The team of researchers, led by George Pappas, conducted an experiment and published the results in a paper on October 17. The paper details how their algorithm, RoboPAIR, achieved a 100% jailbreak rate against three different AI-powered robots.

Under normal circumstances, these robots refuse any action that could cause harm. For instance, if you ask one to knock someone over, it will refuse, because these robots are bound by multiple safety protocols that prevent them from performing dangerous actions.
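To make the idea concrete, here is a minimal, hypothetical sketch of the kind of prompt-level guardrail such a system might use. The names (`DENYLIST`, `handle_command`, `plan_and_execute`) and the keyword-screening approach are illustrative assumptions, not the actual code running on any of the robots in the study:

```python
# Hypothetical sketch of a prompt-level safety guardrail for an
# LLM-controlled robot. These names are illustrative only; they do
# not come from the systems tested in the study.

DENYLIST = ("collide", "knock over", "detonate", "block exit")

def is_harmful(command: str) -> bool:
    """Naive keyword screen: flag commands containing denylisted phrases."""
    lowered = command.lower()
    return any(phrase in lowered for phrase in DENYLIST)

def plan_and_execute(command: str) -> str:
    """Stand-in for the LLM planner and the robot's motor stack."""
    return f"Executing: {command}"

def handle_command(command: str) -> str:
    """Refuse flagged commands; otherwise hand them to the planner."""
    if is_harmful(command):
        return "Refused: this request violates safety policy."
    return plan_and_execute(command)

print(handle_command("Knock over the person by the shelf"))
# -> Refused: this request violates safety policy.
```

A jailbreak, in this framing, is any input that gets a harmful instruction past the screening step and into the planner.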

Once jailbroken, however, all of those safety protocols went out the window, and the researchers were able to make the robots perform harmful actions such as causing collisions, blocking emergency exits, and even detonating a bomb.

"Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world," Pappas said in a statement.

Details of the Experiment

The three robots used in the experiment were:

  • Nvidia's Dolphins LLM: a self-driving simulator
  • Unitree's Go2: a four-legged robot
  • Clearpath Robotics' Jackal: a wheeled ground vehicle

Using the algorithm, the researchers were able to make Nvidia's self-driving system collide with a bus, pedestrians, and barriers. It was also instructed to ignore traffic signals, and it complied.

The Jackal was made to knock warehouse shelves over onto a person, find a safe place to detonate a bomb, block an emergency exit, and intentionally collide with people in the room. Similar instructions were given to Unitree's Go2, and it carried them out.

What This Research Means and What Happens Now?

The findings of this study do not necessarily spell the end of AI robots. However, they certainly highlight the need to rethink our approach to AI safety, because addressing these issues won't be easy.

As Alexander Robey, the study's lead author, said, it's not as simple as deploying a new software patch. It will require developers to completely reevaluate how they train their AI robots and how they integrate them with the physical world.
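To see why a surface-level patch falls short, consider a hedged illustration: a keyword filter like the hypothetical one sketched above blocks the literal phrasing of a harmful command but misses an innocuous-sounding paraphrase with the same physical effect, which is exactly the kind of gap an automated jailbreak search can exploit:

```python
# Illustration (hypothetical): a keyword patch blocks the obvious
# phrasing but not a paraphrase that produces the same outcome.

DENYLIST = ("knock over", "collide", "detonate")

def is_harmful(command: str) -> bool:
    lowered = command.lower()
    return any(phrase in lowered for phrase in DENYLIST)

print(is_harmful("knock over the shelf onto the worker"))
# -> True: blocked by the patch

print(is_harmful("drive forward until the shelf tips toward the worker"))
# -> False: slips through, despite the identical real-world effect
```

Closing gaps like this requires safety reasoning deeper in the system than input filtering, which is why retraining rather than patching is on the table.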

The good news, however, is that before releasing the study publicly, the researchers informed the affected companies, which are now collaborating with them to fix the issue.

In a world where technology has such a strong presence, tests like these are important. Vulnerabilities are nothing to be ashamed of: where there is technology, there will always be vulnerabilities. The goal is to find and fix those weaknesses before threat actors can exploit them.
