
AI consumes a lot of energy. Hackers could make it consume more.

by
Karen Hao
from MIT Technology Review

The news: A new type of attack could increase the energy consumption of AI systems. In the same way a denial-of-service attack on the internet seeks to clog up a network and make it unusable, the new attack forces a deep neural network to tie up more computational resources than necessary and slow down its "thinking" process.

The target: In recent years, growing concern over the costly energy consumption of large AI models has led researchers to design more efficient neural networks. One category, known as input-adaptive multi-exit architectures, works by splitting up tasks according to how hard they are to solve. It then spends the minimum amount of computational resources needed to solve each.

Say you have a picture of a lion looking straight at the camera with perfect lighting and a picture of a lion crouching in a complex landscape, partly hidden from view. A traditional neural network would pass both photos through all of its layers and spend the same amount of computation to label each. But an input-adaptive multi-exit neural network might pass the first photo through just one layer before reaching the necessary threshold of confidence to call it what it is. This shrinks the model's carbon footprint, but it also improves its speed and allows it to be deployed on small devices like smartphones and smart speakers.
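To make the idea concrete, here is a minimal sketch of what such a multi-exit network can look like, written in PyTorch. The layer sizes, number of exits, class count, and the 0.9 confidence threshold are illustrative assumptions, not the architectures the researchers studied.

```python
# Minimal sketch of an input-adaptive multi-exit classifier (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold
        # Three convolutional "blocks"; each gets its own small classifier head ("exit").
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
        ])
        self.exits = nn.ModuleList([
            nn.Linear(16 * 8 * 8, num_classes),
            nn.Linear(32 * 4 * 4, num_classes),
            nn.Linear(64, num_classes),
        ])

    def forward(self, x):
        # Easy inputs leave at an early exit; hard inputs run through every block.
        # For simplicity this assumes a batch of one image.
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            logits = exit_head(x.flatten(1))
            confidence = F.softmax(logits, dim=1).max(dim=1).values
            if confidence.item() >= self.threshold or i == len(self.blocks) - 1:
                return logits, i  # prediction plus the index of the exit actually used
```

The design choice is the point: a clear photo clears the confidence threshold at the first exit and skips the costlier later blocks, while a cluttered photo keeps going, so compute and energy scale with how hard the input is.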

The attack: But this kind of neural network means that changing the input, such as the image it's fed, changes how much computation the network needs to produce an answer. This opens up a vulnerability that hackers could exploit, as the researchers from the Maryland Cybersecurity Center outlined in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network's inputs, they made it perceive the inputs as more difficult and jack up its computation.
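One simple way to illustrate that idea, as a sketch rather than the researchers' actual method, is an iterative gradient step that nudges the input just enough to push the first exit's confidence below its threshold, so the sample is forced through deeper, costlier layers. The code below reuses the hypothetical MultiExitNet sketched above; the noise budget, step size, and iteration count are arbitrary assumptions.

```python
# Toy illustration of a slowdown perturbation (not the paper's exact attack):
# craft small, bounded noise that *lowers* early-exit confidence.
import torch
import torch.nn.functional as F

def slowdown_perturbation(model, x, epsilon=0.03, steps=10, step_size=0.01):
    """Perturb x so the first exit of the sketched MultiExitNet looks uncertain."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Run only the first block and its exit head: we want *its* prediction
        # to fall below the early-exit confidence threshold.
        feats = model.blocks[0](x_adv)
        logits = model.exits[0](feats.flatten(1))
        # The "loss" is the early exit's top softmax probability; descending on it
        # keeps the sample below the threshold so inference continues deeper.
        confidence = F.softmax(logits, dim=1).max(dim=1).values.mean()
        confidence.backward()
        with torch.no_grad():
            x_adv = x_adv - step_size * x_adv.grad.sign()      # push confidence down
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)   # keep the noise small
        x_adv = x_adv.detach()
    return x_adv
```

Comparing the exit index returned by the sketch model for a clean image and for its perturbed copy would show the effect: the perturbed input travels through more blocks before the network commits to an answer.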

When they assumed the attacker had full information about the neural network, they were able to max out its energy draw. When they assumed the attacker had limited to no information, they were still able to slow down the network's processing and increase energy usage by 20% to 80%. The reason, as the researchers found, is that the attacks transfer well across different types of neural networks. Designing an attack for one image classification system is enough to disrupt many, says Yiğitcan Kaya, a PhD student and paper coauthor.

The caveat: This kind of attack is still somewhat theoretical. Input-adaptive architectures aren't yet commonly used in real-world applications. But the researchers believe this will quickly change, owing to pressure within the industry to deploy lighter-weight neural networks, such as for smart home and other IoT devices. Tudor Dumitraș, the professor who advised the research, says more work is needed to understand the extent to which this kind of threat could create damage. But, he adds, this paper is a first step toward raising awareness: "What's important to me is to bring to people's attention the fact that this is a new threat model, and these kinds of attacks can be done."
