
Why we need better defenses against VR cyberattacks

by Melissa Heikkilä, MIT Technology Review

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I remember the first time I tried on a VR headset. It was the first Oculus Rift, and I nearly fainted after experiencing an intense but visually clumsy VR roller-coaster. But that was a decade ago, and the experience has gotten a lot smoother and more realistic since. That impressive level of immersiveness could be a problem, though: it makes us particularly vulnerable to cyberattacks in VR.

I just published a story about a new kind of security vulnerability discovered by researchers at the University of Chicago. Inspired by the Christopher Nolan movie Inception, the attack allows hackers to create an app that injects malicious code into the Meta Quest VR system and then launches a clone of the home screen and apps that looks identical to the user's original screen. Once inside, attackers can see, record, and modify everything the person does with the VR headset, tracking voice, motion, gestures, keystrokes, browsing activity, and even interactions with other people in real time. New fear = unlocked.

The findings are pretty mind-bending, in part because the researchers' unsuspecting test subjects had absolutely no idea they were under attack. You can read more about it in my story here.

It's shocking to see how fragile and insecure these VR systems are, especially considering that Meta's Quest headset is the most popular such product on the market, used by millions of people.

But perhaps more unsettling is how attacks like this can happen without our noticing, and can warp our sense of reality. Past studies have shown how quickly people start treating things in AR or VR as real, says Franzi Roesner, an associate professor of computer science at the University of Washington, who studies security and privacy but was not part of the study. Even in very basic virtual environments, people start stepping around objects as if they were really there.

VR has the potential to put misinformation, deception, and other problematic content on steroids because it exploits people's brains and deceives them physiologically and subconsciously, says Roesner: "The immersion is really powerful."

And because VR technology is relatively new, people aren't vigilantly looking out for security flaws or traps while using it. To test how stealthy the inception attack was, the University of Chicago researchers recruited 27 volunteer VR experts to experience it. One of the participants was Jasmine Lu, a computer science PhD researcher at the University of Chicago. She says she has been using, studying, and working with VR systems regularly since 2017. Despite that, the attack took her and almost all the other participants by surprise.

"As far as I could tell, there was not any difference except a bit of a slower loading time, things that I think most people would just translate as small glitches in the system," says Lu.

One of the fundamental issues people may have to deal with in using VR is whether they can trust what they're seeing, says Roesner.

Lu agrees. She says that with online browsers, we have been trained to recognize what looks legitimate and what doesn't, but with VR, we simply haven't. People do not know what an attack looks like.

This is related to a growing problem we're seeing with the rise of generative AI, even in text, audio, and video: it is notoriously difficult to distinguish real from AI-generated content. The inception attack shows that we need to think of VR as another dimension in a world where it's getting increasingly difficult to know what's real and what's not.

As more people use these systems, and more products enter the market, the onus is on the tech sector to develop ways to make them more secure and trustworthy.

The good news? While VR technologies are commercially available, they're not all that widely used, says Roesner. So there's time to start beefing up defenses now.

Now read the rest of The Algorithm

Deeper Learning

An OpenAI spinoff has built an AI model that helps robots learn tasks like humans

In the summer of 2021, OpenAI quietly shuttered its robotics team, announcing that progress was being stifled by a lack of the data needed to train robots to move and reason using artificial intelligence. Now three of OpenAI's early research scientists say the startup they spun off in 2017, called Covariant, has solved that problem and unveiled a system that combines the reasoning skills of large language models with the physical dexterity of an advanced robot.

Multimodal prompting: The new model, called RFM-1, was trained on years of data collected from Covariant's small fleet of item-picking robots that customers like Crate & Barrel and Bonprix use in warehouses around the world, as well as on words and videos from the internet. Users can prompt the model using five different types of input: text, images, video, robot instructions, and measurements. The company hopes the system will become more capable and efficient as it's deployed in the real world. Read more from James O'Donnell here.

Bits and Bytes

You can now use generative AI to turn your stories into comics
By pulling together several different generative models into an easy-to-use package controlled with the push of a button, Lore Machine heralds the arrival of one-click AI. (MIT Technology Review)

A former Google engineer has been charged with stealing AI trade secrets for Chinese companies
The race to develop ever more powerful AI systems is becoming dirty. A Chinese engineer downloaded confidential files about Google's supercomputing data centers to his personal Google Cloud account while working for Chinese companies. (US Department of Justice)

There's been even more drama in the OpenAI saga
This story truly is the gift that keeps on giving. OpenAI has clapped back at Elon Musk and his lawsuit, which claims the company has betrayed its original mission of doing good for the world, by publishing emails showing that Musk was keen to commercialize OpenAI too. Meanwhile, Sam Altman is back on the OpenAI board after his temporary ouster, and it turns out that chief technology officer Mira Murati played a bigger role in the coup against Altman than initially reported.

A Microsoft whistleblower has warned that the company's AI tool creates violent and sexual images, and ignores copyright
Shane Jones, an engineer who works at Microsoft, says his tests with the company's Copilot Designer gave him concerning and disturbing results. He says the company acknowledged his concerns but did not take the product off the market. Jones then sent a letter explaining these concerns to the Federal Trade Commission, and Microsoft has since started blocking some terms that generated toxic content. (CNBC)

Silicon Valley is pricing academics out of AI research
AI research is eye-wateringly expensive, and Big Tech, with its huge salaries and computing resources, is draining academia of top talent. This has serious implications for the technology, pushing it toward commercial uses over science. (The Washington Post)
