How an undercover content moderator polices the metaverse
When Ravi Yekkanti puts on his headset to go to work, he never knows what the day spent in virtual reality will bring. Who might he meet? Will a child's voice accost him with a racist remark? Will a cartoon try to grab his genitals? He adjusts the extraterrestrial-looking goggles haloing his head as he sits at the desk in his office in Hyderabad, India, and prepares to immerse himself in an "office" full of animated avatars. Yekkanti's job, as he sees it, is to make sure everyone in the metaverse is safe and having a good time, and he takes pride in it.
Yekkanti is at the forefront of a new field, VR and metaverse content moderation. Digital safety in the metaverse has gotten off to a somewhat rocky start, with reports of sexual assaults, bullying, and child grooming. The issue is becoming more urgent with Meta's announcement last week that it is lowering the age minimum for its Horizon Worlds platform from 18 to 13. The announcement also mentioned a slew of features and rules intended to protect younger users. However, someone has to enforce those rules and make sure people aren't getting around the safeguards.
Meta won't say how many content moderators it employs or contracts in Horizon Worlds, or whether the company intends to increase that number with the new age policy. But the change puts a spotlight on those tasked with enforcement in these new online spaces, people like Yekkanti, and how they go about their jobs.
Yekkanti has worked as a moderator and training manager in virtual reality since 2020 and came to the job after doing traditional moderation work on text and images. He is employed by WebPurify, a company that provides content moderation services to internet companies such as Microsoft and Play Lab, and works with a team based in India. His work is mostly done on mainstream platforms, including those owned by Meta, although WebPurify declined to confirm which ones specifically, citing client confidentiality agreements. Meta spokesperson Kate McLaughlin says that Meta Quest doesn't work with WebPurify directly.
A longtime internet enthusiast, Yekkanti says he loves putting on a VR headset, meeting people from all over the world, and giving advice to metaverse creators about how to improve their games and worlds.
He is part of a new class of workers protecting safety in the metaverse as private security agents, interacting with the avatars of very real people to suss out virtual-reality misbehavior. He does not publicly disclose his moderator status. Instead, he works more or less undercover, presenting as an average user to better witness violations.
Because traditional moderation tools, such as AI-enabled filters on certain words, don't translate well to real-time immersive environments, moderators like Yekkanti are the primary way to ensure safety in the digital world, and the work is getting more important every day.
The metaverse's safety problem
The metaverse's safety problem is complex and opaque. Journalists have reported instances of abusive comments, scamming, sexual assaults, and even a kidnapping orchestrated through Meta's Oculus. The biggest immersive platforms, like Roblox and Meta's Horizon Worlds, keep their statistics about bad behavior very hush-hush, but Yekkanti says he encounters reportable transgressions every day.
Meta declined to comment on the record, but did send a list of tools and policies it has in place, and noted it has trained safety specialists within Horizon Worlds. A spokesperson for Roblox says the company "has a team of thousands of moderators who monitor for inappropriate content 24/7 and investigate reports submitted by our community" and also uses machine learning to review text, images, and audio.
To deal with safety issues, tech companies have turned to volunteers and employees like Meta's community guides, undercover moderators like Yekkanti, and, increasingly, platform features that allow users to manage their own safety, like a personal boundary line that keeps other users from getting too close.
"Social media is the building block of the metaverse, and we've got to treat the metaverse as an evolution, like the next step of social media, not totally something detached from it," says Juan Londono, a policy analyst at the Information Technology and Innovation Foundation, a think tank in Washington, DC.
But given the immersive nature of the metaverse, many tools built to deal with the billions of potentially harmful words and images in the two-dimensional web don't work well in VR. Human content moderators are proving to be among the most essential solutions.
Grooming, where adults with predatory intentions try to form trusted relationships with minors, is also a real challenge. When companies don't filter out and prevent this abuse proactively, users are tasked with reporting and catching the bad behavior.
"If a company is relying on users to report potentially traumatic things that have happened to them or potentially dangerous situations, it almost feels too late," says Delara Derakhshani, a privacy lawyer who worked at Meta's Reality Labs until October 2022. "The onus shouldn't be on the children to have to report that by the time any potential trauma or damage is done."
The front line of content moderation
The immersive nature of the metaverse means that rule-breaking behavior is quite literally multi-dimensional and generally needs to be caught in real time. Only a fraction of the issues are reported by users, and not everything that takes place in the real-time environment is captured and saved. Meta, for example, captures interactions on a rolling basis, according to a company spokesperson.
WebPurify, which previously focused on moderation of online text and images, has been offering services for metaverse companies since early last year and recently nabbed Twitter's former head of trust and safety operations, Alex Popken, to help lead the effort.
"We're figuring out how to police VR and AR, which is sort of a new territory because you're really looking at human behavior," says Popken.
WebPurify's employees are on the front line in these new spaces, and racial and sexual comments are common. Yekkanti says one female moderator on his team interacted with a user who understood that she was Indian and offered to marry her in exchange for a cow.
Other incidents are more serious. Another female moderator on Yekkanti's team encountered a user who made highly sexualized and offensive remarks about her vagina. Once, a user approached a moderator and seemingly grabbed their genital area. (The user claimed he was going for a high five.)
Moderators learn detailed company safety policies that outline how to catch and report transgressions. One game Yekkanti works on has a policy that specifies protected categories of people, as defined by characteristics like race, ethnicity, gender, political affiliation, religion, sexual orientation, and refugee status. Yekkanti says that any form of negative comment directed at these protected groups would be considered "hateful." Moderators are trained to respond proportionally, using their own judgment. That could mean muting users who violate policies, removing them from a game, or reporting them to the company.
WebPurify offers its moderators 24/7 access to mental-health counseling, among other resources.
Moderators have to contend with nuanced safety challenges, and it can take a lot of judgment and emotional intelligence to determine whether something is appropriate. Expectations about interpersonal space and physical greetings, for example, vary across cultures and users, and different spaces in the metaverse have different community guidelines.
This all happens undercover, so that users do not change their behavior because they know they are interacting with a moderator. "Catching bad guys is more rewarding than upsetting," says Yekkanti.
Moderation also means defying expectations about user privacy.
"A key part of the job is tracking everything," Yekkanti says. The moderators record everything that happens in the game from the time they join to the time they leave, including conversations between players. It means they often listen in on conversations, even when players are not aware they are being monitored, although WebPurify says it does not listen in on fully private one-on-one conversations.
"If we want platforms to have a super hands-on role with user safety, that might bring about some privacy transgressions that users might not be comfortable with," says Londono.
Meanwhile, some in government have expressed skepticism of Meta's policies. Democratic senators Ed Markey of Massachusetts and Richard Blumenthal of Connecticut wrote a public letter to Mark Zuckerberg asking him to reconsider the move to lower age restrictions and calling out "gaps in the company's understanding of user safety."
Derakhshani, the former Meta lawyer, says we need more transparency about how companies are tackling safety in the metaverse.
"This move to bring in younger audiences: is it to enable the best experiences for young teens? Is it to bring in new audiences as older ones age out? One thing is for sure, though: whatever the reasoning is, the public and regulators really do need assurance that these companies are prepared and have thought this out really carefully," she says. "I'm not sure that we're quite there."
Meanwhile, Yekkanti says he wants people to understand that his job, although it can be a lot of fun, is really important. "We are trying to create, as moderators, a better experience for everyone so they don't have to go through trauma," he says. "We, as moderators, are prepared to take it and are there to protect others. We can be the first line of defense."
Update: This story was updated with additional information from Meta and to clarify WebPurify's monitoring capabilities.