
Undercover in the metaverse

by
Tate Ryan-Mosley
from MIT Technology Review

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

I recently published a story about a new kind of job that's becoming essential at the frontier of the internet: the role of metaverse content cop. Content moderators in the metaverse go undercover into 3D worlds through a VR headset and interact with users to catch bad behavior in real time. It all sounds like a movie, and in some ways it is. But despite looking like a cartoon world, the metaverse is populated by very real people who can do bad things that have to be caught in the moment.

I chatted with Ravi Yekkanti, who works for a third-party content moderation company called WebPurify that provides services to metaverse companies. Ravi moderates these environments and trains others to do the same. He told me he runs into bad behavior every day, but he loves his job and takes pride in how important it is. We get into how his job works in my story this week, but there was so much more fascinating detail to our conversation than I could get into in that format, and I wanted to share the rest of it with you here.

Here's what Ravi had to say, in his own words:

How did you get into this work? What drew you to the job?

I started working in this field in 2014. By now I've looked at more than a billion pieces of content, like texts, images, and videos. Since day one, I have always loved what I did. That might sound odd coming from someone who works in moderation, but I started in the field by reviewing movies, books, and music. It was like an extension of my hobbies.

How does VR content moderation differ from the other type of content moderation work you've done in the past?

The major difference is the experience. VR moderation feels so real. I have reviewed a lot of content, but this is definitely different because you are actually moderating the behavior.

And you are also part of it, so what you do and who you are can trigger bad behavior in another player. I'm Indian with an accent, and this can trigger some kind of bullying behavior from other players. They might come to me, say something nasty, and try to taunt me or bully me based on my ethnicity.

We do not reveal, of course, that we are moderators. We have to maintain our cover, because revealing it might make players cautious and change how they behave.

When you first stepped into VR to moderate, was it scary at all?

Yeah, it definitely feels different. When I put on the VR headset for the very first time in my life, I was awestruck. I had no words to explain the experience. It felt so good. When I started doing moderation in VR and trying out games with other players, it was a little intimidating. It could be because of the language difference, or it could be because you are conscious that you're meeting people who you've never met from all over the world. There is also no such thing as my personal space.

How do you prepare to moderate the metaverse? What are you training a new team member to do?

First, we prepare technically. So we go over our policy to be undercover and act as hosts in the game. We are expected to start conversations, ask other players if they are having a good time, and teach them how to play the game.

The second aspect of preparation is related to mental health. Not all players behave the way you want them to behave. Sometimes people come just to be nasty. We prepare by going over different kinds of scenarios that you can come across and how to best handle them.

We also track everything. We track what game we are playing, which players joined the game, what time we started the game, and what time we ended it. What was the conversation about during the game? Is the player using bad language? Is the player being abusive?

Sometimes we find behavior that is borderline, like someone using a bad word out of frustration. We still track it, because there might be children on the platform. And sometimes the behavior exceeds a certain limit, like if it is becoming too personal, and we have more options for that.
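(For readers who want to picture the bookkeeping Ravi describes, here is a minimal, purely illustrative sketch of what a per-session moderation log with that information might look like. The field names and structure are assumptions for illustration, not WebPurify's or any client's actual tooling.)

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative only: field names are assumptions, not a real moderation schema.
@dataclass
class SessionLog:
    game: str                          # which game was moderated
    players: list[str]                 # players who joined the session
    started_at: datetime               # when the moderator joined
    ended_at: datetime | None = None   # when the session ended
    conversation_notes: str = ""       # what the conversation was about
    used_bad_language: bool = False
    was_abusive: bool = False
    borderline_incidents: list[str] = field(default_factory=list)

    def flag_borderline(self, note: str) -> None:
        """Record borderline behavior, e.g. a bad word used out of frustration."""
        self.borderline_incidents.append(note)
```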

If somebody says something really racist, for example, what are you trained to do?

Well, we create a weekly report based on our tracking and submit it to the client. Depending on the repetition of bad behavior from a player, the client might decide to take some action.

And if the behavior is very bad in real time and breaks the policy guidelines, we have different controls to use. We can mute the player so that no one can hear what he's saying. We can even kick the player out of the game and report the player [to the client] with a recording of what happened.
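(Again purely as illustration: a small sketch of how the real-time controls Ravi mentions might be chosen based on how badly behavior breaks policy. The action names and severity thresholds are invented for this example and are not an actual moderation API.)

```python
from enum import Enum

class ModerationAction(Enum):
    MUTE = "mute"      # no one can hear the player
    KICK = "kick"      # remove the player from the game
    REPORT = "report"  # send a recording of the incident to the client

def escalate(severity: int) -> list[ModerationAction]:
    """Pick real-time actions by severity; thresholds here are hypothetical.

    In practice the policy guidelines and the client decide what happens.
    """
    actions: list[ModerationAction] = []
    if severity >= 1:
        actions.append(ModerationAction.MUTE)
    if severity >= 2:
        actions.append(ModerationAction.KICK)
        actions.append(ModerationAction.REPORT)
    return actions
```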

What do you think is something people don't know about this space that they should?

It's so fun. I still remember that feeling of the first time I put on the VR headset. Not all jobs allow you to play.

And I want everyone to know that it is important. Once, I was reviewing text [not in the metaverse] and got this review from a child that said, "So-and-so person kidnapped me and hid me in the basement. My phone is about to die. Someone please call 911. And he's coming, please help me."

I was skeptical about it. What should I do with it? This is not a platform to ask for help. I sent it to our legal team anyway, and the police went to the location. We got feedback a couple of months later that when police went to that location, they found the boy tied up in the basement with bruises all over his body.

That was a life-changing moment for me personally, because I always thought that this job was just a buffer, something you do before you figure out what you actually want to do. And that's how most people treat this job. But that incident changed my life and made me understand that what I do here actually impacts the real world. I mean, I literally saved a kid. Our team literally saved a kid, and we are all proud. That day, I decided that I should stay in the field and make sure everyone realizes that this is really important.

What I am reading this week
  • Analytics company Palantir has built an AI platform meant to help the military make strategic decisions through a chatbot akin to ChatGPT that can analyze satellite imagery and generate plans of attack. The company has promised it will be done ethically, though ...
  • Twitter's blue-check meltdown is starting to have real-world implications, making it difficult to know what and who to believe on the platform. Misinformation is flourishing: within 24 hours after Twitter removed the previously verified blue checks, at least 11 new accounts began impersonating the Los Angeles Police Department, reports the New York Times.
  • Russia's war on Ukraine turbocharged the downfall of its tech industry, Masha Borak wrote in this great feature for MIT Technology Review published a few weeks ago. The Kremlin's push to regulate and control the information on Yandex suffocated the search engine.
What I learned this week

When users report misinformation online, it may be more useful than previously thought. A new study published in Stanford's Journal of Online Trust and Safety showed that user reports of false news on Facebook and Instagram could be fairly accurate in combating misinformation when sorted by certain characteristics like the type of feedback or content. The study, the first of its kind to quantitatively assess the veracity of user reports of misinformation, signals some optimism that crowdsourced content moderation can be effective.
