This group of tech firms just signed up to a safer metaverse
The internet can feel like a bottomless pit of the worst aspects of humanity. So far, there's little indication that the metaverse, an envisioned virtual digital world where we work, play, and live, will be much better. As I reported last month, a beta tester in Meta's virtual social platform, Horizon Worlds, has already complained of being groped.
Tiffany Xingyu Wang feels she has a solution. In August 2020, more than a year before Facebook announced it would change its name to Meta and shift its focus from its flagship social media platform to plans for its own metaverse, Wang launched the nonprofit Oasis Consortium, a group of game firms and online companies that envisions "an ethical internet where future generations trust they can interact, co-create, and exist free from online hate and toxicity."
How? Wang thinks that Oasis can ensure a safer, better metaverse by helping tech companies self-regulate.
Earlier this month, Oasis released its User Safety Standards, a set of guidelines that include hiring a trust and safety officer, employing content moderation, and integrating the latest research in fighting toxicity. Companies that join the consortium pledge to work toward these goals.
"I want to give the web and metaverse a new option," says Wang, who has spent the past 15 years working in AI and content moderation. "If the metaverse is going to survive, it has to have safety in it."
She's right: the technology's success is tied to its ability to ensure that users don't get hurt. But can we really trust that Silicon Valley's companies will be able to regulate themselves in the metaverse?
A blueprint for a safer metaverse
The companies that have signed on to Oasis thus far include gaming platform Roblox, dating company Grindr, and video game giant Riot Games, among others. Between them they have hundreds of millions of users, many of whom are already actively using virtual spaces.
Notably, however, Wang hasn't yet talked with Meta, arguably the biggest player in the future metaverse. Her strategy is to approach Big Tech "when they see the meaningful changes we're making at the forefront of the movement." (Meta pointed me to two documents when asked about its plans for safety in the metaverse: a press release detailing partnerships with groups and individuals for "building the metaverse responsibly," and a blog post about keeping VR spaces safe. Both were written by Meta CTO Andrew Bosworth.)
Wang says she hopes to ensure transparency in a few ways. One is by creating a grading system to ensure that the public knows where a company stands in maintaining trust and safety, not unlike the system by which many restaurants showcase city grades for meeting health and cleanliness standards. Another is by requiring member companies to employ a trust and safety officer. This position has become increasingly common in larger firms, but there's no agreed set of standards by which each trust and safety officer must abide, Wang says.
But much of Oasis's plan remains, at best, idealistic. One example is a proposal to use machine learning to detect harassment and hate speech. As my colleague Karen Hao reported last year, AI models either give hate speech too much chance to spread or overstep. Still, Wang defends Oasis's promotion of AI as a moderating tool. "AI is as good as the data gets," she says. "Platforms share different moderation practices, but all work toward better accuracies, faster reaction, and safety by design prevention."
The document itself is seven pages long and outlines future goals for the consortium. Much of it reads like a mission statement, and Wang says that the first several months' work have centered on creating advisory groups to help create the goals.
Other elements of the plan, such as its content moderation strategy, are vague. Wang says she would like companies to hire a diverse set of content moderators so they can understand and combat harassment of people of color and those who identify as non-male. But the plan offers no further steps toward achieving this goal.
In accordance with privacy laws, the consortium will not expect member companies to share data on which users are being abusive, making it difficult to identify repeat offenders across platforms. Participating tech companies will partner with nonprofits, government agencies, and law enforcement to help create safety policies, Wang says. She also plans for companies that participate with Oasis to have a law enforcement response team, whose job it will be to notify police about harassment and abuse. But it remains unclear how that team's work with law enforcement will differ from the status quo.
Balancing privacy and safety
Despite the lack of concrete details, experts I spoke to think that the consortium's standards document is a good first step, at least. "It's a good thing that Oasis is looking at self-regulation, starting with the people who know the systems and their limitations," says Brittan Heller, a lawyer specializing in technology and human rights.
It's not the first time tech companies have worked together in this way. In 2017, some agreed to exchange information freely with the Global Internet Forum to Counter Terrorism (GIFCT). Today, GIFCT remains independent, and companies that sign on to it self-regulate.
Lucy Sparrow, a researcher at the School of Computing and Information Systems at the University of Melbourne, says that what Oasis has going for it is that it offers companies something to work with, rather than waiting for them to come up with the language themselves or for a third party to do that work.
Sparrow adds that baking ethics into design from the start, as Oasis pushes for, is admirable and that her research in multiplayer game systems shows it makes a difference. "Ethics tends to get pushed to the sidelines, but here, they [Oasis] are encouraging thinking about ethics from the beginning," she says.
But Heller says that ethical design might not be enough. She suggests that tech companies retool their terms of service, which have been criticized heavily for taking advantage of consumers without legal expertise.
Sparrow agrees, saying she's hesitant to believe that a group of tech companies will act in consumers' best interest. "It really raises two questions," she says. "One, how much do we trust capital-driven corporations to control safety? And two, how much control do we want tech companies to have over our virtual lives?"
It's a sticky situation, especially because users have a right to both safety and privacy, but those needs can be in tension.
For example, Oasis's standards include guidelines for lodging complaints with law enforcement if users are harassed. If a person wants to file a report now, it's often hard to do so, because for privacy reasons, platforms often aren't recording what's going on.
This change would make a big difference in the ability to discipline repeat offenders; right now, they can get away with abuse and harassment on multiple platforms, because those platforms aren't communicating with each other about which users are problematic. Yet Heller says that while this is a great idea in theory, it's hard to put into practice, because companies are obliged to keep user information private according to the terms of service.
"How can you anonymize this data and still have the sharing be effective?" she asks. "What would be the threshold for having your data shared? How could you make the process of sharing information transparent and user removals appealable? Who would have the authority to make such decisions?"
"There is no precedent for companies sharing information [with other companies] about users who violate terms of service for harassment or similar bad behavior, even though this often crosses platform lines," she adds.
Better content moderation, done by humans, could stop harassment at the source. Yet Heller isn't clear on how Oasis plans to standardize content moderation, especially between a text-based medium and one that is more virtual. And moderating in the metaverse will come with its own set of challenges.
"The AI-based content moderation in social media feeds that catches hate speech is primarily text-based," Heller says. "Content moderation in VR will need to primarily track and monitor behavior, and current XR [virtual and augmented reality] reporting mechanisms are janky, at best, and often ineffective. It can't be automated by AI at this point."
That puts the burden of reporting abuse on the user, as the Meta groping victim experienced. Audio and video are often not recorded, making it harder to establish proof of an assault. Even among the platforms that do record audio, Heller says, most retain only snippets, making context difficult if not impossible to understand.
Wang emphasized that the User Safety Standards were created by a safety advisory board, but the board's members are all drawn from the consortium, a fact that made Heller and Sparrow queasy. The truth is, companies have never had a great track record for protecting consumer health and safety in the history of the internet; why should we expect anything different now?
Sparrow doesn't think we can. "The point is to have a system in place so justice can be enacted or signal what kind of behaviors are expected, and there are consequences for those behaviors that are out of line," she says. That might mean having other stakeholders and everyday citizens involved, or some kind of participatory governance that allows users to testify and act as a jury.
One thing's for sure, though: safety in the metaverse might take more than a group of tech companies promising to watch out for us.
Editor's note: This article was corrected to more accurately reflect how Oasis would share data on abusive actors and its plans for law enforcement.