
Xbox moderation team turns to AI for help filtering a flood of user content

by Kyle Orland, Ars Technica

Artist interpretation of the creatures talking about your mom on Xbox Live last night. (credit: Aurich Lawson / Thinkstock)

Anyone who has worked in community moderation knows that finding and removing bad content becomes exponentially tougher as a communications platform reaches into the millions of daily users. To help with that problem, Microsoft says it's turning to AI tools to help "accelerate" its Xbox moderation efforts, letting these systems automatically flag content for human review without needing a player report.

Microsoft's latest Xbox transparency report, the company's third public look at enforcement of its community standards, is the first to include a section on "advancing content moderation and platform safety with AI." That report specifically calls out two tools that the company says "enable us to achieve greater scale, elevate the capabilities of our human moderators, and reduce exposure to sensitive content."

Microsoft says many of its Xbox safety systems are now powered by Community Sift, a moderation tool created by Microsoft subsidiary TwoHat. Among the "billions of human interactions" the Community Sift system has filtered this year are "over 36 million" Xbox player reports in 22 languages, according to the Microsoft report. The Community Sift system evaluates those player reports to see which ones need further attention from a human moderator.
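The triage pattern described above, letting an automated system clear the obvious cases so humans only see the ambiguous ones, can be sketched in a few lines. This is a hypothetical illustration of score-threshold triage, not Community Sift's actual implementation; the thresholds, field names, and `triage` function are all assumptions for the example.

```python
# Hypothetical sketch of score-based report triage (NOT Community Sift's
# actual implementation): a model assigns each player report a score, and
# only ambiguous items are escalated to human moderators.
from dataclasses import dataclass

@dataclass
class PlayerReport:
    content: str
    score: float  # model's estimated probability the content violates policy

AUTO_DISMISS_BELOW = 0.10   # assumed threshold: clearly benign
AUTO_ACTION_ABOVE = 0.95    # assumed threshold: clearly violating

def triage(reports):
    """Split reports into auto-dismissed, auto-actioned, and human-review queues."""
    dismissed, actioned, human_queue = [], [], []
    for r in reports:
        if r.score < AUTO_DISMISS_BELOW:
            dismissed.append(r)       # no human ever sees this one
        elif r.score > AUTO_ACTION_ABOVE:
            actioned.append(r)        # enforcement applied automatically
        else:
            human_queue.append(r)     # ambiguous cases go to a moderator
    return dismissed, actioned, human_queue

reports = [
    PlayerReport("gg well played", 0.02),
    PlayerReport("borderline trash talk", 0.55),
    PlayerReport("clear slur", 0.99),
]
dismissed, actioned, human_queue = triage(reports)
```

The payoff of this design is that human moderators review only the middle band of scores, which is what lets a fixed-size team keep pace with "billions" of interactions.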

