Facebook AI catches 95% of hate speech; company still wants mods back in office
Facebook's Menlo Park, California, headquarters as seen in 2017. (credit: Jason Doiy | Getty Images)
Facebook's software systems are getting better at detecting and blocking hate speech on both the Facebook and Instagram platforms, the company boasted today. The hardest work, however, still has to be done by people, and many of those people warn that the world's biggest social media company is putting them in unsafe working conditions.
About 95 percent of hate speech on Facebook gets caught by algorithms before anyone can report it, Facebook said in its latest community-standards enforcement report. The remaining 5 percent of the roughly 22 million flagged posts in the past quarter were reported by users.
That report is also tracking a new hate-speech metric: prevalence. To measure prevalence, Facebook takes a sample of content views and calculates how often the content being measured, in this case hate speech, appears as a percentage of all viewed content. Between July and September of this year, the figure was between 0.10 percent and 0.11 percent, or about 10 to 11 views of hate speech for every 10,000 content views.
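For illustration, here is a minimal Python sketch of how a prevalence figure like that could be computed from a labeled sample of content views. The sample data and function name are hypothetical and do not reflect Facebook's actual measurement pipeline, which relies on its own sampling and labeling processes.

```python
# Minimal sketch of a prevalence calculation, assuming a random sample of
# content views that has been labeled as hate speech or not. Names and
# data here are illustrative only, not Facebook's actual methodology.

def prevalence(sampled_views: list) -> float:
    """Return the share of sampled views that contained hate speech."""
    if not sampled_views:
        return 0.0
    return sum(sampled_views) / len(sampled_views)

# Example: 10,000 sampled views, 10 of which showed hate speech.
sample = [True] * 10 + [False] * 9_990
rate = prevalence(sample)
print(f"Prevalence: {rate:.2%}")                 # -> Prevalence: 0.10%
print(f"Per 10,000 views: {rate * 10_000:.0f}")  # -> Per 10,000 views: 10
```

At a 0.10 to 0.11 percent prevalence, that works out to roughly 10 to 11 hate-speech views in every 10,000 pieces of content viewed, which is the framing Facebook uses in the report.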