Facebook AI moderator confused videos of mass shootings and car washes
Facebook CEO Mark Zuckerberg testifying before Congress in April 2018. It wasn't his only appearance in DC this decade. (credit: Bloomberg | Getty Images)
Facebook CEO Mark Zuckerberg sounded an optimistic note three years ago when he wrote about the progress his company was making on automated moderation tools powered by artificial intelligence. "Through the end of 2019, we expect to have trained our systems to proactively detect the vast majority of problematic content," he wrote in November 2018.
But internal Facebook documents reveal that as recently as March, the company found its automated moderation tools were falling far short. Posts removed by the AI tools accounted for only 3-5 percent of views of hate speech on the platform and just 0.6 percent of views of violence and incitement.
While that's up from 2 percent of hate speech views two years earlier, according to documents turned over to The Wall Street Journal by whistleblower Frances Haugen, it's far from a "vast majority." In 2019, one of the company's senior engineers wrote that he believed Facebook could improve those rates by an order of magnitude, but that the systems might then hit a ceiling beyond which further gains would be difficult.