The Invisible Content Cartel That Undermines the Freedom of Expression Online
upstart writes in with an IRC submission for nutherguy:
Every year, millions of images, videos and posts that allegedly contain terrorist or violent extremist content are removed from social media platforms like YouTube, Facebook, or Twitter. A key force behind these takedowns is the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative that seeks to "prevent terrorists and violent extremists from exploiting digital platforms."
[...] Hashes are digital "fingerprints" that companies use to identify and remove content from their platforms. They are essentially unique and allow for easy identification of specific content. When an image is identified as "terrorist content," it is tagged with a hash and entered into a database, allowing any future uploads of the same image to be easily identified.
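To make the "fingerprint" idea concrete, here is a minimal sketch in Python using an ordinary cryptographic hash (SHA-256). In practice, hash-sharing systems of this kind rely on perceptual hashes (such as PDQ or PhotoDNA) that also match re-encoded or lightly edited copies; the file names below are hypothetical and the exact-match logic is only illustrative, not GIFCT's actual implementation.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return a hex digest that uniquely identifies the file's exact bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images or videos need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Identical files always produce identical digests, so a byte-for-byte
# re-upload of a previously flagged image is trivially recognized.
# (File names are hypothetical.)
print(fingerprint("flagged_image.jpg") == fingerprint("reupload_of_same_image.jpg"))
```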
This is exactly what the GIFCT initiative aims to do: share a massive database of alleged 'terrorist' content, contributed voluntarily by member companies, amongst its coalition. The database collects 'hashes', or unique fingerprints, of allegedly terrorist, extremist, or violent content, rather than the content itself. GIFCT members can then use the database to check in real time whether content that users want to upload matches material in the database. While that sounds like an efficient approach to the challenging task of correctly identifying and taking down terrorist content, it also means that a single database may be used to determine what is permissible speech and what is taken down, across the entire Internet.
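At upload time, that shared database amounts to a set-membership check performed by every member platform. The sketch below is an assumption-laden illustration: the hash values, platform names, and allow_upload function are invented for this example and do not reflect GIFCT's actual tooling, but they show how one entry in the shared set gates the same content everywhere at once.

```python
# Hypothetical shared hash set contributed to by member platforms.
shared_hash_db = {
    "9f2c0a41e7b35d68",  # hash contributed by Platform A
    "77be03d1c4f29a55",  # hash contributed by Platform B
}

def allow_upload(content_hash: str, platform: str) -> bool:
    """Block an upload on any member platform whose hash is in the shared set."""
    if content_hash in shared_hash_db:
        print(f"{platform}: upload blocked (hash matched shared database)")
        return False
    print(f"{platform}: upload allowed")
    return True

# A single (possibly mistaken) entry in the shared set suppresses the same
# content on every participating platform.
for platform in ("Platform A", "Platform B", "Platform C"):
    allow_upload("9f2c0a41e7b35d68", platform)
```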
Countless examples have proven that it is very difficult for human reviewers, and impossible for algorithms, to consistently get the nuances of activism, counter-speech, and extremist content itself right. The result is that many instances of legitimate speech are falsely categorized as terrorist content and removed from social media platforms. Due to the proliferation of the GIFCT database, any mistaken classification of a video, picture or post as 'terrorist' content echoes across social media platforms, undermining users' right to free expression on several platforms at once. And that, in turn, can have catastrophic effects on the Internet as a space for memory and documentation.
Read more of this story at SoylentNews.