Moderating horror and hate on the web may be beyond even AI | John Naughton
Managing the barrage of upsetting material online is a challenge that service providers are struggling to meet, even if they try
Way back in the mid-1990s, when the web was young and the online world was buzzing with blogs, a worrying problem loomed. If you were an ISP that hosted blogs, and one of them contained material that was illegal or defamatory, you could be held legally responsible and sued into bankruptcy. Fearing that this would dramatically slow the expansion of a vital technology, two US lawmakers, Chris Cox and Ron Wyden, inserted 26 words into the Communications Decency Act of 1996, which eventually became section 230 of the Telecommunications Act of the same year. The words in question were: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The implications were profound: from then on, a platform bore no legal liability for content that its users published.
The result was an exponential increase in user-generated content on the internet. The problem was that some of that content was vile, defamatory or downright horrible. Even so, the hosting site bore no liability for it. At times, some of that content caused public outrage to the point where it became a PR problem for the platforms hosting it, and they began engaging in "moderation".