Can Tech Firms Prevent Violent Videos Circulating on the Internet?
This week New York's attorney general announced they're officially "launching investigations into the social media companies that the Buffalo shooter used to plan, promote, and stream his terror attack." Slashdot reader echo123 points out that Discord confirmed that roughly 30 minutes before the attack a "small group" was invited to join the shooter's server. "None of the people he invited to review his writings appeared to have alerted law enforcement," reports the New York Times, "and the massacre played out much as envisioned."

But meanwhile, another Times article tells a tangentially-related story from 2019 about what ultimately happened to "a partial recording of a livestream by a gunman while he murdered 51 people that day at two mosques in Christchurch, New Zealand":

For more than three years, the video has remained undisturbed on Facebook, cropped to a square and slowed down in parts. About three-quarters of the way through the video, text pops up urging the audience to "Share THIS...." Online writings apparently connected to the 18-year-old man accused of killing 10 people at a Buffalo, New York, grocery store Saturday said that he drew inspiration for a livestreamed attack from the Christchurch shooting. The clip on Facebook - one of dozens that are online, even after years of work to remove them - may have been part of the reason that the Christchurch gunman's tactics were so easy to emulate.

In a search spanning 24 hours this week, The New York Times identified more than 50 clips and online links with the Christchurch gunman's 2019 footage. They were on at least nine platforms and websites, including Reddit, Twitter, Telegram, 4chan and the video site Rumble, according to the Times' review. Three of the videos had been uploaded to Facebook as far back as the day of the killings, according to the Tech Transparency Project, an industry watchdog group, while others were posted as recently as this week.
The clips and links were not difficult to find, even though Facebook, Twitter and other platforms pledged in 2019 to eradicate the footage, pushed partly by public outrage over the incident and by world governments. In the aftermath, tech companies and governments banded together, forming coalitions to crack down on terrorist and violent extremist content online. Yet even as Facebook expunged 4.5 million pieces of content related to the Christchurch attack within six months of the killings, what the Times found this week shows that a mass killer's video has an enduring - and potentially everlasting - afterlife on the internet.

"It is clear some progress has been made since Christchurch, but we also live in a kind of world where these videos will never be scrubbed completely from the internet," said Brian Fishman, a former director of counterterrorism at Facebook who helped lead the effort to identify and remove the Christchurch videos from the site in 2019.... Facebook, which is owned by Meta, said that for every 10,000 views of content on the platform, only an estimated five were of terrorism-related material. Rumble and Reddit said the Christchurch videos violated their rules and they were continuing to remove them. Twitter, 4chan and Telegram did not respond to requests for comment.

For what it's worth, this week CNN also republished an email they'd received in 2016 from 4chan's current owner, Hiroyuki Nishimura. The gist of the email? "If I liked censorship, I would have already done that."

But Slashdot reader Bruce66423 also shares an interesting observation from The Guardian's senior tech reporter about the major tech platforms. "According to Hany Farid, a professor of computer science at UC Berkeley, there is a tech solution to this uniquely tech problem.
Tech companies just aren't financially motivated to invest resources into developing it." Farid's work includes research into robust hashing, a tool that creates a fingerprint for videos that allows platforms to find them and their copies as soon as they are uploaded...

Farid: It's not as hard a problem as the technology sector will have you believe... The core technology to stop redistribution is called "hashing" or "robust hashing" or "perceptual hashing". The basic idea is quite simple: you have a piece of content that is not allowed on your service, either because it violates terms of service, it's illegal or for whatever reason. You reach into that content and extract a digital signature, or a hash as it's called.... That's actually pretty easy to do. We've been able to do this for a long time. The second part is that the signature should be stable even if the content is being modified, when somebody changes say the size or the color or adds text. The last thing is you should be able to extract and compare signatures very quickly.

So if we had a technology that satisfied all of those criteria, Twitch would say, we've identified a terror attack that's being live-streamed. We're going to grab that video. We're going to extract the hash and we are going to share it with the industry. And then every time a video is uploaded with the hash, the signature is compared against this database, which is being updated almost instantaneously. And then you stop the redistribution.

It's a problem of collaboration across the industry and it's a problem of the underlying technology. And if this was the first time it happened, I'd understand. But this is not, this is not the 10th time. It's not the 20th time. I want to emphasize: no technology's going to be perfect. It's battling an inherently adversarial system. But this is not a few things slipping through the cracks.... This is a complete catastrophic failure to contain this material.
And in my opinion, as it was with New Zealand and as it was the one before then, it is inexcusable from a technological standpoint. "These are now trillion-dollar companies we are talking about collectively," Farid points out later. "How is it that their hashing technology is so bad?"
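The workflow Farid describes — extract a signature that survives modification, then compare it quickly against a shared database — can be illustrated with a toy "average hash." This is a hedged sketch, not the algorithm any platform actually ships (production systems such as Facebook's PDQ or Microsoft's PhotoDNA are far more sophisticated); all function names here are hypothetical.

```python
# Toy perceptual hash: fingerprint a grayscale video frame so that
# mild edits (brightness shifts, re-encodes) yield the same signature.

def average_hash(frame, hash_size=8):
    """Downscale a grayscale frame (2D list of 0-255 values) to an
    8x8 grid by block averaging, then set each bit to 1 if its block
    is brighter than the global mean. Returns a 64-bit fingerprint."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // hash_size, w // hash_size
    blocks = []
    for i in range(hash_size):
        for j in range(hash_size):
            total = sum(frame[y][x]
                        for y in range(i * bh, (i + 1) * bh)
                        for x in range(j * bw, (j + 1) * bw))
            blocks.append(total / (bh * bw))
    mean = sum(blocks) / len(blocks)
    bits = 0
    for b in blocks:
        bits = (bits << 1) | (1 if b > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes; a small distance
    against a database entry flags the upload as a likely copy."""
    return bin(a ^ b).count("1")

# Synthetic 64x64 "frame": bright left half, dark right half.
frame = [[200 if x < 32 else 30 for x in range(64)] for y in range(64)]
# A modified copy, globally brightened as a filter or re-encode might do.
brighter = [[min(255, p + 40) for p in row] for row in frame]

# The fingerprint is stable under the modification: the relative
# pattern of bright/dark blocks is unchanged, so the bits match.
print(hamming(average_hash(frame), average_hash(brighter)))  # 0
```

Because the hash encodes each block relative to the frame's own mean brightness, uniform edits cancel out — which is the "stable even if the content is being modified" property Farid names. Real adversaries crop, overlay text, and mirror footage, so deployed systems use much more robust features, but the extract-hash-then-compare pipeline is the same shape.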
Read more of this story at Slashdot.