
Alex Stamos on the security problems of the platforms' content moderation, and what to do about them

by Cory Doctorow, from Boing Boing (#4F6ZJ)

Alex Stamos (previously) is the former Chief Security Officer of Yahoo and Facebook. I've jokingly called him a "human warrant canary" because it seems that whenever he leaves a job, we later learn that his departure was precipitated by some terrible compromise the company was making -- he says that he prefers to be thought of as "the Forrest Gump of infosec" because whenever there is a terrible geopolitical information warfare crisis, he's in the background, playing ping-pong.

After departing Facebook, Stamos started a new phase of his career as an academic with Stanford's information warfare group, and in that capacity he recently gave a talk called "The Platform Challenge: Balancing Safety, Privacy and Freedom" at the UC Berkeley School of Information's DataEDGE 2019 conference.

The talk is an absolute must-watch: Stamos describes the crisis giant platforms face in balancing anti-abuse work (which benefits from spying on users) against privacy and compliance with government regulation, and how the platforms game those contradictions so that they can do a terrible job on every front while sidestepping blame.

Stamos reveals the internal debate about moderation and bad speech -- harassment, extremist recruiting, disinformation -- at the platforms, and blames the platforms' unwillingness to make that debate public for their crisis of credibility.

Stamos also identifies bigness as a source of the problem, drawing an analogy between Microsoft's security crisis of the late 1990s and early 2000s and Facebook's crisis today. Microsoft's security was terrible, Stamos says, but not necessarily worse than anyone else's; thanks to Microsoft's dominance, though, its stupid mistakes had much wider-ranging consequences than those of its smaller competitors. Likewise, Facebook's moderation policies may be no worse than its competitors', but its mistakes matter far more, because they affect up to 2.3 billion users.

But Stamos stops short of recommending that Facebook be broken up, despite mounting support for this proposition. Instead, Stamos argues that Microsoft's dominance gave it the capital needed for a complete top-to-bottom revamp of its security practices, which paid off and percolated through the industry. In Stamos's account, this was only possible because of the near-total power wielded by founder Bill Gates at the time, so when Gates made security a priority, the company responded. The implication is that the attempts to oust Zuckerberg are misguided, and that Facebook's salvation may be a radical retooling of the sort that can only be undertaken under the kind of unilateral dictatorship that Zuck enjoys at his company.

To Stamos's credit, he does delve into specifics about how Facebook might revamp its offerings to strike a better balance, including a fascinating proposal to let anti-harassment tools work inside encrypted messaging systems without compromising users' privacy. Some of his proposals, though, are pretty alarming, like having the platforms collude on a blacklist of "bad sites" that none of their users would be allowed to link to.
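Stamos doesn't spell out the mechanics in prose here, but for a flavor of how anti-harassment screening can coexist with end-to-end encryption at all, here is a minimal sketch of one common family of approaches -- client-side, on-device classification -- which may or may not match Stamos's actual proposal. The Fernet cipher is a stand-in for a real end-to-end protocol, and the phrase list and the classify_locally and receive functions are hypothetical names invented for this illustration:

from cryptography.fernet import Fernet  # stand-in for a real E2E messaging protocol

# Toy phrase list; a real system would ship an on-device ML model instead.
ABUSIVE_PHRASES = {"kill yourself", "worthless troll"}

def classify_locally(plaintext: str) -> bool:
    """Hypothetical on-device classifier: runs only on the recipient's device."""
    lowered = plaintext.lower()
    return any(phrase in lowered for phrase in ABUSIVE_PHRASES)

def receive(ciphertext: bytes, key: bytes) -> str:
    # Decryption happens on the recipient's device; the server only ever
    # relayed the ciphertext and never saw the plaintext.
    plaintext = Fernet(key).decrypt(ciphertext).decode()
    if classify_locally(plaintext):
        # The flag stays local: nothing goes upstream unless the user
        # explicitly chooses to file a harassment report.
        print("[local warning] possible harassment detected")
    return plaintext

if __name__ == "__main__":
    key = Fernet.generate_key()  # shared only by the two endpoints
    token = Fernet(key).encrypt(b"you worthless troll")
    print(receive(token, key))

The design point is that the anti-abuse logic never requires the platform to hold plaintext or keys; the trade-off is that the classifier (or its phrase list) ships to every device, where a determined abuser can inspect or disable it.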

All that said, this is a completely novel and important contribution to the debate over content moderation at scale, and it's arrived at a moment when the platforms are under unprecedented pressure to block all kinds of content. It's an hour-long video, but that's an hour very well spent.

(via Four Short Links)
