An (Im)perfect Way Forward On Infrastructure Moderation?

Within every conversation about technology lies the moral question: is a technology good or bad? Or is it neutral? In other words, are our values part of the technologies we create, or is technology valueless until someone decides what to do with it?
This is the kind of dilemma Cloudflare, the Internet infrastructure company, found itself in earlier this year. Following increasing pressure to drop KiwiFarms, a troll site targeting women and minorities, especially LGBTQ people, Cloudflare's CEO, Matthew Prince, and Alissa Starzak, its VP for Public Policy, posted a note stating that "the power to terminate security services for the sites was not a power Cloudflare should hold". Cloudflare was the provider of such security services to KiwiFarms.
Cloudflare's position was an impossible one. On the one hand, Cloudflare, as an infrastructure provider, should not be making any content moderation decisions; on the other, KiwiFarms' existence was putting people's lives in danger. Although Cloudflare is not "like the fire department" as it claims (fire departments are essential for societies to function and feel safe; Cloudflare is not essential for the functioning of the internet, though it does make it more secure), moving content moderation down the internet stack can still have a chilling effect on speech and on the internet itself. At the end of the day, it is services like Cloudflare's that get to determine who is visible on the internet.
Cloudflare ended up terminating KiwiFarms as a customer even though it originally said it wouldn't. In a way, Cloudflare's decision to reverse its own intention placed content moderation at the infrastructure level front and center once again. Now, though, it feels like we are running out of time; I am not sure how much more of this unpredictability and inconsistency can be tolerated before regulators step in.
Personally, the idea of content moderation at the infrastructure level makes me uncomfortable, especially because content moderation would move somewhere that is invisible to most. Fundamentally, I still believe that moving content moderation down to the infrastructure level is dangerous in terms of scale and impact. The Internet should remain agnostic of the data that moves across it, and anyone who facilitates this movement should adhere to this principle. At the very least, this must be the rule. I don't think this will be the priority in any potential regulation.
However, there is another reality that I've grown into: decisions like the one Cloudflare was asked to make have real consequences for real people. In cases like KiwiFarms, inaction feels like aiding and abetting. If there is something someone can do to prevent such reprehensible activity, shouldn't they just go ahead and do it?
That something will be difficult to accept. If content moderation is messy and complex for Facebook and Twitter, imagine what it is like for companies like Cloudflare and AWS. The same problems with speech, human rights and transparency will exist at the infrastructure level; just multiply them by a million. To be fair, infrastructure providers already engage in the removal of websites and services from the internet, and they have policies for doing so. Cloudflare said so itself: "Thousands of times per day we receive calls that we terminate security services based on content that someone reports as offensive. Most of these don't make news. Most of the time these decisions don't conflict with our moral views." Not all infrastructure providers have such policies, though, and, in general, decisions about content removal taking place at the infrastructure level are opaque.
KiwiFarms will happen again. It might not be called that, but it's a matter of time before a similarly disgusting case pops up. We need a way forward, and fast.
So, here's a thought: an "Oversight Board"-type body for infrastructure. This body - let's call it the "Infrastructure Appeals Panel" - would be funded by as many infrastructure providers as possible, and its role would be to scrutinize the decisions infrastructure providers make regarding content. The Panel would need to have a clear mandate and scope and be global, which is important as the decisions made by infrastructure providers affect both issues of speech and the Internet itself. Its rules must be written by infrastructure providers and users, which is perhaps the single most difficult thing. As Evelyn Douek has said, "writing speech rules is hard"; it becomes even harder if one considers the possible chilling effect. And this whole exercise becomes more difficult still if you need to add rules about the impact on the internet. Unlike the decisions social media companies make every day, decisions made at the infrastructure level of the internet can also create unintended consequences for the way it operates.
Building such an external body is not easy, and many things can go wrong. Finding the right answers to questions regarding board member selection, independence, process and values is key to its success. And, although such systems can be arbitrary and abused, history shows they can also be effective. In the Middle Ages, for instance, as international trade was taking shape, itinerant merchants sought to establish a system of adjudication, detached from local sovereign law and able to govern the practices and norms that were emerging at the time. The system of lex mercatoria originated from the need for a system that would be efficient in addressing the needs of merchants and produce decisions carrying value equivalent to decisions reached through traditional means. Currently, content moderation at the infrastructure level is an unchecked system in which players can exercise arbitrary power, a problem further exacerbated by the lack of interest in, or understanding of, what is happening at that level.
Most likely, this idea will not be enough to address all the content moderation issues at the infrastructure level. Additionally, if it is going to have any real chance of being useful, the Panel's design, structure, and implementation, as well as its legitimacy, must be considered a priority. An external panel that is not scoped appropriately or does not have any authority risks creating false accountability; the result is that policymakers get distracted while systemic issues persist. Lessons can be learned from the similar exercise of creating the Oversight Board.
The last immediate point is that this Panel should not be seen as the answer to issues of speech or infrastructure. We should continue to discuss ways of addressing content moderation at the infrastructure level and try to institute the necessary safeguards and reforms around how best to moderate content. There is never going to be a way to create fully consistent policies or to agree on a set of norms. But through the transparency such a panel can provide, we can reach a state where the conversation becomes more focused, driven more by facts and less by emotion.
Konstantinos Komaitis is an internet policy expert and author. His website is at komaitis.org.