by Niels ten Oever on (#5Q7Z8)
Content moderation frameworks and toothless oversight boards legitimize the concentration of power in the hands of infrastructure providers and platforms. This gives them, and not democratic processes and structures, the discretion to egregiously shape the public debate.

In 1964 Marshall McLuhan wrote that content is a "juicy piece of meat carried by the burglar to distract the watchdog of the mind" (McLuhan 2013). I will argue that today this is more true than ever. If we want to solve the issue of human-rights-violating content, we will need to look at the structures that allow for its production. Therefore, I will argue that "content" is a false category, and that infrastructure is often misunderstood as a largely material object whereas it is a complex assemblage of people, practices, institutions, cultures, and devices. To address the false premises on which the concept of "infrastructural content moderation" is based, I propose an analytical framework that does not separate the context from the content but rather offers an integrative approach to addressing online discourse production.

Aristotle famously wrote that there is no matter without form and no form without matter. Similarly, Bergson said that color does not exist as an abstract category, but only as a quality of a substance. The same holds true for content on the Internet. A Facebook post is something different from a post on TikTok, a blog post, a tweet, or a YouTube comment. One understands these messages differently, just as one understands a sentence spoken in a comedy club differently than one spoken in parliament, and a sentence uttered in a forest differently than one in a theater.

It has taken centuries for legal and social rules for public and private spaces to develop. The Internet is a relatively new space that in practice is largely private, but feels like the world's largest public space. It will take time for rules to sediment for this space. In the development of new rules, one should commence the interrogation of different possibilities with a simple question: Cui bono? Who profits?

Julie Cohen describes in her book 'Between Truth and Power' how the shift in imagery from the Internet as an "electronic superhighway" to a "cloud" should by no means be taken lightly. At least a highway has rules; a cloud has none. In the image that the Internet infrastructure industry has shown us, the Internet infrastructure is a given: a modular space on which things can be built, a neutral platform for economic growth and development that would only suffer from regulation.

But Keller Easterling explains that "infrastructure sets the invisible rules that govern the spaces of our everyday lives" and that "changes to the globalising world are being written, not in the language of law and diplomacy, but rather in the language of infrastructure." She describes the practice of the development, implementation, and operation of these infrastructures as "extrastatecraft," because these powers used to belong to nation states but are now taken up by transnational corporations.

The development, standardization, and implementation of Internet infrastructure is inherently political. Janet Abbate puts it pointedly: "the debate over network protocols illustrates how standards can be politics by other means."
DeNardis's book 'Protocol Politics' furthers the work by Abbate and showcases how "debates over protocols bring[] to light unspoken conflicts of interest." Whereas the work of DeNardis focuses mostly on Internet protocols, she does emphasize that "politics are not external to technical architecture."

When we look at the infrastructure that undergirds the exchange of discourse, we should not see it as a neutral foundation for platforms and services, but rather as a shaping force that has both direct and indirect power. This shaping power is what sets the rules for everything that happens on top of it, which is more influential than the haphazard removal of a particular user or group. This shaping power is deeply entrenched in the standardization and governance bodies where the Internet infrastructure is produced.

Upon interrogation of these standards and governance bodies, one cannot help but notice, as the research by Corinne Cath-Speth shows, that they can be characterized by a laissez-faire approach to technology development and defy any strong accountability measures. This culture is characterized by a libertarian, American, masculine approach that values individualism. It is exactly these qualities that perpetuate the idea that regulation will "break the Internet" and that individual choice and responsibility are the only way forward for the Internet infrastructure.

This attitude is deeply ironic, because for the first half of its existence the Internet was heavily funded by states, and the second half has been characterized by oligopolies. However, this sense of individual engineering pride keeps the status quo intact, which means a continuous exclusion of those who do not want to succumb to this culture, mostly women, people of color, and those from outside of Europe and the United States. This in turn strengthens a network topology that reinforces power structures of dominance and extraction based in the United States and Europe. Submarine cables now cover the whole world, but network traffic still largely centers on Europe and the United States, in maps that very much resemble those of colonial trade routes.

The Internet infrastructure and its standardization and governance regime exist to increase interconnection between transnational corporations, largely based in the United States and Europe. Expanding the data flows to and through these networks is what these networks and their governance are optimized for. This has transformed the Internet from a medium of connection to a medium of extraction. Addressing only the outgrowths of this culture and regime through content moderation would be naive at best, and would legitimize an extractive practice at worst.

Reflections on the practice of content moderation should not solely focus on the content that should, or should not, be moderated, but rather on the structures that incentivize and perpetuate such speech. It is the responsibility of communication infrastructure providers to meaningfully engage with the human rights impact of their actions, and with their chain responsibility. Thus far, hardly any Internet infrastructure provider has done so sufficiently.
The industry's lack of meaningful adoption and integration of the United Nations Guiding Principles on Business and Human Rights is reminiscent of the tobacco industry's opposition to health codes, and its lobbying budgets reflect the same fear of regulation.

Civil society should not be afraid to present strong alternative network ideologies that rely on free association and self-determination by end users. The priority of the networking and content provision industry should be to address problems of inequity and inequality, not to extract more private data to be sold to advertising and surveillance companies (which are anyhow based on flawed premises). The Internet is the public square of the world; we would do well to reimagine it as one. This means that the strongest actors should live up to their responsibilities, rather than wait for civil society to organize itself, demand accountability, and fix their problems for them. Here we can only refer back to Spider-Man: with great power comes great responsibility. It is high time that the Internet infrastructure sector lives up to that.

Niels ten Oever is a postdoctoral researcher with the 'Making the hidden visible' project at the Media Studies department at the University of Amsterdam.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here).
Techdirt
Link: https://www.techdirt.com/
Feed: https://www.techdirt.com/techdirt_rss.xml
Updated: 2025-08-19 08:46
by Mike Masnick on (#5Q7X3)
It's been a while since we've seen a really good response letter to a -- as Ken White likes to call them -- "bumptious" legal threat letter. But here we've got one, courtesy of Ken himself, representing Chad Loder. Loder is a writer who has been calling out propagandist Andy Ngo and The Post Millennial, a propagandist rag that Ngo sometimes writes for. The Post Millennial was apparently sad about that and sent Loder a very silly legal threat:
by Daily Deal on (#5Q7X4)
Replace your old wireless router with the new Meshforce M3 Mesh Wi-Fi System. This flexible dual-band M3 system supports up to 60 devices. Its Wi-Fi coverage provides a seamless connection for up to 4,500 sq. ft. - from your living room to your garage. It's easy to expand the coverage by plugging in more dots to enjoy a better Wi-Fi experience everywhere; the system supports up to 6 dots to build a Wi-Fi system for any home size. Use the My Mesh app to complete the setup of your mesh Wi-Fi system in less than 15 minutes, and manage your connections and guest network right on your iOS or Android mobile devices, at home or remotely, anytime and anyplace. Get one M3 hub and two M3 dots for $146. Use the coupon code VIP40 to get an additional 40% off.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5Q7QJ)
Fifteen years ago, the best example of how out of touch elected officials were regarding the internet was Senator Ted Stevens' infamous "it's a series of tubes" speech (which started out "I just the other day got, an internet was sent by my staff at 10 o'clock in the morning on Friday and I just got it yesterday.") Over the years, this unwillingness of those who put themselves in the position to regulate the internet to actually bother to understand it has become something of an unfortunate running joke. A decade ago, in the midst of the fight over SOPA/PIPA, we pointed out that it's no longer okay for Congress to not know how the internet works. And yet, a decade has passed and things have not gotten much better. Senator Ron Johnson tried to compare the internet to a bridge into a small creek. Senator Orrin Hatch has no clue how Facebook makes money.

And now there's a new addition to the list of examples of totally clueless Senators seeking to regulate something they clearly don't understand. This time it's Senator Richard Blumenthal, who has been grandstanding about how he wants to take on the internet since long before he was elected to the Senate. He created the most cringe-worthy media clip of a politician in a while when trying to press Facebook's head of safety Antigone Davis during a Senate hearing on "grandstanding about how we all hate Facebook" (not the actual subject matter, but close enough).
by Karl Bode on (#5Q7EN)
We've noted for a while that the entirety of DC has a blind spot when it comes to discussing the U.S. broadband problem. As in, U.S. broadband is plagued by regional monopolies that literally pay Congress to pretend the problem isn't happening. That's not an opinion. U.S. broadband is slow, expensive, and patchy, with terrible customer service, due to two clear things: regional monopolization (aka market failure), and state and federal regulatory capture (aka corruption). That the telecom industry employs an entire cottage industry of think tankers, consultants, and policy wonks to pretend this isn't true doesn't change reality.

But notice that when regulators, politicians, and many news outlets discuss the problem, it's usually framed in this nebulous, causation-free way. About 90% of the time, the problem is dubbed the "digital divide." But the cause of this broadband divide is always left utterly nebulous and causation-free. It's almost pathological. Seriously, look at any news story about the "digital divide" in the last three months and try to find one that clearly points out that the direct cause of the problem is regional telecom monopolies and the corruption that protects them. You won't find it.

This phenomenon again showed up this week in a CNET interview with Jessica Rosenworcel, who appears to be the top candidate in the Biden Administration's glacial pursuit of a permanent FCC boss. In the article, CNET talks repeatedly about the U.S. broadband problem without once mentioning that telecom monopolies exist, and are the primary reason U.S. broadband is painfully mediocre:
by Nirit Weiss-Blatt on (#5Q781)
When the New York Times reported Facebook's plan to improve its reputation, the fact that the initiative was called "Project Amplify" wasn't a surprise. "Amplification" is at the core of the Facebook brand, and "amplify the good" is a central concept in its PR playbook.

Amplify the good

Mark Zuckerberg initiated this talking point in 2018. "I think that we have a clear responsibility to make sure that the good is amplified and to do everything we can to mitigate the bad," he said after the Russian election meddling and the killings in Myanmar.

Then, other Facebook executives adopted this notion regardless of the issue at hand. The best example is Adam Mosseri, Head of Instagram.

In July 2019, addressing online bullying, Mosseri said: "Technology isn't inherently good or bad in the first place …. And social media, as a type of technology, is often an amplifier. It's on us to make sure we're amplifying the good and not amplifying the bad."

In January 2021, after the January 6 Capitol attack, Mosseri said: "Social media isn't good or bad, like any technology, it just is. But social media is specifically a great amplifier. It can amplify good and bad. It's our responsibility to make sure that we amplify more good and less bad."

In September 2021, after a week of exposés about Facebook by the WSJ, The Facebook Files, Mosseri was assigned to defend the company once again. "When you connect people, whether it's online or offline, good things can happen and bad things can happen," he said in his opening statement. "I think that what is important is that the industry as a whole tries to understand both those positive and negative outcomes, and do all they can to magnify the positive and to identify and address the negative outcomes."

Mosseri clearly uses the same messaging document, but Facebook's PR template contains more talking points. Facebook also asserts that there have always been bad people or behaviors, and that today's connectivity simply makes them more visible.

A mirror for the ugly

According to the "visibility" narrative, tech platforms simply reflect the beauty and ugliness in the world. Thus, social media is sometimes a cesspool because humanity is sometimes a cesspool.

Mark Zuckerberg has addressed this issue several times, with the main message that it is just human nature. Nick Clegg, VP of Global Affairs and Communications, repeatedly shared the same mindset. "When society is divided and tensions run high, those divisions play out on social media. Platforms like Facebook hold up a mirror to society," he wrote in 2020. "With more than 3 billion people using Facebook's apps every month, everything that is good, bad, misogynist and ugly in our societies will find expression on our platform." "Social media broadly, and messaging apps and technology, are a reflection of humanity," Adam Mosseri repeated. "We communicated offline, and all of a sudden, now we're also communicating online. Because we're communicating online, we can see some of the ugly things we missed before. Some of the great and wonderful things, too."

This "mirror of society" statement has been criticized for being intentionally uncomplicated, because the ability to shape, not merely reflect, people's preferences and behavior is also how Facebook makes money. Therefore, despite Facebook's recurring statements, it is accused of not reflecting but increasing the bad and ugly.

Amplify the bad

"These platforms aren't simply pointing out the existence of these dark corners of humanity," John Paczkowski from BuzzFeed News told me.
"They are amplifying them and broadcasting them. That is different."

After an accumulation of deadly events, such as the Christchurch shooting, Kara Swisher wrote about amplified hate and "murderous intent that leaps off the screen and into real life." She argued that "While this kind of hate has indeed littered the annals of human history since its beginnings, technology has amplified it in a way that has been truly destructive."

It is believed that bad behavior (e.g., disinformation) is induced by the way that tech platforms are designed to maximize engagement. Thus, Facebook's victim-centric approach refuses to acknowledge that perhaps bad actors don't misuse its platform but rather use it as intended ("machine for virality").

Ev Williams, the co-founder of Blogger, Twitter, and Medium, said he now believes that he had failed to appreciate the risks of putting such powerful tools in users' hands with minimal oversight. "One of the things we've seen in the past few years is that technology doesn't just accelerate and amplify human behavior," he wrote. "It creates feedback loops that can fundamentally change the nature of how people interact and societies move (in ways that probably none of us predicted)."

So, things turned toxic in ways that tech founders didn't predict. Should they have foreseen them? According to Mark Zuckerberg, an era of tech optimism led to unintended consequences. "For the first decade, we really focused on all the good that connecting people brings … But it's clear now that we didn't do enough," he said after the Cambridge Analytica scandal. He admitted they didn't think through "how people could use these tools to do harm as well." Several years after the Techlash coverage began, there's a consensus that they needed to "do more" to purposefully deny the ability to abuse them.

One of the reasons it was (and still is) a challenging task is their scale. According to this theme, the growth-at-all-costs approach "blinded" them, and they grew too big to be successfully managed at all. Due to their sheer size, they are always in a game of cat-and-mouse with bad actors. "When you have hundreds of millions of users, it is impossible to keep track of all the ways they are using and abusing your systems," Casey Newton, from the Platformer newsletter, explained in an interview. "They are always playing catch-up with their own messes."

Due to the unprecedented scale at which Facebook operates, it is dependent on algorithms. Then, it claims that any perceived errors result from "algorithms that need tweaking" or "artificial intelligence that needs more training data." But is it just an automation issue? It depends on who you ask.

The algorithms' fault vs. the people who build them or use them

Critics say that machines are only as good as the rules built into them. "Google, Twitter, and Facebook have all regularly shifted the blame to algorithms, but companies write the algorithms, making them responsible for what they churn out."

But platforms tend to avoid this responsibility. When ProPublica revealed that Facebook's algorithms allowed advertisers to target users interested in "How to burn Jews" or "History of why Jews ruin the world," Facebook's response was: the anti-Semitic categories were created by an algorithm rather than by people.

At the same time, Facebook's Nick Clegg argued that human agency should not be removed from the equation.
In a post titled "You and the Algorithm: It Takes Two to Tango," he criticized the dystopian depictions of their algorithms, in which "people are portrayed as powerless victims, robbed of their free will." As if "Humans have become the playthings of manipulative algorithmic systems."

"Consider, for example, the presence of bad and polarizing content on private messaging apps - iMessage, Signal, Telegram, WhatsApp - used by billions of people around the world. None of those apps deploy content or ranking algorithms. It's just humans talking to humans without any machine getting in the way," Clegg wrote. "In many respects, it would be easier to blame everything on algorithms, but there are deeper and more complex societal forces at play. We need to look at ourselves in the mirror and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along."

Fixing the machine vs. the underlying societal problems

Nonetheless, there are various attempts to fix the "broken machine," and some potential fixes are discussed more often than others. One of the loudest calls is for tougher regulation – legislation should be passed to implement reforms. Yet many remain pessimistic about the prospects for policy rules and oversight, because regulators tend not to keep pace with tech developments. Also, there's no silver-bullet solution, and most of the recent proposals are overly simplistic.

"Fixing Silicon Valley's problems requires a scalpel, not an axe," said Dylan Byers. However, tech platforms are faced with a new ecosystem of opposition, including Democrats and Republicans, antitrust theorists, privacy advocates, and European regulators. They all carry axes.

For instance, there are many new proposals to amend Section 230 of the Communications Decency Act. But, as Casey Newton noted, "it won't fix our politics, or our broken media, or our online discourse, and it's disingenuous for politicians to suggest that it would."

When self-regulation is proposed, there is an inherent commercial conflict, since platforms are in the business of making money for their shareholders. Facebook has only acted after problems escalated and caused real damage. For example, only after the mob violence in India (another problem that existed before WhatsApp, and may have been amplified by the app) did the company institute rules to limit WhatsApp's "virality." Other algorithms have been altered in order to keep conspiracy theories and their groups from being highly recommended.

Restoring more human control requires different remedies: from decentralization projects, which seek to shift the ownership of personal data away from Big Tech and back toward users, to media literacy, which seeks to formally educate people of all ages about the way tech systems function, as well as encourage appropriate, healthy uses.

The proposed solutions could certainly be helpful, and they all should be pursued. Unfortunately, they are unlikely to be adequate. We will probably have an easier time fixing algorithms, or the design of our technology, than we will have fixing society, and humanity has to deal with humanity's problems.

Techdirt's Mike Masnick recently addressed the underlying societal problems that need fixing. "What we see - what Facebook and other social media have exposed – is often the consequences of huge societal failings." He mentioned various problems with education, social safety nets, healthcare (especially mental healthcare), income inequality, and corruption.
Masnick concluded we should be trying to come up with better solutions for those issues rather than "insisting that Facebook can make it all go away if only they had a better algorithm or better employees."

We saw that with COVID-19 disinformation. After President Joe Biden blamed Facebook for "killing people," and Facebook responded by saying it is "helping save lives," I argued that this dichotomous debate sucks. Charlie Warzel called it (on his Galaxy Brain newsletter) "an unproductive, false binary of a conversation," and he is absolutely right. Complex issues deserve far more nuance.

I can't think of a more complex issue than tech platforms' impact on society in general, and Facebook's impact in particular. However, we seem to be stuck between the storylines discussed above, of "amplifying the good vs. the bad." It is as if you can only think favorably or negatively about "the machine," and you must pick a side and adhere to its intensified narrative.

Keeping to a single narrative can escalate rhetoric and create an insufficient discussion, as evidenced by a recent Mother Jones article. The "Why Facebook won't stop pushing propaganda" piece describes how a woman tried to become Montevallo's first black mayor and lost. Montevallo is a very small town in Alabama (7,000 people), whose population is two-thirds white. Her loss was blamed on Facebook: the rampant misinformation and rumors about her affected the voting.

While we can't know what got people to vote one way or another, we should consider that racism has been prevalent in places like Alabama for a long time. Facebook was the candidate's primary tool for her campaign, highlighting the good things about her historic nomination. Then, racism was amplified in Facebook's local groups. In the article, the fault was centered on algorithmic amplification, on Facebook's "amplification of the bad." Facebook's argument that it only "reflects the ugly" does not hold true here if the platform also makes that ugliness more robust. Yet the root cause in this case remains the same: racism. Facebook "doing better" and amending its algorithms will not be enough unless we also address the source of the problem. We can and should "do better," as well.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication.
by Timothy Geigner on (#5Q6XV)
We've talked far too many times about how the DMCA takedown processes across internet industries, as they stand, are wide, wide open for abuse. From churches wielding copyright to attempt to silence critics engaging in protected speech, to lawyers using copyright to try to silence critics engaging in protected speech, to freaking political candidates abusing YouTube's DMCA notice process to silence critics engaging in protected speech... well, you get the idea. The point is that we've known for a long, long time that the method by which the country and companies currently enforce copyright law tilts so heavily towards the accuser that it's an obvious avenue for misuse.

And this is an issue created by bad actors big and small. Hell, apparently you cannot even critique a sophomoric prank joke troupe on YouTube without being targeted using copyright law.
by Tim Cushing on (#5Q6RE)
The Los Angeles Police Department has spent years compiling a "gang database." The term "compile" is used loosely, because the LAPD decides people are gang members just because they know gang members, or are related to them, or live in the same buildings, or work near them, or pass through gang-controlled neighborhoods, or go to school with gang members, or just (as non-gang people are wont to do) wear clothes, shoes, and hats. It's ridiculous.

And when that's not "inclusive" enough, LAPD officers fake it. LAPD officers have falsified records to justify unjustifiable stops and searches, something that ultimately resulted in criminal charges against three officers. But even with this wealth of bogus and barely supported information, the gang database (CalGang) still has one glaring omission: the Los Angeles Sheriff's Department.
by Tomiwa Ilori on (#5Q6MK)
There has been a lot of focus on moderation as carried out by platforms—the rules on which social media companies base their decisions about what content remains online. There has, however, been limited attention to how actors other than social media platforms, in this case governments, seek to regulate these platforms.

Focusing on African governments in particular, they carry out this regulation primarily through laws. These laws can be broadly divided into two categories: direct and indirect regulatory laws. The direct regulatory laws can be seen in countries like Ethiopia and, more recently, Nigeria. They are similar to Germany's Network Enforcement Act and France's Online Hate Speech Law, which directly place responsibilities on platforms and require them to remove online hate speech within a specific time, with failure attracting heavy sanctions.

Section 8 of Ethiopia's Hate Speech and Disinformation Prevention and Suppression Proclamation 1185/2020 provides for various responsibilities for social media platforms and actors. These responsibilities include the suppression and prevention of disinformation and hate speech content by social media platforms and a twenty-four-hour window within which such content must be removed from their platforms. It also provides that they should bring their policies in line with the first two responsibilities.

The Proclamation further vests the reporting and public awareness responsibilities on the compliance of social media platforms in the Ethiopian Broadcasting Authority—a body empowered by law to regulate broadcasting services. The Ethiopian Human Rights Commission (EHRC), Ethiopia's National Human Rights Institution (NHRI), also has public awareness responsibilities. But it is the Council of Ministers that is responsible for implementing laws in Ethiopia that may give further guidance on the responsibilities of social media platforms and other private actors.

In Nigeria, the legislative proposal, the Protection from Internet Falsehoods, Manipulation and Other Related Matters bill, is yet to become law. The bill seeks to regulate disinformation and coordinated inauthentic behaviour online. It is similar to Singapore's law, which has been criticised by the current United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression for the threats it poses to online expression and online rights in general.

Major criticisms of these laws include their opacity and the threats they pose to online expression. For example, the Ethiopian law defines hate speech broadly and does not include the contextual factors that must be considered in categorising online speech as hateful. With respect to the Nigerian bill, there are no clear oversight, accountability or transparency systems in place to check the government's unlimited powers to decide what constitutes disinformation.

The indirect regulatory laws are those used by governments, through their telecommunications regulatory agencies, to compel Internet Service Providers (ISPs) to block social media platforms. This type of regulation requires ISPs to block social media platforms based on public emergencies or national interests. What constitutes these emergencies or interests is vague, and in many instances the targets are voices or platforms critical of government policies.

In January 2021, the Ugandan government ordered ISPs to block Facebook, Twitter, WhatsApp, Signal and Viber. The order was issued through the communications regulator.
The order came a day after Facebook's announcement that it would close pro-government accounts sharing disinformation.

In June 2021, the Nigerian government ordered ISPs to block access to Twitter, stating that the latter's activities constituted threats to Nigeria's corporate existence. However, there have been contrary views that the order was the result of both remote and immediate causes. The remote cause was the role Twitter played in connecting and rallying publics during the #EndSARS protests against police brutality, while the immediate cause was attributed to Twitter's deletion of President Muhammadu Buhari's tweet, which referred to the country's civil war, contained veiled threats of violence, and violated Twitter's policies on abusive behaviour.

In May 2021, Ethiopia lifted a block on social media platforms in six locations in the country. Routine shutdowns like these have become common for African governments, and they often occur during elections or major political developments.

On a closer look, the cross-cutting challenge posed by both forms of regulation is the lack of accountability and transparency, especially on the part of governments, in how they enforce these provisions. Social media platforms are also complicit, as there is little or no information on the nature of the pressure they face from these government actors.

Alongside the mainstream debates on how to govern social media platforms, it is time to also consider wider forms of regulation, especially how they manifest outside Western systems and the threats such regulation poses to online expression.

One solution that has been suggested but also severely criticised is the application of international human rights standards to social media regulation. This standard has been argued to be the most preferred because of its customary application across contexts. However, its biggest strength also seems to be its biggest weakness—how does this standard apply in local contexts, given the complexity of governing online speech and the myriad of actors involved?

In order to work towards effective solutions, we will need to re-imagine and re-purpose the traditional governance roles of not only governments and social media platforms, but also ISPs, civil society, and NHRIs. For example, the unchecked powers of most governments to determine what constitutes online harms must be revisited to ensure that there are judicial reviews and human rights impact assessments (HRIAs) of proposed government social media bans.

ISPs must also be encouraged to jump into the fray, choose human rights, and not roll over each time governments make such problematic demands to block social media platforms. For example, they should begin to join other actors like civil society and academia in lobbying for laws and policies that make judicial review and HRIAs requirements before entertaining government requests to block platforms or even content.

The application of international human rights standards to social media regulation is not where the work stops, but where it begins. For a start, the proximate actors involved in social media regulation (governments, social media platforms, private actors, local and international civil society bodies, treaty-making bodies like the United Nations and the African Union, and NHRIs) must come up with a typology of harms as well as of the actors actively involved in such regulation.
To ensure that this addresses the challenges posed by these kinds of regulation, the responsibilities of such actors must be anchored in international human rights standards, but in such a way that these actors actively communicate and collaborate.

Tomiwa Ilori is currently a Doctoral Researcher at the Centre for Human Rights, Faculty of Law, University of Pretoria. He also works as a Researcher for the Expression, Information and Digital Rights Unit of the Centre.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here).
by Leigh Beadon on (#5Q6J8)
As we recently announced, we're celebrating 300 episodes of the Techdirt Podcast with a special live-streamed episode today, an hour from now, at 1pm PT/4pm ET. Original co-hosts Dennis Yang and Hersh Reddy are returning to join Mike for this discussion, and we're also (barring technical issues) allowing our Patreon backers to call in live with questions!

Watch the live stream on YouTube »

If you're not yet a backer but would like to call in, there's still time! Just back us at any level on Patreon and you'll gain access to a Patron-only post there, which contains the link to watch via our podcast recording platform and use the call-in feature.

We're excited to celebrate this milestone with our listeners and supporters, and look forward to seeing you all there!
by Karl Bode on (#5Q6G7)
For years cable TV has been plagued by retrans feuds and carriage disputes that routinely end with users losing access to TV programming they pay for. Basically, broadcasters will demand a rate hike in new content negotiations, the cable TV provider will balk, and then each side blames the other for failing to strike a new agreement on time like reasonable adults. That repeatedly results in content being blacked out for months, without consumers ever getting a refund. After a few months, the two sides strike a new confidential deal, your bill goes up, and nobody much cares how that impacts the end user. Rinse, wash, repeat.

And while the shift to streaming TV has improved a lot about cable TV in general, these annoying feuds have remained. The latest case in point: Comcast NBC Universal is demanding more money from Google for the 14+ channels currently on the company's YouTube TV live streaming platform. Google appears to be balking, resulting in NBC running a bunch of annoying banners on its channels warning about a looming blackout, and directing people to this website blaming Google for not wanting to pay more money for the same content:

In a blog post, Google notes that negotiations are ongoing, but suggests that Comcast isn't being reasonable in negotiations:
by Daily Deal on (#5Q6DH)
The 2021 Google Software Engineering Manager Bundle has 12 courses to help you learn software development. You'll learn about Data Science, Python, C#, Java, and more. Two courses will help you prepare for the CISA and CISM certification exams. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5Q69Z)
Over the last few weeks, the WSJ has run a series of posts generally called "The Facebook Files," which have exposed a variety of internal documents from Facebook that are somewhat embarrassing. I do think some of the reporting is overblown -- and, in rather typical fashion regarding the big news publications and their reporting on Facebook, presents everything in the worst possible light. For example, the report on how internal research showed that Instagram made teen girls feel bad about themselves downplays the fact that the data actually shows a significantly higher percentage of teens indicating that Instagram made them feel better:

But, of course, the WSJ's headline presents it very differently:

None of this is to say that this is okay, or that Facebook shouldn't be trying to figure out ways to minimize people using the sites being made to feel worse about themselves. But the reporting decisions here do raise some questions.

Another one of the articles highlights how Facebook has different rules for different users with regards to content moderation. And, again, on a first pass this sounds really damning:
by Tim Cushing on (#5Q60B)
Not everyone uses an ad-blocker. But most people do. And no matter how much online publications claim ad blocking is the same thing as stealing, it really isn't. If they're bent out of shape about it, it's because they assault users with ads, burying content behind a wall of uncurated virtual salesmen. If it bleeds, it leads, the old saying goes, but now it refers to readers' processing power and data allotments.

Far too many online publications consider processing the check on the ad buy to be the end of their responsibility. But ad servers get hijacked. Other ad companies get purchased by ad pushers with more malleable morals. Everyone collects reams of data on every site visitor. The end user of sites seems to be the last concern for ad brokers and the people who sell to them, so it's no surprise more people are deploying ad blockers, seeing as readers of even supposedly-reputable sites have been hit with malware, spyware, and auto-playing video when just trying to access some content.

Ads can be dangerous. They can compromise systems and hijack browsers. The general public definitely knows this. Enjoy this shade thrown at ad saturation and website design overcompensation:
by Tarah Wheeler on (#5Q5F3)
I was writing something a while ago, and had reason to quote the famous aphorism "Computers are incredibly fast, accurate, and stupid; humans are incredibly slow, inaccurate, and smart." I'll bet you've heard a variation on that quote before, and probably have seen a meme or two with it. It's usually attributed to Einstein.

(wait, who's Tom Asacker?)

But when you're writing a research paper, and you need to add the source citation into Zotero for bibliographic and reference management, you need the actual publication title and date, as well as the author. So I went looking.

About 25 links and a Wiki-hole later, I stumbled over this article by Ben Shoemate, a web architect and developer who'd come across the same problem I had--in 2008.

Ben had sought this quote as well, thinking that with tens of thousands of search results pointing to Einstein as the quote's author, there must be a source somewhere.

Even at NASA's showcase at the conference called Supercomputing 2006, this quote was attributed to Einstein, and then later fact-checked. No one can find out who said it. The closest anyone had gotten was Ben finding a single page (page 691) of a screenshotted article that seemed to have it.

Well, I was sitting in England, in a small room in Oxford to be precise, in lockdown, a short walk from the most extensive library on earth since the Library of Alexandria first smelled smoke. I had plenty of things better to do but a bee in my bonnet, nonetheless. I started trying to track down the source article Ben had mentioned; it was something from the Instrument Society of America. However, while I was a short walk from the Bodleian, the lockdown meant no one could go inside. Oddly, that turned out to be a boon for this little quest. Because most university libraries in the world right now are cooperating with each other to an extraordinary extent, I was able to talk the librarian at the Bod into calling whichever university would have the article I needed--the one surrounding page 691.

It turned into a combination of Telephone and Who's On First.
by Copia Institute on (#5Q5A6)
Summary: Content moderation questions can come from all sorts of unexpected places — including custom soda bottle labels. Over the years, Coca-Cola has experimented with a variety of different promotional efforts regarding more customized cans and bottles, and not without controversy. Back in 2013, as part of its "Share a Coke" campaign, the company offered bottles with common first names on the labels, which angered some who felt left out. In Israel, for example, people noticed that Arabic names were left off the list, although Coca-Cola's Swedish operation said that this decision was made after the local Muslim community asked not to have their names included.

This controversy was only the preamble to a bigger one in the summer of 2021, when Coca-Cola began its latest version of the "Share a Coke" effort — this time allowing anyone to create a completely custom label up to 36 characters long. Opening up custom labels immediately raised content moderation questions.

Some people quickly noticed that some surprising terms and phrases were blocked (such as "Black Lives Matter") while others surprisingly were not (like "Nazis").

As CNN reporter Alexis Benveniste noted, it was easy to get offensive terms through the blocks (often with a few tweaks), and there were some eye-opening contrasts:
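That evasion pattern (blocked terms slipping through after minor tweaks) is what you would expect from a simple exact-match blocklist. As a purely illustrative sketch, not Coca-Cola's actual filter, and with invented blocklist entries and function names, the behavior looks roughly like this in Python:

```python
# Hypothetical illustration only: a naive exact-substring blocklist of the
# kind the custom-label tool appeared to use, and why trivial tweaks evade it.
BLOCKED_TERMS = {"black lives matter", "nazis"}  # invented entries for the example

def label_allowed(text: str) -> bool:
    """Reject a label only if it literally contains a blocked term."""
    normalized = text.lower()
    return not any(term in normalized for term in BLOCKED_TERMS)

print(label_allowed("Black Lives Matter"))   # False: the exact phrase is caught
print(label_allowed("Black Lives Mattter"))  # True: one extra letter slips through
print(label_allowed("N a z i s"))            # True: spacing slips through
```

A real system would presumably need normalization (collapsing spaces and repeated letters), fuzzier matching, and human review on top, which is roughly the gap the reporting highlighted.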
by Tim Cushing on (#5Q563)
Just a few days ago, Clearview -- the company that scrapes the web to build a facial recognition database it sells to law enforcement, government agencies around the world, and a number of private parties -- decided to make itself even less likable.

It decided to subpoena transparency activists Open The Government, demanding copies of all the FOIA requests the group had made requesting info about Clearview. It also, more disturbingly, demanded copies of OTG's communications with journalists, clearly indicating it felt First Amendment protections were something it should enjoy, but shouldn't be extended to its critics.

It really wasn't a step Clearview needed to take… for several reasons. First of all, Clearview's reputation is pure dog shit at the moment. It's unlikely to improve unless the company pulls the plug on its product and disbands. A move like this only earns it more (deserved) disdain and hatred. What it's not going to do -- even if successful -- is deter future criticism of the business and its scraped-together facial recognition product.

Second, OTG was not a party to the lawsuit. Clearview has no right to demand these documents from a non-party, especially communications between it and journalists… journalists who are also not a party to the lawsuit Clearview is facing.

Third, if Clearview wanted copies of OTG's FOIA requests, all it had to do was visit OTG's MuckRock page and download all of its publicly accessible requests and responses.

There's some good news, though. Shortly after having shot itself in the face, Clearview had second thoughts about the self-inflicted wound it had just sustained. Here's Alexandra Levine for Politico.
by Christian Dawson on (#5Q51T)
More than ever, the Internet powers much of our daily life. From staying in touch with friends and family to our work, healthcare, banking, and education, we rely on it and we take for granted that it will always be there.

But the way that the Internet was built and how it functions were never a fait accompli. An obscure statute—Section 230, a law enacted more than twenty-five years ago—is core to the Internet as we know it today. It's also frequently misunderstood. In recent years, many critics of Big Tech from across the political spectrum have pointed to Section 230 as the enabling force for a litany of harmful online content and abuses, and some of them contend that its abolition would immediately lead to a better Internet.

In reality, Section 230 provides a wide range of non-Big Tech actors, including Internet intermediaries, limited immunity allowing them to operate without worrying about liability stemming from content created by others. This legal protection catalyzes and supports the growth of an amazing, vast array of innovative companies that make the Internet what it is today.

There has been ample discussion already of the fundamentals of Section 230, some of it right here in the pages of Techdirt, so it would not be useful for me to go through it all once more. I think it is essential first to clearly identify and describe the key elements of what we collectively call "the Internet," explaining where and how infrastructure companies fit in, before quickly touching on what Section 230 is and debunking 3 pernicious and persistent myths about it. I will close by giving you 6 real examples of activities that are actually protected by Section 230 and demonstrate why this law is so vital.

The Internet and its infrastructure

The Internet as we know it can be broken down into three sectors: the transmission sector, the infrastructure sector, and the content sector.
by Karl Bode on (#5Q4Z9)
Last year we noted how the calls to ban TikTok didn't make a whole lot of sense. For one thing, a flood of researchers have shown that TikTok is doing all the same things as many other foreign and domestic adtech-linked services we seem intent to...do absolutely nothing about.

Secondly, the majority of the most vocal pearl-clutchers over the app (Josh Hawley, etc.) haven't cared a whit about things like consumer privacy or internet security, highlighting how the yearlong TikTok freak out was more about performative politics than policy. The wireless industry SS7 flaw? US cellular location data scandals? The rampant lack of any privacy or security standards in the internet of things? The need for election security funding?

Most of the folks who spent last year hyperventilating about TikTok haven't made so much as a peep on these other subjects. Either you actually care about consumer privacy and internet security or you don't, and a huge swath of those hyperventilating about TikTok have been utterly absent from the broader conversation. In fact, many of them have done everything in their power to scuttle any effort to have even modest privacy guidelines for the internet era, and fought every effort to improve and properly fund election security. Again, that's because for many it's more about politics than serious, adult tech policy.

After Trump Inc proposed banning TikTok, you'll recall the administration came up with another dumb idea. Basically, they suggested selling ByteDance-owned TikTok to Trump allies over at Oracle and Walmart. It was just glorified cronyism, though for whatever reason a lot of the press and policy circles seriously and meaningfully analyzed the move as if it was anything else. It wasn't, and quickly fell apart like the dumb house of cards it was.

At one point Microsoft was tossed around as a potential suitor for TikTok as well. And in conversations this week with Kara Swisher, Microsoft CEO Satya Nadella confirmed the whole TikTok tapdance last year was every bit as stupid as we assumed it was. He's diplomatic about it, but Nadella notes how Trump's public posturing about TikTok wasn't backed by, well, anything:
by Daily Deal on (#5Q4ZA)
Spreeder is an eReader that uses RSVP (rapid serial visual presentation) technology to let you speed read any digital content by reducing eye movement, on your iPhone, iPad, Android, Mac, and PC. Easily read at 3 or more times your normal speed, and save valuable time. Rather than simply giving you the software activities and leaving you on your own (as older programs do), world-leading experts guide you at every step of the way. It's like having the world's best speed reading instructors and technology right in the room with you. It's on sale for $39. Use the code VIP40 for an additional 40% off.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Tim Cushing on (#5Q4S9)
Wisconsin is apparently America's Karen.
by Eric Goldman on (#5Q4GE)
We've already posted Mike's post about the problems with the SHOP SAFE Act that is getting marked up today, as well as Cathy's lamenting the lack of Congressional concern for what they're damaging, but Prof. Eric Goldman wrote such a thorough and complete breakdown of the problems with the bill that we decided that was worth posting too.

[Note: this blog post covers Rep. Nadler's manager's amendment for the SHOP SAFE Act, which I think will be the basis of a committee markup hearing today. If Congress were well-functioning, draft bills going into markup would be circulated a reasonable time before the hearing, so that we can properly analyze them on a non-rush basis, and clearly marked as the discussion version so that we're not confused by which version is actually the current text.]

The SHOP SAFE Act seeks to curb harmful counterfeit items sold through online marketplaces. That's a laudable goal that I expect everyone supports. However, this bill is itself a giant counterfeit. It claims to focus on "counterfeits" that could harm consumer "health and safety," but those are both lies designed to make the bill seem narrower and more balanced than it actually is.

Instead of protecting consumers, this bill gives trademark owners absolute control over online marketplaces by overturning Tiffany v. eBay. It creates a new statutory species of contributory trademark liability that applies to online marketplaces (defined more broadly than you think) selling third-party items that bear counterfeit marks and implicate "health and safety" (defined more broadly than you think), unless the online marketplace operator does the impossible and successfully navigates over a dozen onerous and expensive compliance obligations.

Because the bill makes it impossible for online marketplaces to avoid contributory trademark liability, this bill will drive most or all online marketplaces out of the industry. (Another possibility is that Amazon will be the only player able to comply with the law, in which case the law entrenches an insurmountable competitive moat around Amazon's marketplace). If you want online marketplaces gone, you might view this as a good outcome. For the rest of us, the SHOP SAFE Act will reduce our marketplace choices, and increase our costs, during a pandemic shutdown when online commerce has become even more crucial. In other words, the law will produce outcomes that are the direct opposite of what we want from Congress.

In addition to destroying online marketplaces, this bill provides the template for how rightsowners want to reform the DMCA online safe harbor to make it functionally impossible to qualify for as well. In this respect, the SHOP SAFE Act portends how Congress will accelerate the end of the Web 2.0 era of user-generated content.

[The rest of this post is 4k+ words explaining what the bill does and why it sucks. You might stop reading here if you don't want the gory/nerdy details.]

Who's Covered by the Bill

The bill defines an "electronic commerce platform" as "any electronically accessed platform that includes publicly interactive features that allow for arranging the sale or purchase of goods, or that enables a person other than an operator of the platform to sell or offer to sell physical goods to consumers located in the United States."

Clearly, the second part of that definition targets Amazon and other major marketplaces, such as eBay, Walmart Marketplace, and Etsy.
I presume it also includes print-on-demand vendors that enable users to upload images, such as CafePress, Zazzle, and Redbubble (unless those vendors are considered to be retailers, not online marketplaces).

The first part of the definition includes services with "publicly interactive features that allow for arranging the sale or purchase of goods." This is a bizarre way to describe any online marketplace, and it covers something other than enabling third-party sellers (that's the second part of the definition), so what services does this describe? Read literally, all advertising "allow[s] for arranging the sale or purchase of goods," so this law potentially obligates every ad-supported publisher to undertake the content moderation obligations the bill imposes on online marketplaces. That doesn't make sense, because the bill uses the undefined term "listing" 11 times, and display advertising isn't normally considered to be a listing. Still, this wording is unusual and broad — and you better believe trademark owners like its breadth. If the bill wasn't meant to regulate all ads, the bill drafters should make that clear.

Like most Internet regulations nowadays, the bill distinguishes entities based on size. See my article with Jess Miers on how legislatures should do that properly. The bill applies to services that have "sales on the platform in the previous calendar year of not less than $500,000." Some problems with this distinction:
by Cathy Gellis on (#5Q4AC)
As Congress takes up yet another ill-considered bill to deliberately create more risk of liability for Internet services, it is worth remembering something President Kennedy once said:
by Timothy Geigner on (#5Q40D)
The manner in which content producers generally, and video game publishers specifically, handle art and content created by their biggest fans varies wildly. There are the Nintendos of the world, where strict control over all things IP is favored over allowing fans to do much of anything with their properties. Other gaming companies at least allow fans to do some things with their properties, such as making let's play videos and that sort of thing. Still other gaming companies, like Square, have managed to let fans do some large and amazing projects with their IP.

And then there is Chinese gaming studio miHoYo, makers of the hit title Genshin Impact, where the studio doesn't just allow fans to make their own art and merchandise... but also flat-out tells them that they can go sell it, too.
by Mike Masnick on (#5Q3XQ)
Support our crowdfunded paper exploring the NFT phenomenon »

Last week we announced that we wanted to write a paper exploring the NFT phenomenon, and specifically what it meant with regards to the economics around scarce and infinitely available goods. To run this crowdfund, we're testing out a cool platform called Mirror that lets us mix crowdfunding and NFTs as part of the process (similarly, we're now experimenting with NFTs with our Plagiarism by Techdirt collection).

We were overwhelmed by the support for the paper, which surpassed what we expected. The "podium" feature -- which gave special NFTs to our three biggest backers -- has closed with the winners being declared, but the rest of the crowdfund will remain open until this Thursday evening. We also offered up a special "Protocols, Not Platforms" NFT for the first 15 people who backed us at 1 ETH or above. So far, ten of those have been claimed, but five remain.

If anyone is interested in supporting this paper and our work exploring scarcity and abundance, please check it out.
by Glyn Moody on (#5Q3TQ)
When modern copyright came into existence in 1710, it gave a monopoly to authors for just 14 years, with the option to extend it for another 14. Today, in most parts of the world, copyright term is the life of the creator, plus 70 years. That's typically over a hundred years. The main rationale for this copyright ratchet – always increasing the duration of the monopoly, never reducing it – is that creators deserve to receive more benefit from their work. Of course, when copyright extends beyond their death, that argument is pretty ridiculous, since they don't receive any benefit personally.

But the real scandal is not so much that creators' grandchildren gain these windfalls – arguably something that grandpa and grandma might approve of. It's that most of the benefit of copyright goes to the companies that creative people need to work with – the publishers, recording companies, film studios, etc.

One of the cleverest moves by the copyright industry was to claim to speak for the very people it exploits most brutally. This allows its lobbyists to paint a refusal to extend copyright, or to make its enforcement harsher, as spitting in the face of struggling artists. It's a hard argument to counter, unless you know the facts: that few creators can make a living from copyright income alone. Meanwhile, copyright companies prosper mightily: some publishers enjoy 40% profit margins thanks to the creativity of others.

By claiming to represent artists, copyright companies can also justify setting up costly new organisations that will supposedly channel more money to creators. In fact, as later blog posts will reveal, collecting societies have a record of spending the money they receive on fat salaries and outrageous perks for the people who run them. In the end, very little goes to the artists they are supposed to serve.

EurActiv has a report about an interesting new copyright organization:
by Leigh Beadon on (#5Q3PM)
Disinformation continues to be a major topic of discussion across many fields, but a lot of what people believe about the subject is... questionable at best. One of the more thoughtful writers on the subject is Joe Bernstein from Buzzfeed News, whose recent cover story in Harper's brings a very different and valuable perspective to the debate. This week, he joins us on the podcast to discuss the glut of misconceptions and misinformation about disinformation.

Additionally, as we recently announced, we'll be celebrating our upcoming 300th episode of the podcast with a live stream featuring the return of the original co-hosts Dennis Yang and Hersh Reddy, including (hopefully, barring technical issues) the ability for viewers who back our Patreon to call in live and ask questions. The stream will happen on Thursday, September 30th at 1pm PT/4pm ET — stay tuned for more details on how you can watch the stream, and be sure to back our Patreon if you want a chance to call in!

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
by Austin Ruckstuhl on (#5Q3J1)
Content moderation is a can of worms. For Internet infrastructure intermediaries, it's a can of worms that they are particularly poorly positioned to tackle. And yet Internet infrastructure elements are increasingly being called on to moderate content—content they may have very little insight into as it passes through their systems.

The vast majority of all content moderation happens on the "top" layer of the internet—such as social media and websites, the places online that are most visible to an average user. Platforms and applications moderate the content that gets posted on their platforms every day. If a post violates a platform's terms of service, the post is usually blocked or taken down. If a user continues to post content that violates a platform's terms, then the user's account is often suspended. These types of content moderation practices are increasingly understood by average Internet users.

Less often discussed or understood are the services provided by actors in the Internet ecosystem that support, and sit beneath, the upper content layers of the Internet. Many of these companies host content, supply cloud services, register domain names, provide web security, and offer many more of what could be described as the plumbing services of the Internet. But instead of water and sewage, the Internet deals in digital information. In theory, these "infrastructure intermediaries" could moderate content, but for reasons of convention, legitimacy, and practicality they don't usually do it on purpose.

However, some notable recent exceptions may be setting a precedent. Amazon Web Services removed Wikileaks from its system in 2010. Cloudflare kicked off the Daily Stormer. An Italian court ordered Cloudflare to remove a copyright-infringing site. Amazon suspended hosting for Parler.

What does all this mean? Infrastructure may have the means to perform "content moderation," but it is critical to consider the effects of this trend to prevent harming the Internet's underlying architecture.

In principle, Internet service providers, registries, cloud providers and other infrastructure intermediaries should be agnostic to the content which passes over their systems. Their business models have nothing to do with whether one is sending text, audio or video. Instead, they are meant to act as neutral intermediaries, providing a reliable service. In a sense, they operate the plumbing system that delivers the water. While we might engage a plumber to inspect and repair our pumps, do we feel comfortable relying on the plumber to check the quality of the water every minute of every day? Should the plumber be able to shut off water access indefinitely with no oversight?

Despite this, big companies have made decisions to moderate content that is clearly out of their scope as infrastructure intermediaries. This raises the question: why? Were these actions taken to uphold some sort of moral authority, or were they primarily about public perception and business interests? How comfortable are we with these types of companies "regulating" content in the absence of—or even at the behest of—governmental regulation?

If these companies add content moderation to their responsibilities, it takes away the time and resources they can dedicate to security, reliability, and new features, some of which may even help address the underlying reasons for wanting to moderate content.
And while large companies may have the means, it adds an additional role outside of their original purview or mission, one that would be costly or unattainable for most startups or smaller companies.

As pressure mounts from public opinion, regulators, and courts, we should recognize what is happening and properly understand where problems can best be addressed, and which problems we don't know enough about to warrant messing with the plumbing of the Internet just yet. Moreover, we should be wary of any regulation which may turn to infrastructure intermediaries explicitly to moderate content.

Asking an infrastructure intermediary to moderate content would be like asking the waiter to cook the meal, the pilot to repair the plane, or the police officer to serve as the judge. Even if it were possible, we must ask whether it is truly an acceptable approach.

The Internet is often referred to as a layered architecture because it is composed of different types of infrastructure and computing entities. Expecting each of them to moderate content indiscriminately would be problematic. Who would they be accountable to?

A core idea often proposed is that content moderation should occur at the highest available layer, closest to the user. Some even argue that content moderation below this, in the realm of infrastructure, is more problematic because these companies cannot easily moderate a single content item. Infrastructure needs to work at scale, and moderating a single piece of content may mean effectively turning off a water main to fix a dripping faucet. That is, infrastructure companies often have to paint with a broader brush by removing an entire user or an entity's access to their service (a short sketch at the end of this post makes that difference in granularity concrete).

These broad strokes of moderation are often deep and wide in their effect, and critics argue they go too far. Losing access to a system is clearly more final than having a single item removed from a system. Georgia Evans summarized the problem well, saying "the deeper into the stack a company is situated, the less precise and more extreme their actions against harmful content are." For this reason, Corinne Cath refers to them as reluctant sheriffs and political gatekeepers. These are important complexities which must be woven into any understanding of deep-layer moderation by Internet infrastructure companies and policymakers.

The tech community and policymakers must ensure that no policy proposal unintentionally expands the plumber's legal role to include quality assurance and access determination. In the realm of the Internet, certain actors have certain functions, and things work in a modular, interoperable way by design. The beauty of the Internet is that no one company or entity must "do it all" to achieve a better Internet. But we must also ensure that new demands for additional functionality—e.g., moderation—are situated at the right layer and target the party with the expertise and role most likely to do a careful job.

Policymakers must consider the unintended impacts of content moderation proposals on infrastructure intermediaries. Legislating without due diligence to understand the impact on the unique role of these intermediaries could be detrimental to the success of the Internet, and to the growing portion of the global economy that relies on Internet infrastructure for daily life and work.

Conducting impact assessments prior to regulation is one way to mitigate the risks.
The Internet Society created the Internet Impact Assessment Toolkit to help policymakers and communities assess the implications of change—whether those are policy interventions or new technologies.

Policy changes that impact the different layers of the Internet are inevitable. But we must all ensure that these policies are well crafted and properly scoped to keep the Internet working and successful for everyone.

Austin Ruckstuhl is a Project & Policy Advisor at the Internet Society, where he works on Internet impact assessments, defending encryption, and supporting Community Networks as access solutions.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we'll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.
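To make the granularity gap described above concrete, here is a minimal, hypothetical sketch (the data structures and function names are invented for illustration and are not drawn from any provider's real tooling). An application-layer service can remove one offending item and leave everything else in place; an infrastructure-layer action, such as a host terminating an account or a registrar suspending a domain, takes everything attached to that customer offline at once.

```python
# Toy illustration of moderation granularity at different layers.
# All names and data are hypothetical; this is not any provider's real API.

posts = {
    "example.com": ["post-1", "post-2 (violates policy)", "post-3"],
    "another-site.org": ["post-a", "post-b"],
}

def application_layer_takedown(domain: str, post_id: str) -> None:
    """An application can remove a single offending item and leave the rest up."""
    posts[domain] = [p for p in posts[domain] if p != post_id]

def infrastructure_layer_takedown(domain: str) -> None:
    """A host or registrar acts on the whole customer: every post, page,
    and mailbox behind the domain disappears together."""
    posts.pop(domain, None)

application_layer_takedown("example.com", "post-2 (violates policy)")
print(posts["example.com"])          # ['post-1', 'post-3'] -- the dripping faucet

infrastructure_layer_takedown("another-site.org")
print("another-site.org" in posts)   # False -- the water main
```

The point is not that either operation is hard to perform; it is that the lower-layer operation has no way to distinguish the offending item from everything else the customer runs.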
by Tim Cushing on (#5Q3F8)
Being consistent is hard. Just ask John Stossel, libertarian news commentator and self-proclaimed supporter of free markets and deregulation.

Here's John touting the power of free markets to route around perceived "censorship" by platforms engaging in moderation:
by Daily Deal on (#5Q3F9)
The Complete NFT And Cryptocurrency Masterclass Bundle has 6 courses to help you learn all you need to know to create your own NFTs and how to trade cryptocurrency. You'll gain a strong understanding of the NFT world and how NFTs work. You'll also learn some of the most popular methods that you can use to start earning passive income from cryptocurrency. It's on sale for $30. Use the code VIP40 for an additional 40% off.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Leigh Beadon on (#5Q3BX)
As you may know, we're fast approaching our 300th episode of the podcast, and to celebrate we're bringing back the original co-hosts, Dennis Yang and Hersh Reddy, to join Mike Masnick for a special live-streamed episode this Thursday, September 30th at 1pm PT / 4pm ET.

Stay tuned on Thursday morning when we'll be sharing a link to the YouTube live stream here on the blog and on Twitter. But, for our backers on Patreon, we're also testing out a new feature that will allow you to call in live and talk to the hosts! If you're already a backer, you can find the link to join via our recording and call-in platform on Patreon in your message inbox and in a backers-only post on our page. If not, now's the time to become a patron and get access to this and other special bonuses!

This is the first time we've experimented with this feature, so we're anticipating the possibility of technical issues that prevent it from working — but if all goes well, we're excited to field your questions and celebrate 300 episodes of The Techdirt Podcast!
by Mike Masnick on (#5Q3BY)
'Tis the season for terrible, horrible, no good bills to destroy the open internet. First up, we've got Rep. Jerry Nadler, a close friend of the always anti-internet lobbying force that is the legacy copyright industries. Earlier this year he introduced the SHOP SAFE Act, which is due for a markup tomorrow, and has an unfortunately high likelihood of passing out of committee. The principle behind the Act (which Nadler has now updated with a manager's amendment) is that "something must be done" about people buying counterfeit goods online.

Not addressed, at all, is whether or not counterfeit goods online are actually a real problem. I know that industry folks always insist that counterfeiting is a scourge that costs billions, but actual research on this shows something entirely different. A GAO report from years back showed that most of the stats regarding counterfeiting are completely exaggerated, and multiple studies have shown that -- far from "tricking" people -- most people who buy counterfeits know exactly what they're doing, and that for many buyers, buying a counterfeit is an aspirational purchase. That is, they know they're not buying the real thing, but they're buying the counterfeit because that's what they can afford -- and if they can afford the real thing at a later date, they will buy it. But nearly all of the public commentary on counterfeiting assumes that the public is clueless, and being "tricked" into buying "dangerous" counterfeits.

The second bad premise behind the SHOP SAFE Act is that the "real problem" is Section 230 (because everyone wants to assume that Section 230 can be blamed for anything bad online). So the core approach of the SHOP SAFE Act is to add liability to websites that allow people to sell stuff online. However, as EFF notes in its write-up about the problems with this bill, if you try to sell something via Craigslist or even just via Gmail, the bill would effectively make those companies liable for your sale.
by Karl Bode on (#5Q333)
While Apple may be attempting to make being marginally competent at privacy a marketing advantage in recent years, that hasn't always gone particularly smoothly. Case in point: the company's new "ask app not to track" button included in iOS 14.5 is supposed to provide iOS users with some protection from apps that get a little too aggressive in hoovering up your usage, location, and other data. In short, the button functions as a more obvious opt-out mechanism that's supposed to let you avoid the tangled web of privacy abuses that is the adtech behavioral ad ecosystem.

But of course it's not working out all that well in practice, at least so far. A new study by the Washington Post and software maker Lockdown indicates that many app makers are just... ignoring the request entirely. In reality, Apple's function doesn't really do all that much, simply blocking app makers from accessing one bit of data: your phone's Identifier for Advertisers, or IDFA. But most apps have continued to track a wide swath of other usage and location data, and the overall impact on user privacy has proven to be negligible:
by Tim Cushing on (#5Q2TP)
The Department of Justice is the nominal leader of US law enforcement, even if it really only has direct control of federal officers. That being said, it would have been nice to see the DOJ take the lead on law enforcement issues, rather than gently coast into the police reform driveway late in the proverbial night to add itself to the bottom of the list of reform efforts springing up all over the nation in response to, you guessed it, violence committed by police officers.

Chokeholds have been controversial for forever, but even more so in recent years, as police officers across the nation have killed people they were just supposed to be arresting, using techniques most police departments claim (often after the fact) they've banned for years. The DOJ has never banned chokeholds previously, and it's apparently not going to start now.

The new guidance [PDF] doesn't seem like much of an improvement over the old guidance, which was released more than 17 years ago. The old one said that the DOJ has had a "long-standing policy" that limits use of deadly force to situations where officers have a "reasonable belief" the arrestee "poses an imminent danger of death or serious physical injury to the officer or to another person." This is the same standard that governs almost all use of force by officers all over the nation, and it really hasn't stopped them from deploying deadly force unreasonably in situations that could have benefitted from de-escalation and restraint.

The revamped guidance doesn't change much, if anything, about the threat calculus officers must perform before deciding to kill someone by choking them to death.
by Timothy Geigner on (#5Q2ET)
It's no secret that we haven't been huge fans of the termination rights that exist in current copyright law. Not because we don't want original artists to be able to profit from their own work, of course. Rather, the problems are that copyright is already simply too long, which makes the termination issue far too often not about artists themselves profiting from their work, but rather about their families doing so. Add to that the more salient issue that these termination rights tend to be mostly useful for creating massive messes and disputes between parties over the validity of termination requests, and the fact is that this stuff gets really icky really fast.

But the current reality is that termination rights in the law exist, so there is no reason why creators shouldn't use that part of the law. You may recall that a decade ago Marvel was hit by a series of termination requests covering copyrights on all kinds of superhero stories and characters created by Jack Kirby. Kirby's estate lost in court every step of the way up to the Supreme Court, with Marvel arguing that all of Kirby's work was work for hire, but Marvel and the estate reached a settlement before SCOTUS could take up the case. How termination requests should be ruled upon for work that occurred prior to the Copyright Act of 1976 coming into force is therefore still an open question.

But perhaps we have another shot at getting clarity on this and, what do you know, it concerns Marvel yet again. Another creator has petitioned for termination on some specific copyrights around Spider-Man and Doctor Strange.
by Karl Bode on (#5Q2AC)
There's been little doubt that the streaming TV revolution has been a decidedly good thing. Competition from streaming has resulted in more options, for less money, and greater programming flexibility than ever before. Streaming customer satisfaction is consistently higher than traditional cable TV as a result, and lumbering giants that fought against evolution for years (at times denying that cord cutting even existed) have been forced to actually try a little harder if they want to retain TV subscribers.

Of course the more things change, the more they stay the same. And a lot of the problems that plagued the traditional TV experience have made their way to streaming. For example, since broadcasters (which were primarily responsible for the unsustainable cost of traditional cable TV) must have their pound of flesh to satiate investor needs for quarterly returns, price hikes in live streaming services have been arriving fast and furious. And the more the industry attempts to innovate, the more it finds itself retreading fairly familiar territory.

Case in point: to lure more users to its platforms and streaming hardware, Google is in talks with multiple companies to offer users free streaming TV channels, complete with ads:
by Tim Cushing on (#5Q26H)
The police in Minneapolis are giving the public what they think the public wants: fewer police officers, fewer interactions with police, and, of course, MOAR CRIME. Calls to defund the police began following the murder of George Floyd by police officer Derek Chauvin. Law enforcement officers expressed disdain (rather than dismay that their actions had provoked this), asking rhetorically who would show up to tell people there isn't much officers can (or will!) do in response to reported crimes.

The disingenuous interpretation provided by most police departments was "Fuck 'em." Let the city fall into criminal chaos if residents continued to express their opposition to excessive force and rights violations. The application of the "defund the police" mentality by the Minneapolis PD is every bit as disingenuous as cop supporters' interpretations of "defund the police" movements -- ones that generally only want to move resources being used poorly by police departments to other entities more suited to handling common calls, like people suffering from suicidal thoughts or mental breakdowns.

Because the cops can't be honest about their own contribution to the current state of affairs in Minneapolis, they're giving residents the part that's easiest to do (fewer cops handling fewer crimes) without doing the difficult part (relinquishing their paychecks). An investigation by Reuters reporter Brad Heath shows cops are doing less cop stuff in the Twin Cities while still collecting the same salaries they always have.
by Will Duffield on (#5Q224)
In August, porn-subscription platform OnlyFans announced that it would no longer permit pornography, blaming pressure from banks. The porn policy was rescinded after a backlash from platform users, but the incident illustrates how a handful of heavily regulated financial service providers can act as meta-moderators by shaping the content policies of platforms that rely on them.

How did banks acquire such power over OnlyFans? Although people sometimes express themselves for free, they usually demand compensation. Polemicists, scientists, poets, and, yes, pornographers all need a paying audience to put food on their tables. Unless the audience is paying in cash, their money must move through payment processors, banks, and other financial intermediaries. If no payment is processed, no performance will be forthcoming.

OnlyFans relies on financial intermediaries in several ways. It must be able to accept payments from users, send payments to content creators, and raise capital from investors. Each of these activities requires the services of a bank or payment processor. In an interview with the Financial Times, OnlyFans CEO Tim Stokely pointed to banks' refusals to process payments to content creators as the pressure behind the proposed policy change.

"We pay over one million creators over $300m every month, and making sure that these funds get to creators involves using the banking sector," he said, singling out Bank of New York Mellon as having "flagged and rejected" every wire connected to the company, "making it difficult to pay our creators."

BNY Mellon processes a trillion dollars of transfers a day. At this scale, OnlyFans' $300 million a month in creator payments could be lost in a rounding error. Like individual users on massive social media platforms, the patronage of any one website or business doesn't matter to financial intermediaries. Banks often refuse service to the sex industry because of its association with illegal prostitution. In the face of bad press or potential regulatory scrutiny, it is usually easier, and in the long run cheaper, to simply sever ties with the offending business.

This leaves an excluded firm like OnlyFans with few options. OnlyFans cannot simply become a payment processor. Financial intermediaries are heavily regulated. OnlyFans is unlikely to clear the regulatory hurdles, and even if it could, compliance with anti-money laundering laws would strip its users of anonymity.

Financial intermediaries are uniquely positioned to police speech because they are heavily regulated. While Section 230 keeps the costs of starting a speech platform low, banking regulation makes it difficult and expensive to enter the financial services market. There are hundreds of domain registrars, but only a handful of major payment processors. This disparity makes the denial of payment processing one of the most effective levers for controlling speech.

Banks have the same rights of conscience as other firms, but regulation gives their decisions added weight. Financial intermediaries are in the business of making money, not curating for a particular audience, so they have less incentive to moderate than publishers. However, when financial intermediaries do moderate, regulation prevents alternative service providers from entering the market.

Peer-to-peer payment systems, such as cryptocurrency, offer a solution that circumvents intermediaries entirely. However, cryptocurrency has proven difficult to use as money at scale.
OnlyFans was able to grow to its current size through access to the traditional banking system. At this stage, it cannot easily abandon it. OnlyFans would lose many users if it required buyers and sellers to maintain cryptocurrency wallets. The platform's current investors would likely balk at issuing a token to raise additional capital. Decentralized alternatives are, for the moment, unworkably convoluted.

While financial intermediaries' power to moderate is not absolute, they can keep unwanted speech at the fringes of society and prevent it from being very profitable. This is not merely a problem for porn. Many sorts of legal but disfavored speech are vulnerable to financial deplatforming. Gab, a social media platform popular with the alt-right, has been barred from PayPal, Venmo, Square, and Stripe. It eventually found a home with Second Amendment Processing, an alternative payment processor originally created to serve gun stores.

Commercial banks have faced pressure to cease serving gun stores from both activists and the government in Operation Choke Point. Operation Choke Point sought to discourage banks from serving porn actors, payday lenders, gun merchants, and a host of other "risky" customers. The FDIC threatened banks with "unsatisfactory Community Reinvestment Act ratings, compliance rating downgrades, restitution to consumers, and the pursuit of civil money penalties," if they failed to follow the government's risk guidance. Operation Choke Point officially ended in 2017, but it set the tone for banks' treatment of covered businesses. Because the banking sector is highly regulated, it is very susceptible to informal government pressure—regulators have many ways to interfere with disobedient banks.

In late 2010, when Wikileaks published a trove of leaked State Department cables, Senator Joe Lieberman pressured nearly every service Wikileaks used to ban the organization. Wikileaks was deplatformed by its web host, its domain name service, and even its data visualization software provider. Bank of America, VISA, MasterCard, PayPal, and Western Union all prohibited donations to Wikileaks. Wikileaks was able to quickly move to European web hosts and domain name services beyond the reach of Senator Lieberman. But even Swiss bank PostFinance refused Wikileaks' business. Unlike foreign web hosting and domain registration services, foreign banks are still part of a global financial system for which America largely sets the rules.

Denied access to banking services, Wikileaks became an early adopter of Bitcoin. Sending money to a single organization was simple enough that, even in 2011, Bitcoin could offer Wikileaks a viable alternative to the traditional financial system. It also probably helped that Wikileaks' cause was popular with the sort of people already using Bitcoin in 2011.

While cryptocurrency has come a long way in the past decade, adoption is still limited, and alternatives to traditional methods of raising capital are still in their infancy. Bitcoin offered Wikileaks a way out, and some OnlyFans content creators may turn to decentralized alternatives. But as a business, OnlyFans remains at the mercy of the banking industry. Financial intermediaries cannot stamp out disfavored speech, but they can cap the size of platforms that host it.
Sitting behind and above the commercial internet, payment processors and banks retain a unique capability to set rules for platforms, and, in turn, platform users.

Will Duffield is a Policy Analyst at the Cato Institute.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we'll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.
by Tim Cushing on (#5Q1YN)
Clearview is currently being sued by a small percentage of its database of scraped personal info. It is also being sued by a few state officials over privacy law violations. It is (also) also being side-eyed closely by the federal government, which has not initiated an official investigation, but has expressed its disappointment in legislative ways.

One of dozens of lawsuits Clearview is hopefully being eventually bankrupted by has resulted in a bit of the old intimidation tactics. Clearview has made some inadvertently amusing arguments in court about its alleged right to do whatever the hell it wants to amass secondhand data, as well as market access to whoever the hell it wants whenever the hell it wants. We'll see how that all plays out. In the meantime, Clearview is hoping to make others as miserable as it is. And if that means doing terrible things to long-recognized First Amendment protections, so be it.

Transparency advocate Open The Government has been hit with a subpoena from Clearview, which is defending itself against several plaintiffs alleging state law violations in an Illinois-based class action lawsuit. With its livelihood being barely threatened by an ongoing suit, Clearview has decided to threaten Open The Government, which is not involved in the lawsuit in any way.
by Daily Deal on (#5Q1YP)
Java is one of the most prominent programming languages today due to its power and versatility. The Premium Java Programming Certification Bundle has 8 courses to help you master the ins and outs of Java programming. Then you'll learn useful software principles, how to ace interviews, and more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Tim Cushing on (#5Q1KQ)
As CIA director, Mike Pompeo decided Julian Assange and Wikileaks should be promoted to Public Enemy #1. With Wikileaks publishing leaked CIA secrets, Pompeo ratcheted up his rhetoric in response. Finding himself frustrated by the US government's understandable reluctance to pull the trigger on prosecutions of arguable acts of journalism, the CIA director decided those constitutional concerns could be waved away with the proper national security designation.

During a 2017 speech at the Center for Strategic and International Studies, Pompeo -- who supported Wikileaks when it was airing the Democratic National Committee's dirty laundry -- unilaterally decided Assange was a threat unworthy of any constitutional protections.
by Leigh Beadon on (#5Q0T5)
This week, our first place winner on the insightful side is David with a comment on our post about the Russian government abusing the law to shut down Alexey Navalny's smart voting app:
by Leigh Beadon on (#5PZY6)
Five Years Ago

This week in 2016, Donald Trump was doubling down on Ted Cruz's argument for blocking the transition of the IANA away from the commerce department (not the only stupid argument on the subject), but the Senate came to its senses and did not support the plan. The House Intelligence Committee released a list of "Snowden's Lies" that was almost entirely false, while the Washington Post was condemning Snowden and we wondered if it would give back its Pulitzer Prize, while Chelsea Manning was facing indefinite solitary confinement after a suicide attempt. The man arrested over KickassTorrents was being blocked from talking to his US attorney, while a former UMG executive was calling for the destruction of the DMCA. And we took a closer look at the more complex reasons for ridiculous bogus takedown demands in mass DMCA filings.

Ten Years Ago

This week in 2011, Righthaven was failing to pay attorney fees ordered by the court and we looked at what would happen if the company declared bankruptcy. A lawyer was seeking to wipe out critical anonymous speech, the Authors Guild was trying to play "gotcha" with orphaned works instead of fixing the problem, and we looked at the entertainment industry's coordinated effort to blame third parties for piracy. In Italy, a proposed law would ban people from the internet based on a single accusation of infringement, leading an EU Parliament Member to ask the EU Commission what it would do if the law passed — while the Commissioner was busy asking big copyright to increase its lobbying efforts.

Fifteen Years Ago

This week in 2006, Warner Music took the lead in signing deals with YouTube, while Microsoft was launching a YouTube copycat site (leading us to remember that, a few months prior, Bill Gates had explained why Microsoft would never try to do that). A Belgian court ordered Google to stop linking to news websites, prompting the company to fight back and hit a wall. Yahoo was timidly experimenting with DRM-free music, while the press was noticing how disappointing the early online video offerings from Apple and Amazon were. And, twenty years after the fact, The Knack noticed that their song My Sharona had been sampled by Run DMC and filed a very late lawsuit.
by Timothy Geigner on (#5PZ9R)
We've written a couple of times about the Consorzio di Tutela della Denominazione di Origine Controllata Prosecco, which I have nicknamed "The Prosecco People" because I'm not typing that every time. This organization, with the sole goal of protecting the "Prosecco" name from being used, or nearly used, by anyone else, has taken this mission to extreme lengths historically. Serving as examples were such times as The Prosecco People opposing a French company's non-alcoholic sparkling wine brand dubbed "Nosecco", as well as bullying a pet treat company that created a drink for pets called "Pawsecco". In both cases, if you can find any real reason to worry about public confusion as to the source of those goods, you're a crazy person.

But those examples were parodies and puns that at least nodded at the Prosecco product. The latest bullying attempt to protect the Prosecco brand comes from Italian government ministers and targets the EU's consideration of protected status for a Croatian sweet wine called "Prosek."
by Mike Masnick on (#5PZ62)
Texas and Florida. Florida and Texas. Two states with governors who have decided that culture warrioring and "owning the libs" is way more important than the Constitution they swore to protect and uphold. As you'll recall, last month Texas Governor Greg Abbott decided to use the internet services he hates to livestream his signing of the clearly unconstitutional HB20, which seeks to block social media sites from moderating how they see fit.

As we had pointed out, Florida had beaten Texas to the punch on that, and a court had already tossed out that state's bill as an unconstitutional infringement of 1st Amendment rights. A state actually looking to do things correctly might see that and recognize that it's not worth wasting millions of taxpayer dollars to do the exact same thing, but Texas went ahead anyway.

And, now, the same two organizations that sued to strike down Florida's law, NetChoice and CCIA, have similarly sued to strike down Texas' law.
by Tim Cushing on (#5PZ2S)
Well, here's something unexpected, delivered in a somewhat tone-deaf fashion. The Minnesota Department of Public Safety has partnered with a mother whose son was killed by a Minnesota police officer to hopefully reduce the number of times people are killed by police officers for following instructions during traffic stops. (h/t @Ktech)
by Jonathan Zittrain on (#5PYYH)
I'm grateful to Techdirt and the EFF for this series. There are so many legitimately difficult issues around content moderation at the application layer—that is, on (and usually by) platforms like Facebook and Twitter. And they can crowd out the problems around the corner that are at least as difficult: those of code and content moderation at the infrastructural level, such as the wholesale platforms (such as Amazon Web Services) that host websites; domain name registries that support the use of domain names; and app stores from Apple and Google that largely determine what applications users can choose to run.

To be sure, the line between infrastructure and application can be blurry. For example, text messaging via SMS is offered as a bundled service by providers of mobile phone services like AT&T and Verizon. These services are usually thought of as infrastructural—while users of iPhones experience iMessage as an application that supplants SMS for inter-iOS text exchanges, with a fallback to SMS for participants who don't use iOS.

Perhaps the distinction lies as much in the dominance of a service as it does in its position within a layered stack. Informally surveying students in courses on digital governance, I've found increasing appetite for content moderation by Facebook of users' news feeds and within Facebook groups—say, to remove blatant election disinformation such as asserting the polls will be open on the wrong day, to depress turnout—while largely refusing to countenance moderation by telecommunications companies if the same disinformation were sent by SMS. Facebook Messenger remains a tossup.

However fuzzy the definitions, infrastructural moderation is a natural follow-on to application-level moderation. Historically there hasn't been much pressure for infrastructural moderation, given that many critics and companies traditionally saw "mere" application-layer moderation as undesirable—or, at least, as a purely private matter for whoever runs the application to decide upon within its terms of service for its users.

Part of that long-term reluctance for public authorities to pressure companies like Facebook for greater moderation has been a solicitude for how difficult it is to judge and deal with flows of user-submitted content at scale. When regulators thought they were choosing between a moderation requirement that would cause a company to shut down its services, or abstention that allowed various harms to accrue, many opted for the latter.

For example, the "notice-and-takedown" provisions of the U.S.'s 1998 Digital Millennium Copyright Act—which have encouraged content aggregators like YouTube to take down allegedly copyright-infringing videos and music after a claim has been lodged—are, for all the instances of wrongly-removed content, comparatively light touch.
Major services eventually figured out that they could offer claimants a "monetize" button, so that content could stay up and some ad revenue from it could be directed to presumed copyright holders rather than, say, to whoever uploaded the video.

And, of course, the now widely-debated Section 230 of the Communications Decency Act, of the same vintage as the DMCA, flatly foreclosed many avenues of potential legal liability for platforms for illegal content other than copyrighted material, such as defamatory statements offered up by some users about other users.

As the Internet entered the mainstream, aside from the acceptance of content moderation at scale as difficult, and the corresponding reluctance to impinge upon platforms' businesses, there was a wide embrace of First Amendment values as articulated in Supreme Court jurisprudence of the 1960s and 70s. Simplifying a little, this view allows that, yes, there could be lots of bad speech, but it's both difficult and dangerous to entrust government to sift out the bad from the good, and the general solution to bad speech is more speech. So when it came to online speech, a marketplace-of-ideas-grounded argument I call the "rights" framework dominated the landscape.

That framework has greatly eroded in the public consciousness since its use to minimize Internet companies' liabilities in the late 1990s and early 2000s. It's been eclipsed by what I call the "public health" framework. I used the label before it became a little too on the nose amidst a global pandemic, but the past eighteen months' exigencies are a good example of this new framework. Rights to, say, bodily integrity, so hallowed as to allow people to deny the donation of their bodily organs when they die to save others' lives, yield to a more open balancing test when it's so clear that a "right" to avoid wearing a mask, or to refuse a vaccination, can have clear knock-on effects on others' health.

In the Internet context, there's been a recognition of the harms that flow from untrammeled speech—and the viral amplification of the same—made possible at scale by modern social media.

It was, in retrospect, easy for the Supreme Court to extol the grim speech-affirming virtue of allowing hateful placards to be displayed on public sidewalks adjacent to a private funeral (as the Westboro Baptist Church has been known to do), or anonymous pamphlets to be distributed on a town common or at a public meeting, despite laws to the contrary.

But the sheer volume and cacophony of speech from unknown sources that bear little risk of enforcement against them even if they should cross a line challenges those easy cases. Whether it's misinformation, for which the volume and scope can be so great as to have people either be persuaded by junk or, worse, wrongly skeptical of every single source they encounter, or harassment and abuse that silences the voices of others, it's difficult to say that the marketplace of ideas is elevating only the most compelling ones.

With a public health framework newly ascendant for moderation at the application layer, we see new efforts by platform operators to tighten up their terms of service, if only on paper, choosing to forbid more speech over time.
That includes speech that, if the government were to pursue it, would be protected by the First Amendment (a lot of, say, misinformation about COVID and vaccines would fit this category of "lawful but awful").

Not coincidentally, regulators have a new appetite for regulation, whether because they're convinced that moderation at scale, with the help of machine learning tools and armies of moderators, is more possible than before, or because there's a genuine shift in values or their application that militates towards interventionism in the name of public health, literally or metaphorically.

Once the value or necessity of moderation is accepted at the application layer, the inevitable leakiness of it will push the same kinds of decisions onto providers of infrastructure. One form of leakiness is that there will be social media upstarts who try to differentiate their services on the basis of moderating less, such as Parler. That, in turn, forced Apple and Google, operating their respective app stores for iOS and Android, to consider whether to deny Parler access to those stores unless it committed to achieving minimum content moderation standards. The companies indeed removed the Parler app from their stores, while Amazon, which offers wholesale hosting services for many otherwise-unrelated web sites and online services, suspended its hosting of Parler in the wake of the January 6th insurrection at the Capitol.

Another form of leakiness of moderation is within applications themselves, as the line between publicly-available and private content becomes more blurred. Facebook aims to apply its terms of service not only to public posts, but also to those within private groups. To enforce its rules against the latter, Facebook either must peek at what's going on within them—perhaps only through automated means—or field reports of rule violations from members of the groups themselves.

Beyond private groups are services shaped to appear more as private group messaging than as social networks at all. Whether Facebook's own Messenger, with new options for encryption, or other apps such as Telegram, Facebook's Whatsapp, or the open-source Signal, there's the prospect that strangers sharing a cause can meet one another on a social network and then migrate to large private messaging groups whose infrastructure is encrypted.

Indeed, there's nothing stopping people from choosing to gather and have a conversation within World of Warcraft, merely admiring the view of the game's countryside as they chat about sports, politics, or alleged terrorist schemes. A Google Doc could serve the same function, if with less of a scenic backdrop. At that point content moderation either must be done through exceptions to any encryption that's offered—so-called backdoors—or through bot-driven client-side analysis of what people are posting before it moves from, say, their smartphones onto the network.

That's a rough description of what Apple has been proposing to do in order to monitor users' private iCloud accounts for illegal images of child sexual abuse, using a combination of privileged access to data on the phone and a database of known abusive images collected by child protection agencies to ascertain matches. Apple has suspended plans to implement this scanning after an outcry from some members of the technical and civil liberties communities.
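To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of client-side scanning: it matches exact SHA-256 hashes of local files against a blocklist and reports only after a threshold of matches. Apple's suspended proposal used a perceptual hash (NeuralHash) plus additional cryptographic protocols rather than anything this crude, so treat the sketch as an illustration of the general shape, not of Apple's design; the hash value and folder name below are invented.

```python
# Minimal, hypothetical sketch of client-side scanning by hash matching.
# Real systems use perceptual hashing and cryptographic matching protocols;
# exact SHA-256 comparison is used here only for clarity.
import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = {
    # Hashes distributed by a (hypothetical) child-protection clearinghouse.
    "9f2feb0f1ef425b292f2f94bcbf4d6c41e3ed21ef54ba69e41fcdb87e28d82bb",
}
REPORT_THRESHOLD = 3  # only flag an account after several matches

def scan_folder(folder: str) -> list[str]:
    """Hash every file on the client and collect those matching the blocklist."""
    folder_path = Path(folder)
    if not folder_path.is_dir():
        return []
    matches = []
    for path in folder_path.rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_HASHES:
                matches.append(str(path))
    return matches

if __name__ == "__main__":
    hits = scan_folder("./photos")
    if len(hits) >= REPORT_THRESHOLD:
        print(f"{len(hits)} matches -- would be flagged for human review")
    else:
        print("below threshold -- nothing reported")
```

The structure itself explains the worry that follows: whoever controls the hash list controls what every client quietly scans for.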
Some of that pushback has been around implementation details and worries about misidentification of lawful content, and Apple has offered rejoinders to those worries.

But more fundamentally, the civil liberties worry is that this form of scanning, once commonplace for a narrow and compelling purpose, will find new purposes, perhaps against political dissidents, whose speech—and apps—can readily be deemed illegal by a government that does not embrace the rule of law. This happened recently when the Russian government prevailed on both Apple and Google to remove an app by opposition leader Aleksei Navalny's movement designed to encourage strategic voting against the ruling party.

We've seen worries about scope creep around the formation and development of ICANN, a non-profit that manages the allocation of certain Internet-wide identifiers, such as top-level domains like .com and .org. Through its ability to choose who operates domain registries like those, ICANN can require such registries to in turn compel domain name registrants to accept a dispute resolution process if someone else makes a trademark-like claim against a registration (that's how, early on, the holder of madonna.com was dispossessed of the name after a complaint by Madonna).

The logical concern was that the ability for registries to yank domain names under pressure from regulators would go beyond trademark-like disputes over the names themselves, and into the activities and content of the sites and services those names point to. For the most part that hasn't happened—at least not through ICANN. Hence the still surprisingly common operation of domains that operate command-and-control networks for botnets or host copyright-infringing materials.

Nonetheless, if content moderation is important to do, the fact is that it will be difficult to keep it to the application layer alone. And today there is more of a sense that there isn't such a thing as the neutral provision of services. Before, makers of products ranging from guns to VCRs offered arguments like those of early Internet platforms: to hold them liable for what their customers do would put them out of business. They disclaimed responsibility for the use of their products for physical assault or copyright infringement, respectively, since those uses took place long after the products left the makers' factories and thus their control, and there weren't plausible ways to shape the technologies themselves at the factory to carve away future bad uses while preserving the good ones.

As the Internet has allowed products to become services, constantly checking in with and being adapted by their makers, technology vendors don't say goodbye to their work when it leaves a factory. Instead they are offering it anew every time people use it. For those with a public health perspective, the ability of vendors to monitor and shape their services continuously ought at times to be used for harm reduction in the world, especially when those harms are said to be uniquely made possible by the services themselves.

Consider a 2021 Texas law allowing anyone to sue anyone else for at least $10,000 for "aiding" in the provision of most abortions. An organization called Texas Right to Life created a web site soliciting "whistleblowers" to submit personal information of anyone thought to be a suitable target under the new law—a form of doxxing.
The site was originally hosted by GoDaddy, which pulled the plug on the basis that it collected information about people without their consent.

Now consider the loose group of people calling themselves Sedition Hunters, attempting to identify rioters at the Capitol on January 6th. They too have a web site linking out to their work. Should they solicit tips from the public—which at the moment they don't do—and should their site host treat them similarly?

Those identifying with a rights framework might tend to think that in both instances the sites should stay up. Those worrying about private doxxing of any kind might think they should be taken down. And others might draw distinctions between a site facilitating application of a law that, without a future reversal by the Supreme Court, is clearly unconstitutional, and those uniting to present authorities with possible instances of wrongdoing for further investigation.

As the public health framework continues to gain legitimacy, and the ability of platforms to intervene in content at scale grows, blanket invocations of rights will not alone blunt the case for content moderation. And the novelty of regulating at the infrastructural level will not long hold back the pressures that will follow there, especially as application-layer interventions begin to show their limits. Following in the model of Facebook's move towards encryption, there could come to be infrastructural services that are offered in a distributed or anonymized fashion to avoid the possibility of recruitment for regulation. But as hard as these problems are, they seem best solved through reflective consensus rather than technical fiat in either direction.

Jonathan Zittrain is George Bemis Professor of Law and Professor of Computer Science at Harvard University, and a co-founder of its Berkman Klein Center for Internet & Society.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we'll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.
by Tim Cushing on (#5PYWA)
The National Association of Criminal Defense Lawyers has just released an in-depth examination of predictive policing. Titled "Garbage In, Gospel Out," it details the many ways bad data based on biased policing has been allowed to generate even more bad data, allowing officers to engage in more biased policing but with the blessing of algorithms.

Given that law enforcement in this country can trace itself back to pre- and post-Civil War slave patrols, it's hardly surprising modern policing -- with all of its tech advances -- still disproportionately targets people of color. Operating under the assumption that past performance is an indicator of future results, predictive policing programs (and other so-called "intelligence-led" policing efforts) send officers to places they've already been several times, creating a self-perpetuating feedback loop that ensures the more often police head to a certain area, the more often police will head to a certain area.

As the report [PDF] points out, predictive policing is inadvertently accurately named. It doesn't predict where crime will happen. It only predicts how police will behave.
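To see how that feedback loop sustains itself, here is a small, hypothetical simulation (all numbers are invented for illustration). Two neighborhoods have identical true crime rates, but one starts with more recorded incidents because it was patrolled more heavily in the past; allocating patrols in proportion to recorded incidents then locks in that disparity and grows the absolute gap in the records every year.

```python
# Hypothetical simulation of the feedback loop described above.
# Both neighborhoods have the same underlying crime rate; only the historical
# records differ. Patrols are allocated in proportion to recorded incidents,
# and new recorded incidents grow in proportion to patrols.

TRUE_CRIME_RATE = 100        # identical underlying rate in both neighborhoods
DETECTION_PER_PATROL = 0.02  # share of true crime recorded per patrol unit
TOTAL_PATROLS = 50

recorded = {"northside": 120.0, "southside": 80.0}  # biased starting data

for year in range(1, 6):
    total = sum(recorded.values())
    patrols = {hood: TOTAL_PATROLS * recorded[hood] / total for hood in recorded}
    for hood in recorded:
        # what gets recorded depends on where officers are sent, not on crime
        recorded[hood] += TRUE_CRIME_RATE * DETECTION_PER_PATROL * patrols[hood]
    print(f"year {year}: northside gets {patrols['northside']:.0f}/{TOTAL_PATROLS} "
          f"patrols; records {recorded['northside']:.0f} vs {recorded['southside']:.0f}")
```

However long the loop runs, the model never learns that the two neighborhoods are identical; it only learns where officers have already been sent.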
by Daily Deal on (#5PYWB)
The Unreal and Unity Game Development for Beginners Bundle has 6 courses to help you master game development and build your own games. You'll learn about Unreal Engine, which is one of the most popular engine choices available for games. You'll also learn the basic concepts, tools, and functions that you will need to build fully functional games with C# and the Unity game engine. The bundle is on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Glyn Moody on (#5PYSJ)
Back in 2013, Techdirt started writing about the boring-sounding Investor-State Dispute Settlement (ISDS) system. It was so boring, we decided to use a better term for it: corporate sovereignty. It's an appropriate name, since this system of secret courts effectively places companies above a government, by allowing them to sue a nation if the latter takes actions or brings in laws that might adversely affect their profits. It was originally designed to protect companies that invested in unstable parts of the world, and to discourage things like expropriation by corrupt officials. But clever lawyers soon realized it was much more general than that, and could be used as a weapon against even the most powerful -- and stable -- nations.

It allows deep-pocketed companies -- typically multinational corporations -- to threaten governments with big fines if they pass laws or make decisions that aren't to the companies' liking. That includes actions that are clearly justified and in the interests of the country's citizens. For example, over the years Techdirt has written about how corporate sovereignty was used to threaten governments that wanted to protect public health, even measures to tackle COVID-19.

In 2015, this blog warned that the TAFTA/TTIP trade agreement under discussion then would allow companies to challenge actions taken to protect the environment, such as bringing in laws to tackle the climate crisis. TAFTA/TTIP never happened, so fossil fuel companies have now turned to other treaties to demand over $18 billion as "compensation" for the potential loss of future profits as a result of recent decisions taken around the world to tackle climate change. Global Justice Now has a summary: