Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-08-19 08:46
Daily Deal: The Web Development Crash Course Bundle
The Web Development Crash Course Bundle has 6 courses to help you become a master programmer. You'll learn about C++, Bootstrap, Modern OpenGL, HTML, and more. The courses will teach you how to create websites, how to program for virtual reality, how to create your own games, and how to create your own apps. The bundle is on sale for $25.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Bizarre Magistrate Judge Ruling Says That If Facebook Deletes An Account, It No Longer Needs To Keep Details Private
There have been a bunch of slightly wacky court rulings of late, and this recent one from magistrate judge Zia Faruqui is definitely up there on the list of rulings that make you scratch your head. The case involves the Republic of Gambia seeking information on Facebook accounts that were accused of contributing to the ethnic genocide of the Rohingya in Myanmar. This situation was -- quite obviously -- horrible, and it tends to be the go-to story for anyone who wants to show that Facebook is evil (though I'm often confused by how people seem more focused on blaming Facebook for the situation than the Myanmar government which carried out the genocide...). Either way, the Republic of Gambia is seeking information from Facebook regarding the accounts that played a role in the genocide, as part of its case at the International Court of Justice.
Facebook, which (way too late in the process) did shut down a bunch of accounts in Myanmar, resisted demands from Gambia to hand over information on those accounts, noting, correctly, that the Stored Communications Act likely forbids it from handing over such private information. The SCA is actually pretty important in protecting the privacy of email and messages, and is one of the rare US laws on the books that is actually (for the most part) privacy protecting. That's not to say it doesn't have its own issues, but the SCA has been useful in the past in protecting privacy.
The ruling here more or less upends interpretations of the SCA by saying that once an account is deleted, it's no longer covered by the SCA. That's... worrisome. The full ruling is worth a read, and you'll know you're in for something of a journey when it starts out:
FCC's 'New' Robocall Plan Isn't Particularly New, Won't Seriously Reduce Robocalls
So for a long time the FCC has made "fighting robocalls" one of its top priorities. Though with Americans still receiving 132 million robocalls every single day, you may have noticed that these efforts don't usually have the impact they claim. Headlines about "historic" or "record" FCC robocall fines usually overshadow the agency's pathetic failure to collect on those fines, or the fact that, thanks to recent Supreme Court rulings, the agency is boxed in as to which kinds of annoying calls and spam texts it can actually police.
Which brings us to last week, when the agency announced yet another major action, this time proposed rule updates that would make things harder for the "gateway" companies (which connect overseas callers to U.S. phone networks) and the smaller phone operators that are the origins of so much of the problem. While the FCC's plan made a lot of headlines, experts were quick to note that most of the improvements were still far from being implemented:
Court Tells Child Sexual Abuse Investigators That The Private Search Warrant Exception Only Works When There's A Private Search
Private searches that uncover contraband can be handed off to law enforcement without the Fourth Amendment getting too involved. Restrictions apply, of course. For instance, a tech repairing a computer may come across illicit images and give that information to law enforcement, which can use what was observed in the search as the basis for a search warrant.
What law enforcement can't do is ask private individuals to perform searches for it and then use the results of those searches to perform warrantless searches of its own. A Ninth Circuit Appeals Court case [PDF] points out another thing law enforcement can't do: assume (or pretend) a private search has already taken place in order to excuse its own Fourth Amendment violation. (h/t Riana Pfefferkorn)
Automated scanning of email attachments led to a series of events that culminated in an unlawful search. Here's the court's description of this case's origination:
Seuss Estate And ComicMix Copyright Case Settles In The Saddest Possible Way
Readers here will know that we've followed the trademark and copyright lawsuit filed by the estate of Dr. Seuss against ComicMix LLC, creators of the mashup book Oh, the Places You'll Boldly Go! The entire thing has been a multi-year, serpentine rollercoaster, with ComicMix arguing that the mashup book was transformative and covered by fair use, and winning on that front, only to have the copyright portion of the argument overturned on appeal. Go and read Cathy Gellis' writeup on the appeal; it's incredibly detailed and informative.
But if anyone was hoping to see this case progress up the federal court ranks, they will be both disappointed and sad. Disappointed because the parties have now settled the case, with ComicMix agreeing to acknowledge that the book did, in fact, infringe on Seuss' copyrights.
Filecoin Foundation Ensuring That SecureDrop Can Continue To Help Whistleblowers And Journalists
Earlier this year we were excited to see the Filecoin Foundation give the Internet Archive its largest donation ever, to help make sure that the Internet Archive is both more sustainable as an organization, and that the works it makes available will be more permanently available on a more distributed, decentralized system. The Internet Archive is a perfect example of the type of organization that can benefit from a more distributed internet.
Another such organization is the Freedom of the Press Foundation, which, among its many, many projects, maintains and develops SecureDrop, the incredibly important tool for journalists and whistleblowers, which was initially developed in part by Aaron Swartz (as DeadDrop). So it's great to see that the Freedom of the Press Foundation has now announced the largest donation it has ever received, coming from the Filecoin Foundation for the Distributed Web (the sister organization of the Filecoin Foundation):
Court Awards Qualified Immunity To Florida Deputy Who Arrested A Driver For An 'I EAT ASS' Window Decal
When the First Amendment meets a law enforcement officer's ability to be offended on behalf of the general public, the First Amendment tends to lose.
The ability to be a proxy offendee affords officers the opportunity to literally police speech. They're almost never in the right when they do this. But they almost always get away with it. That's why a Texas sheriff felt comfortable charging a person sporting a "FUCK TRUMP" window decal with disorderly conduct. That's why a Tennessee cop issued a citation for a stick-figures-in-mid-coitus "Making my family" window decal.
And that's why a Florida law enforcement officer pulled over and arrested a man for the "I EAT ASS" sticker on his window. According to Deputy Travis English's arrest report, he noticed the sticker and assumed it violated the state's obscenity law. He was, of course, wrong about this. But he called his supervisor for clarification and was assured (wrongly) that the sticker violated the law.
He offered to let the driver, Dillon Webb, be on his way if he removed the word "ASS" from the decal. Webb refused, (correctly) asserting his First Amendment right to publicize his non-driving activities. English's report is full of dumb things (and, ironically, some incorrect English). Here's what he had to say about the stop and the driver's assertion of his Constitutional rights. (All errors in the original.)
Trump Asks Court To Reinstate His Twitter Account ASAP
There were a bunch of headlines this weekend claiming that Donald Trump had just "sued" Twitter to get his account reinstated. This is untrue. There were also some articles suggesting that he was using Florida's new social media law as the basis of this lawsuit. This is also false (what the hell is wrong with reporters these days?).
Trump actually sued back in July, and it was widely covered then. And the basis of that lawsuit was not Florida's law, but rather a bizarrely twisted interpretation of the 1st Amendment.
What happened on Friday was that, in that ongoing case, Trump filed for a preliminary injunction that would (if granted) force Twitter to reinstate Trump's account. This is not at all likely to be granted. The motion for the injunction is laughably bad. It's even worse than the initial complaint (which was hilariously bad). It does make reference to Florida's law -- which has already been held to be unconstitutional -- but it's certainly not using that as a key part of its argument.
As for this motion, it's just a lawyerly Hail Mary attempt by lawyers who are in way too deep, hoping that maybe they'll get lucky with a judge who doesn't care about how the 1st Amendment actually works. It's a mishmash of confused and debunked legal theories about the 1st Amendment and Section 230, but the crux of it is that Twitter violated then-President Donald Trump's rights when it shut down his account, because Twitter was acting as the government. Yes. The argument is so stupid as to need repeating. The underlying argument is that Twitter, a government actor, illegally censored private citizen Donald Trump, taking away his 1st Amendment rights through prior restraint.
Inspector General Says CBP's Device Search Program Still A Mess, Still (Ironically) Mostly Undocumented
The CBP continues to increase the number of electronic devices (at least temporarily) seized and searched at border crossings and international airports. Basic searches -- ones that don't involve any additional tech or software -- can be performed for almost any reason. For deeper searches, the CBP needs only a little bit more: articulable suspicion.
Even though it's only a very small percentage of the total, it continues to increase, both in total numbers and as a percentage of the whole.
Daily Deal: The Premier All AWS Certification Training Bundle
Amazon Web Services (AWS) has forever changed the way businesses operate. Enterprises, big or small, look to the AWS platform for their cloud and data storage needs. And as the demand for AWS rises, so does the demand for competent AWS professionals. This Premier All AWS Certification Training Bundle gives you lifetime access to 7 prep courses to prepare you for the essential AWS certifications: Certified Cloud Practitioner, Solutions Architect, Developer Associate, SysOps Administrator, and more. These courses cover the required skill sets and simulate the actual exams to help you pass and get certified in no time. The bundle is on sale for $19.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Apparently Someone Doesn't Want You To Buy Our Copymouse Shirt
You may remember that, a couple years ago, our line of Copying Is Not Theft t-shirts and other gear was suddenly taken down by Teespring (now just called Spring) — first based on the completely false assertion that it contained third-party content that we didn't have the rights to use, then (after a very unhelpful discussion with their IP Escalations department) because it apparently violated some other policy that they refused to specify. That prompted us to open a new Techdirt Gear store on Threadless, where we've launched many of our old designs and all our new ones since the takedown. But we also kept the Spring store active for people who preferred it and for some old designs that we hadn't yet moved — and a few weeks ago the site's takedown regime struck again, wiping out our line of Copymouse gear that had lived there for nearly five years. So, once again, we've relaunched the design over on Threadless:
Of course, this takedown is a little different from the previous one. The Copying Is Not Theft gear contains no third-party material whatsoever, and there was simply no legitimate reason for Spring to have removed it — and they refused to even offer any explanation of what they thought that reason might be. In the case of Copymouse, it's obvious that it makes use of a particular logo, though in an obviously transformative manner for the purpose of commentary. So, yes, there is an argument for taking it down. It's just not a strong argument, since the design clearly falls within the bounds of fair use for the purposes of criticism and commentary, and it's hard to argue that there's any likelihood of confusion for consumers: nobody is going to think it's a piece of official Disney merchandise. Nevertheless, it's at least somewhat understandable that it caught the attention of either an automatic filter or a manual reviewer, and given the increased scrutiny and attempts to create third-party liability falling upon services that create products with user-uploaded artwork, it's no real surprise that a massive site like Spring errs on the side of caution (indeed, we won't be too surprised if the design ends up being removed from Threadless as well). It's still disappointing, though, and even more importantly, it's yet another example of why third-party liability protections are so very, very important, and how, when those protections are not strong, sites tend towards overblocking clearly legitimate works.
But for now, at least, you can still get your Copymouse gear on Threadless while we all wait to see if history repeats itself and the design needs an update in 2023.
CNN Shutting Down Its Facebook In Australia Shows How Removing 230 Will Silence Speech
It remains perplexing to me that so many people -- especially among the Trumpist world -- seem to believe that removing Section 230 will somehow make websites more likely to host their incendiary speech. We've explained before why the opposite is true -- adding more liability for user speech means a lot fewer sites will allow user speech. But now we have a real world example to show this.
Last month, in a truly bizarre ruling, the Australian High Court said that news publishers should be liable for comments on social media on their own posts to those social media platforms. In other words, if a news organization published a story about, say, a politician, and then linked to that story on Facebook, and a random user defamed the politician in the comments on Facebook... then the original publisher could face liability for those comments.
It didn't take long for Rupert Murdoch (who has been pushing to end Section 230 in the US) to start screaming about how he and other media publishers now need special intermediary protections in Australia. And he's not wrong (even if he is hypocritical). But, even more interesting is that CNN has announced that it will no longer publish news to Facebook in Australia in response to this ruling:
Neiman Marcus Breach Exposes Data Of 4.6 Million Users
Another day, another massive privacy breach nobody will do much about. This time it's Neiman Marcus, which issued a statement indicating that the personal data of roughly 4.6 million U.S. consumers was exposed thanks to a previously undisclosed data breach that occurred last year. According to the company, the exposed data included login information, credit card payment information, virtual gift card numbers, names, addresses, and the security questions attached to Neiman Marcus accounts. The company is, as companies always are in the wake of such breaches, very, very sorry:
Accidentally Unsealed Document Shows Feds Are Using Reverse Warrants To Demand Info On Google Searches
Not only is the government using "reverse warrants" to rummage around in your Google stuff, it's also using "keyword warrants" to cast about blindly for potential suspects.
Reverse warrants (a.k.a. geofence warrants) allow the government (when allowed by courts) to work its way backwards from a bulk collection of data to potential suspects by gathering info on all phone users in the area of a suspected crime. The only probable cause supporting these searches is the pretty damn good probability that Google (and others, but mostly Google) has gathered location data that can be tied to phones. Once a plausible needle is pulled from the haystack, the cops go back to Google, demanding identifying data linked to the phone.
This search method mirrors another method that's probably used far more often than it's been exposed. As Thomas Brewster reports for Forbes, an accidentally unsealed warrant shows investigators are seeking bulk info on Google users using nothing more than search terms they think might be related to criminal acts.
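To make the mechanics concrete, here's a minimal, purely illustrative sketch of the two query patterns described above. The data model, field names, and helpers are all invented for clarity; this is not any provider's actual schema or API.

```python
# Illustrative only: the two bulk-query patterns described above, run against
# hypothetical records. All names and structures here are invented.
from dataclasses import dataclass

@dataclass
class LocationPing:
    device_id: str
    lat: float
    lon: float
    ts: int  # Unix seconds

def geofence_query(pings, lat_lo, lat_hi, lon_lo, lon_hi, t_start, t_end):
    """Reverse/geofence warrant: every device seen inside a box during a window."""
    return {p.device_id for p in pings
            if lat_lo <= p.lat <= lat_hi
            and lon_lo <= p.lon <= lon_hi
            and t_start <= p.ts <= t_end}

def keyword_query(search_log, terms):
    """Keyword warrant: every account whose logged searches contain any term.

    search_log is an iterable of (account_id, query_string) pairs.
    """
    terms = [t.lower() for t in terms]
    return {account for account, query in search_log
            if any(t in query.lower() for t in terms)}
```

Note that neither query starts from a suspect: everyone whose data happens to match is swept in, which is exactly the probable cause problem described above.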
Perfect Timing: Twitch Gets Compromised With Voluminous Leak Of Data Via Torrent
It's no secret that Amazon-owned Twitch has had a rough go of it for the past year or so. We've talked about most, if not all, of the issues the platform has created for itself: a DMCA apocalypse, a creative community angry about not being informed over copyright issues, unclear creator guidelines for content that result in punishment from Twitch while some creators happily test the fences on those guidelines, and further and ongoing communication breakdowns with creators. All of that, mind you, has taken place over the last 12 months. It's been bad. Really bad!
But great news: now it's even worse! Someone managed to get into the Twitch platform and leak it. As in pretty much all of it. And even some information on a Steam rival that Amazon is planning to release. Seriously.
Content Moderation Case Study: Twitter's Self-Deleting Tweets Feature Creates New Moderation Problems
Summary: In its 15 years as a micro-blogging service, Twitter has given users more characters per tweet, reaction GIFs, multiple UI options, and the occasional random resorting of their timelines.
The most recent offering was to give users the option to create posts designed to be swept away by the digital sands of time. Early in 2020, Twitter announced it would be rolling out "Fleets" — self-deleting tweets with a lifespan of only 24 hours. This put Twitter on equal footing with Instagram's "Stories" feature, which allows users to post content with a built-in expiration date.
In the initial, limited rollout of Fleets, Twitter reported that the feature showed advantages over the platform's standard offering. Twitter Comms tweeted that initial testing looked promising, stating that it was seeing "less abuse with Fleets," with only a "small percentage" of Fleets being reported each day.
Whether this early indicator was a symptom of the limited rollout or of users viewing self-deleting abuse as a problem that solves itself, the wider rollout didn't go nearly as smoothly as those early indicators suggested, nor was it relatively abuse-free. Fleets' full debut arrived in the wake of an incredibly contentious U.S. presidential election — one marred by election interference accusations and a constant barrage of misinformation. The full rollout also came after nearly a year of a worldwide pandemic, which resulted in a constant flow of misinformation across multiple social media platforms globally.
While amplification of misinformation contained in Fleets was somewhat tempered by their innate ephemerality, as well as very limited interaction options, it remained unclear how — or how well — Twitter was moderating misinformation spread by the new communication option. Extremism researcher Marc-André Argentino was able to send out a series of Fleets containing misinformation and banned URLs, noting that Twitter only flagged one that asserted a link between the virus and cell phone towers.
Samantha Cole reported other Fleet moderation issues. Writing for Motherboard, Cole noted that apparent glitches were allowing users to see Fleets from people they had blocked, as well as Fleets from people who had blocked them. Failing to maintain the block and mute settings users had set up created more avenues for abuse. Cole also pointed out that users weren't being notified when their tweets were added to Fleets, providing abusive users with another option to harass while the targets of abuse remain unaware.
Company Considerations:
AT&T Set Up And Paid For OAN Propaganda Network; Yet Everyone Wants To Scream About Facebook
We've noted for a while that there's a weird myopia occurring in internet policy. As in, "big tech" (namely Facebook, Google, and Amazon) gets a relentless amount of Congressional and policy wonk attention for its various, and sometimes painfully idiotic, behaviors. At the same time, just an adorable smattering of serious policy attention is being given to a wide array of equally problematic but clearly monopolized industries (banking, airlines, insurance, energy), or internet-connected sectors that engage in many of the same (or sometimes worse) behaviors, be they adtech or U.S. telecom.
Case in point: while the entirety of U.S. policy experts, lawmakers, journalists, and academics (justifiably) fixated on the Facebook whistleblower train wreck, a story popped up about AT&T. Basically, it showcased how AT&T not only provided the lion's share of funding for the propaganda-laden OAN cable TV "news" network, but the entire thing was AT&T's idea in the first place, and simply wouldn't exist without AT&T's consistent support:
Does An Internet Infrastructure Taxonomy Help Or Hurt?
We've been running our Greenhouse discussion on content moderation at the infrastructure level for a bit now, and normally all of the posts for these discussions come from expert guest commentators. However, I'm going to add my voice to the collection here because there's one topic that I haven't seen covered, and which is important, because it comes up whenever I'm talking to people about content moderation at the infrastructure level: do we need a new taxonomy for internet infrastructure to better have this discussion?
The thinking here is that the traditional OSI model of the internet layers is somewhat outdated and not particularly relevant to discussions such as this one. Also, it's hellishly confusing, as is easily demonstrated by the fun Google box of "people also ask" on a search for "internet layers." Clearly, lots of people are confused.
Even just thinking about what counts as infrastructure can be confusing. One of my regular examples is Zoom, the video conferencing app that has become standard and required during the COVID pandemic: is that infrastructure? Is that edge? It has elements of both.
But the underlying concern in this entire discussion is that most of the debate around content moderation is about clear edge providers: the services that definitely touch the end users: Facebook, Twitter, YouTube, etc. And, as I noted in my opening piece, there is a real concern that, because the debate focuses on those companies, and there appears to be tremendous appetite for policy making and regulating those edge providers, any new regulations may fail to account for how they will also impact infrastructure providers, where the impact could be much more seismic.
Given all that, many people have suggested that a "new taxonomy" might be useful, to help "carve out" infrastructure services from any new regulations regarding moderation. It's not hard to understand a concept like "maybe this rule should apply to social media sites, but not to domain registrars," for example.
However, the dangers in building up such a taxonomy greatly outweigh any such benefits. First, as noted earlier, any new taxonomy is going to be fraught with difficult questions. It's not always clear what really is infrastructure these days. We've already discussed how financial intermediaries are, in effect, infrastructure for the internet these days -- and that's a very different participant than the traditional OSI model of internet layers contemplates. Same with advertising firms. And I've already mentioned Zoom as a company that clearly has an edge component, but feels more like it should be considered infrastructure. Part of that is just the nature of how the internet works, in which some of the layers are merged. Marc Andreessen famously noted that software eats the world, but the internet itself is subsuming more and more traditional infrastructure as well -- and that creates complications.
On top of that, this is an extremely dynamic world. Part of the reason why the OSI model feels obsolete is because it is. Things change, and they can change fairly rapidly on the internet. So any taxonomy might be obsolete by the time it's created, and that's extremely dangerous if the plan is to use it for classifying services for the purpose of regulation.
The final concern with such a taxonomy is simply that it seems likely to encourage regulatory approaches in places where it's not clear they're actually needed. If the intent of such a taxonomy is to help lawmakers write a law that only puts its focus on the edge players, that's unlikely to remain the case. Once such a mapping is in place, the temptation (instead) will simply be to create new rules for each layer of the new stack.
A new taxonomy may sound good as a first pass, but it will inevitably create more problems than it solves.
Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here).
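As a quick illustration of the classification problem described above, here's a toy sketch. All labels and assignments are hypothetical judgment calls, not a proposed taxonomy:

```python
# Illustrative only: a toy attempt at the kind of infrastructure/edge taxonomy
# discussed above. The "one service, one layer" assumption breaks immediately.
SERVICE_LAYERS = {
    "domain registrar":   {"infrastructure"},
    "CDN":                {"infrastructure"},
    "payment processor":  {"infrastructure"},          # not in the OSI model at all
    "Zoom":               {"infrastructure", "edge"},  # elements of both, per the post
    "social media site":  {"edge"},
}

# Any service assigned to more than one layer defeats a layer-keyed rule.
ambiguous = [name for name, layers in SERVICE_LAYERS.items() if len(layers) > 1]
print("Services a layer-based rule can't cleanly classify:", ambiguous)
```

Any regulation keyed to such a mapping inherits every one of these ambiguities, which is the core objection above.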
Court Documents Show The FBI Used A Whole Lot Of Geofence Warrants To Track Down January 6th Insurrectionists
The new hotness for law enforcement isn't all that new. But it is still very hot: a better way to amass a list of suspects when you don't have any particular suspect in mind. Aiding and abetting in the new bulk collection is Google, which has a collection of location info plenty of law enforcement agencies find useful.
There's very little governing this collection or its access by government agencies. Most seem to be relying on the Third Party Doctrine to save their searches, which may use warrants but do not use probable cause beyond the probability that Google houses the location data they're seeking.
Law enforcement agencies at both the local and federal levels have availed themselves of this data, using "geofences" to contain the location data sought by so-called "reverse warrants." Once they have the data points, investigators try to determine who the most likely suspect(s) is. That becomes a bigger problem when the area contained in the geofence includes hundreds or thousands of people who did not commit the crime being investigated.
These warrants have been used to seek suspects in incidents ranging from arson to... um... protesting police violence. They've also been used to track down suspects alleged to have raided the US Capitol building on January 6, 2021 -- the day some Trump supporters decided (with the support of several prominent Republicans, including the recently de-elected president) that they could change the outcome of a national election if they committed a bunch of federal crimes.
Plenty of those suspects outed themselves on social media. For everyone else, there's reverse warrants, as reported by Wired. (h/t Michael Vario)
Daily Deal: The All-in-One Microsoft, Cybersecurity, And Python Exam Prep Training Bundle
The All-in-One Microsoft, Cybersecurity, and Python Exam Prep Training Bundle has 6 courses to help you learn the skills you need to succeed as a tech professional. The courses cover Python 3, software development, ITIL, cybersecurity, and GDPR compliance. Exams covered include: MTA 98-381, MTA 98-361, the ITIL Foundation v4 exam, the PCEP Certified Entry-Level Python Programmer Certification Exam, the CompTIA CySA+ Certification Exam, and the GDPR CIPP/E Certification Exam. It's on sale for $29.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
If Your Takeaway From Facebook's Whistleblower Is That Section 230 Needs Reform, You Just Got Played By Facebook
Here we go again. Yesterday, the Facebook whistleblower, Frances Haugen, testified before the Senate Commerce Committee. Frankly, she came across as pretty credible and thoughtful, even if I completely disagree with some of her suggestions. I think she's correct about some of the problems she witnessed, and the misalignment of incentives facing Facebook's senior management. However, her understanding of the possible approaches to deal with it is, unfortunately, a mixed bag.
Of course, for the Senators in the hearing, it became the expected exercise in confirmation bias, in which they each insisted that their plan to fix the internet would solve the problems Haugen detailed. And, not surprisingly, many of them insisted that Section 230 was the issue, and that if you magically changed 230 and made companies more liable, they'd somehow be better. Leaving aside that there is zero evidence to support this (and plenty of evidence to suggest the opposite is true), the most telling bit in all of this is that if you think changing Section 230 is the answer, Facebook agrees with you. It's exactly what Facebook wants. See the smarmy, tone-deaf, self-serving statement the company put out in response to the hearing:
FCC Finally Gets Off Its Ass To Combat SIM Hijacking
So for years we've talked about the growing threat of SIM hijacking, which involves an attacker covertly porting out your phone number from right underneath your nose (sometimes with the help of bribed or conned wireless carrier employees). Once they have your phone identity, they have access to most of your personal accounts secured by two-factor SMS authentication, opening the door to the theft of social media accounts or the draining of your cryptocurrency account. If you're really unlucky, the hackers will harass the hell out of you in a bid to extort you even further.
It's a huge mess, and both the criminal complaints -- and lawsuits against wireless carriers for not doing more to protect their users -- have been piling up for several years. Senators like Ron Wyden have spent years sending letters to the FCC asking the nation's top telecom regulator to, you know, do something. After years of inaction, the agency appears to have gotten the message, announcing a new plan to at least consider some new rules to make SIM hijacking more difficult.
Most of the proposal involves nudging wireless carriers to do things they should have done long ago. Such as updating FCC Customer Proprietary Network Information (CPNI) and Local Number Portability rules to require that wireless carriers adopt secure methods of confirming the customer's identity before porting out a customer's phone number to a new device or carrier (duh). As well as requiring that wireless carriers immediately notify you when somebody tries to port out your phone number without your permission (double duh):
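The reason SIM hijacking is so effective is that SMS ties the second factor to the phone number itself, which carriers can reassign. App-based one-time codes avoid that: the secret lives on the device, not the number. As a rough illustration, here's a minimal TOTP (RFC 6238) sketch using only the Python standard library; the hardcoded secret is a placeholder for demonstration, not real credentials:

```python
# Minimal TOTP sketch (RFC 6238). The code depends on a locally stored secret,
# not a phone number, so porting out the number doesn't transfer the factor.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret; a real one is generated at enrollment (often via QR code).
print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app would display
```

None of this excuses carriers from locking down number ports, but it's part of why security folks keep steering people away from SMS-based two-factor authentication.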
Belgian Government Wants To Add Encryption Backdoors To Its Already-Terrible Data Retention Law
Earlier this year, a data retention law passed by the Belgian government was overturned by the country's Constitutional Court. The law mandated retention of metadata on all calls and texts by residents for one year, just in case the government ever decided it wanted access to it. Acting on guidance from the Court of Justice of the European Union (CJEU) on laws mandating indiscriminate data retention elsewhere in the Union, the Constitutional Court struck the law down, finding it was neither justified nor legal under CJEU precedent or under Belgium's own Constitution.
Tone Deaf Facebook Did Cripple VR Headsets When Borked BGP Routing Took Down All Of Facebook
For over a year now, we have discussed Facebook's decision to require users of Oculus VR headsets to have active Facebook accounts linked to the devices in order for them to work properly. This decision came despite all the noise made by Oculus in 2014, when Facebook acquired the VR company, insisting that this very specific thing would not occur. Karl Bode, at the time, pointed out a number of potential issues this plan could cause, noting specifically that users could find their Oculus hardware broken for reasons not of their own making.
California Cities Experimenting With Civilian Responses To Mental Health Crisis Calls
More cities are adopting an approach to mental health emergency calls that steers calls away from police officers and towards professionals who are trained to respond to mental health crises with something other than force deployment.
Early results have shown promise in cities like Denver, Colorado and New York City, New York. These response teams are not only better suited to handling mental health calls, but they're less expensive than sending cops and/or needlessly involving the carceral system. Law enforcement agencies command outsized portions of city budgets. Shifting small portions of these budgets to alternatives like these makes better use of these funds, providing residents with options that are far more effective -- and cost-effective -- than the usual method of sending more expensive government employees to respond to problems they're ill-equipped to handle.
A couple of cities in California are experimenting with mental health response teams. The teams in use in Sacramento and Oakland were formed by residents in response to the tragic police killing of a young man suffering from schizoaffective disorder.
Facebook's Downtime And Why Protocols Are More Resilient Than Centralized Platforms
As you know by now, much of the tech news cycle yesterday was dominated by the fact that Facebook appeared to erase itself from the internet via a botched BGP configuration. Hilarity ensued -- including my favorite bit about how Facebook's office badges weren't working because they relied on connecting to a Facebook server that could no longer be found (also, in borking their own BGP, Facebook basically knocked out their own ability to fix it until they could get the right people who knew what to do to have physical access to the routers).
But in talking to people who were upset about being cut off from Facebook, Instagram, WhatsApp, or Facebook Messenger, it was a good moment to remind people that another benefit of a "protocols, not platforms" approach to these things is that it's way more resilient. If you're using Messenger and it's down, but you can easily swap in a different tool and continue to communicate, that's a much better, more resilient solution than relying on Facebook not to mess up. And that's on top of all the other benefits I laid out in my paper.
In fact, a protocols approach also creates more incentives for better uptime from services, since continually screwing up for extended periods of time doesn't just mean losing ad revenue for a few hours; it is much more likely to lead people to permanently switch to an alternative provider.
Indeed, a key part of the value of the internet, originally, was in its resiliency of being highly distributed, rather than centralized, and how it could continue to work well if one part fell off the network. The increasing centralization/silo-ization of the internet has taken away much of that benefit. So, if anything, yesterday's mess should be seen as another reason to look more closely at a protocols-based approach to building new internet services.
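The failure mode itself is easy to picture: once Facebook's BGP routes were withdrawn, its authoritative nameservers became unreachable, so facebook.com simply stopped resolving. Here's a minimal sketch of the kind of lookup that was failing everywhere that day (standard library only; the domain names are just examples):

```python
# Quick resolution check: during the outage, lookups for facebook.com failed
# because the authoritative nameservers were unreachable after the BGP withdrawal.
import socket

def resolves(hostname: str) -> bool:
    """Return True if the hostname currently resolves to at least one address."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

for host in ("facebook.com", "example.com"):
    status = "resolves" if resolves(host) else "does NOT resolve"
    print(f"{host}: {status}")
```

This is also why Facebook's own internal tools broke: anything that needed to look up a Facebook hostname hit the same dead end.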
OnlyFans Isn't The First Site To Face Moderation Pressure From Financial Intermediaries, And It Won't Be The Last
In August, OnlyFans made the stunning announcement that it planned to ban sexually explicit content from its service. The site, which allows creators to post exclusive content and interact directly with subscribers, made its name as a host for sexually-oriented content. For a profitable website to announce a ban of the very content that helped establish it was surprising and dismaying to the sex workers and other creators who make a living on the site.
OnlyFans is hardly the first site to face financial pressure related to the content it publishes. Advertiser pressure has been a hallmark of the publishing industry, whether in shaping what news is reported and published, or withdrawing support when a television series breaks new societal ground.
Publishers across different kinds of media have historically been vulnerable to the demands of their financial supporters when it comes to restricting the kinds of media they distribute. And, with online advertising now accounting for the majority of total advertising spending in the U.S., we have seen advertisers recognize their power to influence how major social media sites moderate, through the organization of campaigns like Stop Hate for Profit, or the development of "brand safety" standards for acceptable content.
But OnlyFans wasn't bowing to advertiser demands; instead, it says it faced an even more fundamental kind of pressure coming from its financial intermediaries. OnlyFans explained in a statement that it planned to ban explicit content "to comply with the requests of our banking partners and payout providers."
Financial intermediaries are key actors in the online content hosting ecosystem. The websites and apps that host people's speech depend on banks, credit card companies, and payment processors to do everything from buying domain names and renting server space to paying their engineers and content moderators. Financial intermediaries are also essential for receiving payments from advertisers and ad networks, processing purchases, and enabling user subscriptions. Losing access to a bank account, or getting dropped by a payment processor, can make it impossible for a site to make money or pay its debts, and can result in the site getting knocked offline completely.
This makes financial intermediaries obvious leverage points for censorship, including through government pressure. Government officials may target financial intermediaries with threats of legal action or reputational harm, as a way of pursuing censorship of speech that they cannot actually punish under the law.
In 2010, for example, U.S. Congressmen Joe Lieberman and Peter King reportedly pressured MasterCard in private to stop processing payments for Wikileaks; this came alongside a very public campaign of censure that Lieberman was conducting against the site. Ultimately, Wikileaks lost its access to so many banks, credit card companies, and payment processors that it had to temporarily suspend its operations; it now accepts donations through various cryptocurrencies or via donations made to the Wau Holland Foundation (which has led to pressure on the Foundation in turn).
Credit card companies were also the target of the 2015 campaign by Sheriff Tom Dart to shutter Backpage.com. Dart had previously pursued charges against another classified-ads site, Craigslist, for solicitation of prostitution, based on the content of some ads posted by users, and had been told unequivocally by a district court that Section 230 barred such a prosecution.
In pursuing Backpage over similar concerns about enabling prostitution, Dart took a different tack: he sent letters to Visa and MasterCard demanding that they "cease and desist" their business relationships with Backpage, implying that the companies could face civil and criminal charges. Dart also threatened to hold a damning press conference if the credit card companies did not sever their ties with the website.
The credit card companies complied, and terminated services to Backpage. Backpage challenged Dart's acts as unconstitutional government coercion and censorship in violation of the First Amendment. (CDT, EFF, and the Association for Alternative Newsmedia filed an amicus brief in support of Backpage's First Amendment arguments in that case.) The Seventh Circuit agreed and ordered Dart to cease his unconstitutional pressure campaign.
But this did not result in a return to the status quo, as the credit card companies declined to restore service to Backpage, showing how long-lasting the effects of such pressure can be. Backpage is now offline — but not because of Dart — because the federal government seized the site as part of its prosecution of several Backpage executives, a trial that was declared a mistrial earlier this month.
Since that time, the pressures on payment processors and other financial intermediaries have only increased. FOSTA-SESTA, for example, created a vague new federal crime of "facilitation of prostitution" that has rendered many intermediaries uncertain about whether they face legal risk in association with content related to sex work. After Congress passed FOSTA in 2018, Reddit and Craigslist shuttered portions of their sites, multiple sites devoted to harm reduction went offline, and sites like Instagram, Patreon, Tumblr, and Twitch have taken increasingly strict stances against nudity and sexual content.
So while advertisers may be largely motivated by commercial concerns and brand reputation, financial intermediaries such as banks and payment processors are also driven by concerns over legal risk when they try to limit what kinds of speech and speakers are accessible online.
Financial institutions, in general, are highly regulated. Banks, for example, face obligations such as the "Customer Due Diligence" rule in the US, which requires them to verify the identity of account holders and develop a risk profile of their business. Concerns over legal risk can cause financial intermediaries to employ ham-handed automated screening techniques that lead to absurd outcomes, such as when PayPal canceled the account of News Media Canada in 2017 for promoting the story "Syrian Family Adopts To New Life", or when Venmo (which is owned by PayPal) reportedly blocked donations to the Palestine Children's Relief Fund in May 2021.
As pressures relating to online content and UGC-related businesses grow, some financial intermediaries are taking a more systemic approach to evaluating the risk that certain kinds of content pose to their own businesses. In this, financial intermediaries are mirroring a trend seen in content regulation debates more generally, on both sides of the Atlantic.
MasterCard, for example, announced changes in April to its policy for processing payments related to adult entertainment. Starting October 15, MasterCard will require that banks connecting merchants to the MasterCard network certify that those merchants have processes in place to maintain age and consent documentation for the participants in sexually explicit content, along with specific "content control measures." These include pre-publication review of content and a complaint procedure that can address reports of illegal or nonconsensual content within seven days, including a process by which people depicted in the content can request its removal (which MasterCard confusingly calls an "appeals" process). In other words, MasterCard is using its position as the second largest credit card network in the US to require banks to vet website operators' content moderation processes — and potentially re-shaping the online adult content industry at the same time.
Financial intermediaries are integral to online content creation and hosting, and their actions to censor specific content or enact PACT Act-style systemic oversight of content moderation processes should bring greater scrutiny on their role in the online speech ecosystem.
As discussed above, these intermediaries are an attractive target for government actors seeking to censor surreptitiously and extralegally, and they may feel compelled to act cautiously if their legal obligations and potential liability are not clear. (For the history of this issue in the copyright and trademark field, see Annemarie Bridy's 2015 article, Internet Payment Blockades.) Moreover, financial intermediaries are often several steps removed from the speech at issue and may not have a direct relationship with the speaker, which can make them even less likely to defend users' speech interests when faced with legal or reputational risk.
As is the case throughout the stack, we need more information from financial intermediaries about how they are exercising discretion over others' speech. CDT joined EFF and twenty other human rights organizations in a recent letter to PayPal and Venmo, calling on those payment processors to publish regular transparency reports that disclose government demands for user data and account closures, as well as the companies' own Terms of Service enforcement actions against account holders.
Account holders also need to receive meaningful notice when their accounts are closed and be provided the opportunity to appeal those decisions — something notably missing from MasterCard's guidelines for what banks should require of website operators.
Ultimately, OnlyFans reversed course on its porn ban and announced that it had "secured assurances necessary to support [their] diverse creator community." (It's not clear if those assurances came from existing payment processors or if OnlyFans has found new financial intermediaries.) But as payment processors, banks, and credit card companies continue to confront questions about their role in enabling access to speech online, they should learn from other intermediaries' experience: once an intermediary starts making judgments about what lawful speech it will and won't support, the demands on it to exercise that judgment only increase, and the scale of human behavior and expression enabled by the Internet is unimaginably huge. The ratchet of content moderation expectations only turns one way.
Emma Llansó is the Director of CDT's Free Expression Project, where she works to promote law and policy that support Internet users' free expression rights in the United States, Europe, and around the world.
Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here).
Techdirt Podcast Episode 300: How Our Views Have Changed Over 300 Episodes
Last week, we celebrated 300 episodes of the Techdirt Podcast with a live stream, for which we brought back original co-hosts Dennis Yang and Hersh Reddy. You can watch the stream on YouTube, but now it's time to release the episode as normal! The subject was simple, but led the conversation in all kinds of interesting directions: how have our views on technology issues changed and evolved since the podcast started?
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Investigation: CBP Targeted Journalists, Illegally Shared Info With Mexico, And Attempted To Cover It All Up
A couple of years ago, documents surfaced that showed the CBP was placing journalists, activists, and immigration lawyers on some form of a watchlist, which would allow agents and officers to subject these targets to additional scrutiny when they crossed the border. There were obvious civil liberties implications, ones the CBP seemed largely unconcerned about.
The targeting appeared to be related to the "migrant caravan" that reached the border late in 2018 and performed a mass "incursion" on January 1, 2019. The CBP claimed it had only targeted those people because they had been involved in "violence" near the border late in 2018. It refused to explain what it meant by the word "involved" or how that was enough to ignore First Amendment protections. Nor did it explain why it was deliberately targeting US citizens not suspected to have been involved in any criminal activity. It also did not explain why it shared information on these targets with the government of Mexico, which then assisted in spying on this group of lawyers, journalists, and activists.
The DHS Inspector General opened an investigation [PDF] of these actions. And it has arrived at the conclusion that this all looks pretty bad, but wasn't actually illegal. Read into that what you will.
A New Hope For Moderation And Its Discontents?
In his post kicking off this series, Mike notes that, “the biggest concern with moving moderation decisions down the stack is that most infrastructure players only have a sledge hammer to deal with these questions, rather than a scalpel.” And, I agree with Jonathan Zittrain and other contributors that governments, activists, and others will increasingly reach down the stack to push for takedowns—and will probably get them.So, should we expect more blunt force infra layer takedowns or will infrastructure companies invest in more precise moderation tools? Which one is even worse?Given the choice to build infrastructure now, would you start with a scalpel? How about many scalpels? Or maybe something less severe but distributed and transparent, like clear plastic spoons everywhere! Will the moderation hurt less if we’re all in it together? With the distributed web, we may get to ask all these questions, and have a chance to make things better (or worse). How?Let me backup a moment for some mostly accurate natural history. In the 90s, to vastly oversimplify, there was web 1.0: static, server-side pages that arose, more manual than you'd like sometimes, maybe not so easy to search or monetize at scale, but fundamentally decentralized and open. We had webrings and manually curated search lists. Listening to Nirvana in my dorm room I read John Perry Barlow’s announcement that "We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different," in a green IRC window and believed.Ok, not every feature was that simple or open or decentralized. The specter of content moderation haunted the Internet from early days of email and bulletin boards. In 1978, a marketer for DEC sent out the first unsolicited commercial message on ARPANET and a few hundred people told him to knock it off, Gary! Voila, community moderation.Service providers like AOL and Prodigy offered portals through which users accessed the web and associated chat rooms, and the need to protect the brand led to predictable interventions. There's a Rosetta Stone of AOL content moderation guidelines from 1994 floating around to remind us that as long as there have been people expressing themselves online, there have been other people doing their best to create workable rule sets to govern that expression and endlessly failing in comic and tragic ways (“‘F--- you’ is vulgar” but ‘my *** hurts’ is ok”).Back in the Lascaux Cave there was probably someone identifying naughty animal parts and sneaking over with charcoal to darken them out, for the community, and storytellers who blamed all the community’s ills on that person.And then after the new millenium, little by little and then all at once, came Web 2.0—the Social Web. Javascript frameworks, personalization, everyone a creator and consumer within (not really that open) structures we now call "Platforms" (arguably even less open when using their proprietary mobile rather than web applications). It became much easier for anyone to create, connect, communicate, and distribute expression online without having to design or host their own pages. We got more efficient at tracking and ad targeting and using those algorithms to serve you things similar to the other things you liked.We all started saying a lot of stuff and never really stopped. If you're a fan of expression in general, and especially of people who previously didn't have great access to distribution channels expressing themselves more, that's a win. 
But let's be honest: 500 million tweets a day? We've been on an expression bender for years. And that means companies spending billions, and tens of thousands of enablers—paid and unpaid—supporting our speech bender. Are people happy with the moderation we're getting? Generally not. Try running a platform. The moderation is terrible and the portions are so large!Who’s asking for moderation? Virtually everyone in different ways. Governments want illegal content (CSAM, terrorist content) restricted on behalf of the people, and some also want harmful but legal content restricted in ways that are still unclear, also for the people. Many want harmful content restricted, which means different things depending on which people, which place, which culture, which content, which coffee roast you had this morning. Civil society groups generally want content restricted related to their areas of expertise and concern (except EFF, who will party like it's 1999 forever I hope).There are lots of types of expression where at least some people think moderation is appropriate, for different reasons; misinformation is different from doxxing is different from harassment is different from copyright infringement is different from spam. Often, the same team deals with election protection and kids eating Tide Pods (and does both surprisingly well, considering). There’s a lot to moderate and lots of mutually inconsistent demand to do it coming from every direction.Ok, so let’s make a better internet! Web 3 is happening and it is good. More specifically, as Chris Dixon recently put it, “We are now at the beginning of the Web 3 era, which combines the decentralized, community-governed ethos of Web 1 with the advanced, modern functionality of Web 2.” Don’t forget the blockchain. Assume that over the next few years, Web 3 infrastructure gets built out and flourishes—projects like Arweave, Filecoin, Polkadot, Sia, and Storj. And applications eventually proliferate; tools for expression, creativity, communication, all the things humans do online, all built in ways that embody the values of the DWeb.But wait, the social web experiment of the past 15 years led us to build multi-billion dollar institutions within companies aimed at mitigating harms (to individuals, groups, societies, cultural values) associated with online expression and conduct, and increasingly, complying with new regulations. Private courts. Private Supreme Courts. Teams for safeguarding public health and democratic elections. Tens of thousands poring over photos of nipples, asking, where do we draw the line? Are we going to do all that again? One tempting answer is, let’s not. Let’s fire all the moderators. What’s the worst that could happen?Another way of asking this question is -- what do we mean when we talk about “censorship resistant” distributed technologies? This has been an element of the DWeb since early days but it’s not very clear (to me at least) how resistant, which censorship, and in what ways.My hunch is that censorship resistance—in the purest sense of defaulting to immutable content with no possible later interventions affecting its availability—is probably not realistic in light of how people and governments currently respond to Web 2.0. The likely outcome is probably quick escalation to intense conflict with the majority of governments.And even for people who still favor a marketplace-of-ideas-grounded “rights” framework, I think they know better than to argue that the cure for CSAM is more speech. 
There will either have to be ways of intervening, or the DWeb is going to be a bumpy ride. But “censorship resistant” in the sense of “how do we build a system where it is not governments, or a small number of powerful, centralized companies, that control the levers at the important choke points for expression?” Now we're talking. Or as Paul Frazee of Beaker Browser and other distributed projects put it: “The question isn't ‘how do we make moderation impossible?’ The question is, how do we make moderation trustworthy?”

So, when it comes to expression, and by extension content moderation, how exactly are we going to do better? What could content moderation look like if done consistent with the spirit, principles, and architecture of Web 3? What principles can we look to as a guide?

I think the broad principles will come as no surprise to anyone following this space over the past few years (and they are not so different from those outlined in Corynne McSherry's post). They include notice, transparency, due process, the availability of multiple venues for expression, and robust competition between options on many axes—including privacy and community norms, as well as the ability of users to structure their own experience as much as possible.

Here are some recurring themes:
Daily Deal: The JavaScript DOM Game Developer Bundle
The JavaScript DOM Game Developer Bundle has 8 courses to help you master coding fundamentals. Courses cover the JavaScript DOM, coding, HTML5 Canvas, and more. You'll learn how to create your own fun, interactive games. It's on sale for $30. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Rethinking Facebook: We Need To Make Sure That 'Good For The World' Is More Important Than 'Good For Facebook'
I'm sure by now most of you have either seen or read about Facebook whistleblower Frances Haugen's appearance on 60 Minutes, discussing in detail the many problems she saw within Facebook. I'm always a little skeptical of 60 Minutes these days, as the show has an unfortunately long history of misrepresenting things about the internet; similarly, a single person's claims about what's happening within a company are not always the most accurate. That said, what Haugen has to say is still eye-opening, and certainly concerning.

The key takeaway that many seem to be highlighting from the interview is Haugen noting that Facebook knows damn well that making the site better for users will make Facebook less money.
Company That Handles Billions Of Text Messages Quietly Admits It Was Hacked Years Ago
We've noted for a long time that the wireless industry is prone to being fairly lax on security and consumer privacy. One example is the recent rabbit hole of a scandal related to the industry's treatment of user location data, which carriers have long sold to a wide array of middlemen without much thought as to how this data could be (and routinely is) abused. Another example is the industry's refusal to address the longstanding flaws in Signaling System 7 (SS7, or Common Channel Signaling System 7 in the US), a series of protocols hackers can exploit to track user location, dodge encryption, and even record private conversations.

This week, a wireless industry middleman that handles billions of texts every year acknowledged that its security isn't much to write home about either. A company by the name of Syniverse revealed, in a September SEC filing first noted by Motherboard, that it had been the target of a major attack. The filing says an "individual or organization" gained unauthorized access to the company's databases "on several occasions," which in turn provided the intruder repeated access to the company's Electronic Data Transfer (EDT) environment, compromising 235 of its corporate telecom clients. The scope of the potentially exposed data is, well, massive:
Hacked Data Exposes Law Enforcement Officers Who Joined Far-Right Oath Keepers Group
Some more unsettling news about law enforcement's close relationship to (or at least professional tolerance of) far-right groups linked to the January 6th raid of the Capitol building has come to light, thanks to transparency activists Distributed Denial of Secrets.

Email accounts linked to several key members of the Oath Keepers -- four of whom are currently facing charges for their participation in the attack on the Capitol -- have been hacked, exposing communications between the Oath Keepers and law enforcement officers seeking to join the group.
Disney Defeats Lawsuit Brought By Company Owning Evel Knievel's Rights Over 'Toy Story 4' Character
Roughly a year ago, we discussed a lawsuit brought by K&K Promotions, the company that holds the trademark and publicity rights for the now-deceased stuntman Evel Knievel, against Disney. At issue was a character in Toy Story 4 named Duke Caboom, a toy version of a motorcycle stuntman that certainly had elements of homage to Knievel. But not just Knievel, which is important. Instead, a la several lawsuits Rockstar Games has faced over characters appearing in the Grand Theft Auto series, Caboom was an amalgam of retro-era stuntmen, not a faithful depiction of any one of them, including Knievel. And, while some who worked on the film even mentioned that Knievel was one of the inspiration points for the character, they also noted that Knievel's routine, garb, and mannerisms were hardly unique for stuntmen in that era. Despite that, K&K insisted that Caboom was a clear ripoff and appropriation of Knievel.

Well, Disney moved to dismiss the case, claiming essentially the above: Duke Caboom is based on a compilation of retro-era stuntmen. And the court has now ruled, siding with Disney and dismissing the case.
Tesla 'Self-Driving' NDA Hopes To Hide The Reality Of An Unfinished Product
Hardly a day goes by without Tesla finding itself in the news for all the wrong reasons. Like last week, when Texas police sued Tesla because one of the company's vehicles, traveling 70 miles per hour in self-driving mode, failed to function properly and injured five officers.
Reminder: Our Techdirt Tech Policy Greenhouse Live Workshop Is Happening This Wednesday!
Over the last few weeks we've been running pieces for our latest Techdirt Greenhouse discussion on questions around content moderation at the infrastructure layer. This time we're also doing a live workshop event to go with it, in which some of the authors of the pieces will present, leading into "table discussions" among attendees to explore some of the tradeoffs and challenges regarding content moderation. This will be happening this Wednesday, October 6th, from 9am PT to 12pm PT. If you're interested in taking part, please register to attend.

We look forward to seeing you there!
Right-Wing Commentator Dan Bongino Runs Into Florida Anti-SLAPP Law, Now Owes Daily Beast $32,000 In Legal Fees
Venue selection matters, as right-wing political commentator/defamation lawsuit loser Dan Bongino is now discovering. He sued the Daily Beast over an article about his apparent expulsion from the National Rifle Association's video channel, NRATV. After trying (and failing) to get a comment from Bongino about this ouster, reporter Lachlan Markay published his article, updating it later when Bongino decided he did actually want to talk about it.
Infrastructure And Content Moderation: Challenges And Opportunities
The signs were clear right from the start: at some point, content moderation would inevitably move beyond user-generated platforms down to the infrastructure—the place where services operate the heavy machinery of the Internet and without which user-facing services cannot function. Ever since the often-forgotten 2010 incident in which Amazon stopped hosting Wikileaks after US political pressure, there has been a steady uneasiness regarding the role infrastructure providers could end up playing in the future of content moderation.

A glimpse of what this would look like came in 2017, when companies like Cloudflare and GoDaddy took affirmative action against content they considered problematic for their business models, in this case white supremacist websites that had been the subject of massive public opprobrium. Since then, that future has become the present reality, as the list of infrastructure companies performing content moderation functions keeps growing.

Content moderation has two inherent qualities that provide important context. First, content moderation is complex in real-world process design and implementation. There are a host of conflicting rights, diverse procedural norms, and competing interests that come into play every time content is posted on the Internet; each case is unique and, on some level, should be treated as such. Second, content moderation is messy because the world is messy: the global nature of the Internet, economies of scale, societal realities, and cultural differences create a multi-layered set of considerations that are difficult to reconcile.

The bright spot in all this messiness and complexity is the hope of due process and the rule of law. The theory is that, in healthy and competitive markets, users have choice, and it therefore becomes more difficult for any mistakes to scale. So, if a user's post gets deleted on one platform, the user should have the option of posting it someplace else. Of course, such markets are difficult to achieve, and the current Internet market is certainly not in this category. But the point here is that it is one thing to have one of your postings removed from Facebook, and quite another to go completely offline because Cloudflare stops providing you its services. The stakes are completely different.

For a long time, infrastructure providers were smart enough to stay out of the content business. The argument was that the actors responsible for the pipes of the Internet should not concern themselves with the kind of water that runs through them. Their agnosticism was encouraged because their main focus was to provide other services, including security, network reliability, and performance. However, as the Internet evolved, so did the infrastructure providers' relationship with content.

In the early days of content moderation, what constituted infrastructure was more discernible and structured. People would usually refer to the Open Systems Interconnection (OSI) model as a useful analogy, especially with policy makers who were trying to identify the roles and responsibilities various companies held in the Internet ecosystem. The Internet of today, however, is very different. The layers of the Internet are no longer cleanly distinguishable and, in many cases, participating actors are not operating at just the infrastructure layer or just the application layer.
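Since the discussion below leans on "layer 7" as shorthand, here is a quick reference sketch of that OSI model in Python. The layer names are the standard ones; the parenthetical examples are an informal gloss, not part of the model itself:

    # The classic OSI reference model, bottom to top. The protocol examples
    # are illustrative, and real Internet stacks never mapped onto it cleanly.
    OSI_LAYERS = {
        1: ("Physical", "cables, fiber, radio"),
        2: ("Data Link", "Ethernet and Wi-Fi framing"),
        3: ("Network", "IP routing"),
        4: ("Transport", "TCP, UDP"),
        5: ("Session", "connection management"),
        6: ("Presentation", "encoding and encryption concerns"),
        7: ("Application", "HTTP, DNS: where hosts, CDNs, and platforms now crowd together"),
    }

    for number, (name, examples) in OSI_LAYERS.items():
        print(f"Layer {number}: {name} ({examples})")

Keep that bottom-to-top picture in mind; the point is that the neat separation it implies has collapsed.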
At the same time, as applications on the Internet were gaining in popularity and use, innovation started moving upstream. "Infrastructure" is now nested on top of other "infrastructure," all within just layer 7 of the OSI stack. Things are not as clear-cut as they once were.

In some ways, then, we should not be surprised that content moderation conversations are gradually moving downstream. A cloud provider that supports a host of different websites, platforms, news outlets, or businesses will inevitably have to deal with issues of content. A content delivery network (CDN) will unquestionably face, at some point, the moral dilemma of providing its services to businesses that walk a tightrope with harmful or even illegal content. It really comes down to a simple equation: if user-generated content platforms don't do their job, infrastructure providers will have to do it for them. And they do. Increasingly often.

If this is the reality, the question becomes: what is the best way for infrastructure providers to moderate, considering current content moderation practices, the significant chilling effects, and the often-missed trade-offs?

If we are to follow the "framework, tools, principles" triad, we should be mindful not to reinvent any existing ecosystem. Content moderation is not new and, over the years, a combination of laws and self-regulatory norms has ensured a relatively consistent, predictable, and stable environment—at least most of the time. Section 230 of the CDA in the US, the eCommerce Directive in Europe, Marco Civil in Brazil, and other laws around the world have succeeded in creating a space where users and businesses could manage their affairs effectively and know that judicial authorities would treat their cases equally.

For content moderation at the infrastructure level, a framework based on certainty and consistency is even more of a priority. Legal theory instructs that lack of consistency can diminish the development of norms or undermine the way existing ones manifest. In a similar vein, lack of certainty means actors cannot organize their affairs in a way that complies with the law. For infrastructure providers that support the basic, day-to-day functions of the Internet, such a framework is indispensable.

I often say that the Internet is not a monolith. This is not only to demonstrate that the Internet was never meant to perform one single thing, but also to show the importance of designing a legal framework that reflects this. When we talk about predictability and certainty, we must be conscious of putting in place requirements of clarity, stability, and intelligibility, so that participating actors can make calculated and informed decisions about the legal consequences of their actions. That's what made Section 230 a success for more than two decades.

Frameworks without appropriate tools to implement and assess them, however, mean little. Tools are important because they can help maximize the benefits of processes, ultimately increasing flexibility, reducing complexity, and ensuring clarity. Content moderation has consistently suffered from a lack of tools that could clearly exhibit its effects. Think, for instance, of all the times content is taken down with no way to say what the true effect is on free speech and on users. In this context, we need to think of tools as things that would allow us to better understand the scale and chilling effects of content moderation at the infrastructure layer.
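To make that concrete, here is a minimal sketch of what one such tool might record. Everything about it is hypothetical: the class name, the fields, and the categories are illustrative assumptions, not any existing provider's schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ModerationAction:
        """One hypothetical transparency-log entry for an infrastructure-level
        moderation decision. Every field name here is an illustrative assumption."""
        provider: str        # a CDN, cloud host, registrar, payment processor, ...
        layer: str           # "hosting", "cdn", "dns", "payments", ...
        target: str          # the domain or resource acted against
        action: str          # "terminated", "suspended", "geo-blocked", ...
        basis: str           # court order, terms-of-service clause, voluntary policy
        requested_by: str    # government, rightsholder, internal trust & safety, ...
        affected_sites: int  # rough blast radius: how many sites lost service
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Aggregated across providers, records like these would let researchers
    # measure the scale and collateral damage of infrastructure-level takedowns
    # instead of guessing at the chilling effects.
    example = ModerationAction(
        provider="ExampleCDN", layer="cdn", target="example.org",
        action="terminated", basis="acceptable-use policy",
        requested_by="internal trust & safety", affected_sites=1,
    )
    print(example)

The particular schema matters less than the idea: without some shared, machine-readable record of who pulled which lever and why, the scale question stays unanswerable.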
Here is what I wrote about this last year:
There May Be A New Boss At The DOJ, But The Government Still Loves Its Indefinite Gag Orders
Despite the DOJ recently drawing heat for its targeting of journalists during internal leak investigations, a lot still hasn't changed about the way the feds handle demands for data. Over the past couple of decades, the DOJ and its components have been asking for and obtaining data from service providers, utilizing subpoenas and National Security Letters that come with indefinite gag orders attached.

These orders swear recipients like Microsoft and Google to secrecy, forbidding them from notifying targeted customers and users. (Even Techdirt has been hit with one.) Unlike regular search warrants, where the target is made aware of the rummaging by the physical presence of law enforcement officers, these warrants, subpoenas, and NSLs allow the government to go about its rummaging unnoticed.

Reforms to surveillance powers by the USA Freedom Act have at least forced the government to perform periodic reviews of ongoing gag orders. The Act has also given companies a way to challenge gag orders and demands for data, but that's only useful if the companies have some idea who is being targeted. As this report on the ongoing abuse of gag orders by Jay Greene and Drew Harwell for the Washington Post points out, it's not always clear who the government is seeking information about. (Alternative link here.)
Daily Deal: TREBLAB Z2 Bluetooth 5.0 Noise-Cancelling Headphones
The Z2 headphones earned their name because they feature twice the sound, twice the battery life, and twice the convenience of competing headphones. This updated version of the original Z2s comes with a new all-black design and Bluetooth 5.0. Packed with TREBLAB's most advanced Sound2.0 technology with aptX and T-Quiet active noise-cancellation, these headphones deliver goose bump-inducing audio while drowning out unwanted background noise. These headphones are on sale for $79. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
In Josh Hawley's World, People Should Be Able To Sue Facebook Both For Taking Down Stuff They Don't Like AND Leaving Up Stuff They Don't Like
Last year, Josh Hawley introduced one of his many, many pathetic attempts at changing Section 230. That bill, the "Limiting Section 230 Immunity to Good Samaritans Act," would create a private right of action allowing individuals to sue any social media company if they were unhappy that some of their content was removed, and to seek a payout. The obvious implication, as with a ton of bad-faith claims by populists who pretend to be "conservative," is that websites shouldn't do any moderation at all.

However, this week Hawley introduced another bill to attack Facebook and to create another private right of action against basically any website -- except this time the private right of action is for anyone who feels their "mental health" was harmed by content on that website. Contrary to what Hawley-loving propagandist rag "The Daily Caller" falsely claims, this bill doesn't actually "amend" Section 230; it simply uses the definition of an interactive computer service from 230 and introduces a weird new liability regime that is in total conflict with 230 (and with Hawley's previous bill -- but when you're culture warrioring and trying to be the face of the new insurrectionists, who has time for little things like consistency?). The Federal Big Tech Tort Act is a bunch of silly performative nonsense.

It used to be that Republicans were the party that was dead set against opening up new private rights of action and giving tort lawyers new ways to drag people and companies into court. No longer, I guess. Amusingly, Hawley's bill shares its DNA with Senator Amy Klobuchar's equally silly bill to hold social media companies liable for misinformation. The key part in the Hawley bill:
South Korean ISP Somehow Thinks Netflix Owes It Money Because Squid Game Is Popular
We've noted for a while how the world's telecom executives have a fairly entrenched entitlement mindset. As in, they often tend to jealously eye streaming and online ad revenues and assume they're inherently owed a cut just because those revenues at some point traveled over their networks. You saw this hubris at play in AT&T's claims that "big tech" gets a "free ride" on its networks, insisting that companies like Google should pay significant, additional troll tolls "just because" (the claim that triggered the entire net neutrality fight in the States).

AT&T pretty solidly established this entitlement mindset domestically, and I've watched it slowly be exported overseas. Like this week, when South Korean broadband provider SK Broadband sued Netflix simply because Netflix's new TV show, Squid Game, is popular. Basically, the lawsuit argues that because the show is so popular and is driving a surge in bandwidth consumption among South Koreans watching it, Netflix is somehow obligated to pay the ISP more money:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side is sumgai with a comment about the disastrous new bill regulating online commerce:
This Week In Techdirt History: September 26th - October 2nd
Five Years Ago

This week in 2016, we looked at how the internet of things was fueling an unprecedented rise in DDoS attacks, while the DHS was offering its unsolicited (and likely unhelpful) assistance in securing it, and we also learned more about the likely reason for the NSA's trove of hacking tools being discovered and published. The CFAA emerged at the center of a political dispute, the California Supreme Court agreed to hear an important Section 230 case, and the DOJ decided that copyright infringement could be grounds for deportation, while the RIAA was going around acting as though SOPA had passed, even though it didn't. Also, in an extremely silly move, four state AGs filed a lawsuit to block the IANA transition, which was quickly tossed out by a judge.

Ten Years Ago

This week in 2011, the Senate let the copyright lobby set up shop in the Senate building during the PROTECT IP debate, the House version of the bill added in a provision covering cyberlockers, and an "analyst" from Disney was cheerleading for the bill. Canadian politicians were pushing for their own terrible copyright reform law, while we looked at how the EU's copyright extension was harming classical music. Multiple countries were getting ready to sign ACTA on the weekend, until it turned out that some weren't actually going to do it, even though the US planned to use its signing statement to defend the unconstitutional aspects of the agreement. Meanwhile, Righthaven suffered another huge loss and continued trying to avoid paying legal fees, though it only succeeded in getting a brief reprieve.

Fifteen Years Ago

This week in 2006, the fight between Google and European newspapers continued with the papers trying to reinvent robots.txt, new companies were trying to find a way to charge money for social media, and we wondered if it was possible to see the actual FCC data on broadband penetration. Microsoft was going after the anonymous person who cracked its copy protection system, the MPAA was touting its bizarre use of DVD-sniffing dogs, and Hollywood was raising the stakes in its claims of damages from piracy. Meanwhile, a judge sadly agreed with the RIAA that Morpheus had induced infringement, while Limewire was hitting back hard against the RIAA with a lawsuit alleging antitrust violations and consumer fraud.
PS4 Battery Time-Keeping Time-Bomb Silently Patched By Sony; PS3 Consoles Still Waiting
Over the past several months, there have been a couple of stories that certainly had owners of Sony PlayStation 4 and PlayStation 3 consoles completely wigging out. First came Sony's announcement that it was going to shut down support for the PlayStation Store and PlayStation Network on those two consoles. This briefly freaked everyone out, the thinking being that digitally purchased games would be disappeared. Sony confirmed that wouldn't be the case, but there was still the question of game and art preservation, given that no new purchases would be allowed and that in-game purchases and DLC wouldn't be spared for those who bought them. As a result of the outcry, Sony reversed course for both consoles specifically for access to the PlayStation Store, nullifying the debate.

Except that immediately afterward came word of an issue with the internal batteries in PS3 and PS4 consoles and the way those consoles check in with the PlayStation Network (PSN) to allow users to play digital or physical game media. With the PSN still sunsetting on those consoles, a console with a dead battery would no longer be able to check in, which would essentially render the console, and all the games a user owned, worthless and unplayable.

But now that too has been corrected by Sony, albeit in a completely unannounced fashion.
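For the curious, the reported failure mode boils down to a clock-validation dependency, roughly like the sketch below. This is a conceptual reconstruction of what was reported, not Sony's actual logic; the function and parameter names are invented:

    # Conceptual sketch of the reported PS3/PS4 failure mode; names invented.
    def can_play_games(clock_battery_alive: bool, psn_reachable: bool) -> bool:
        if clock_battery_alive:
            # The console's internal clock is trusted, so game licenses
            # can be validated locally.
            return True
        # With a dead battery, the console reportedly must re-sync its clock
        # against PSN before it will validate licenses; no PSN, no games.
        return psn_reachable

    # A dead battery plus a sunset PSN would have bricked entire libraries:
    print(can_play_games(clock_battery_alive=False, psn_reachable=False))  # False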
Top Publishers Aim To Own The Entire Academic Research Publishing Stack; Here's How To Stop That Happening
Techdirt's coverage of open access -- the idea that the fruits of publicly-funded scholarship should be freely available to all -- shows that the results so far have been mixed. On the one hand, many journals have moved to an open access model. On the other, the overall subscription costs for academic institutions have not gone down, and neither have the excessive profit margins of academic publishers. Despite publishers' success in fending off this attempt to re-invent the way academic work is disseminated, they want more. In particular, they want more money and more power. In an important new paper, a group of researchers warn that companies now aim to own the entire academic publishing stack:
Tampa Bay PD's 'Crime-Free Housing' Program Disproportionately Targeted Black Residents, Did Nothing To Reduce Crime
It looks like landlords in Florida want to get back to things that made this country great: bigotry, segregation, and oppression. And look who's willing to pitch in! Why, it's that old friend of racists, local law enforcement. (h/t WarOnPrivacy)