Techdirt
Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-08-20 05:46
Daily Deal: Interactive Learn to Code Bundle
The Interactive Learn to Code Bundle has 9 courses designed to help you learn to code and to write programs. The courses cover SQL, JavaScript, jQuery, PHP, Python, Bootstrap, Java, and web design. Each concept is explained in depth, with simple tasks to help you cement your newly gained knowledge through hands-on experience. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The TikTok Oracle Grift: Insiders Admit They Went Hunting For A Tech Company The President Liked
Earlier this week we wrote about the absolute grift involved in the TikTok / Oracle deal. Contrary to the framing that this was Oracle "buying" TikTok to satisfy the President's unconstitutional demand that the Chinese company ByteDance sell TikTok to an American company, the story showed that this was just a hosting deal for Oracle's cloud service, which is way down the list of top cloud providers.

The end result was no actual sale (though the Treasury Department is still "reviewing" the deal), but a big contract for Oracle, and a bogus story in which the President can pretend he forced ByteDance to "sell" TikTok, even though it retains ownership of the company (there are some rumors that the hosting deal will include a small, and probably symbolic, equity stake for Oracle).

The other key point I noted in my article was that Oracle's executive leadership, starting with Larry Ellison but including CEO Safra Catz, has been cozying up to Trump and the White House ever since Trump became President. While much of Silicon Valley's executive teams have made it quite clear how uncomfortable they are with a Trump Presidency, Oracle... has done the opposite. And while I framed it as being convenient that things worked out this way, a report from the Wall Street Journal highlights how this was the grift from day one.
Josh Hawley Isn't 'Helping' When It Comes To TikTok
It's the dumb saga that only seems to get dumber. Earlier this week, we noted that Trump's dumb and arguably unconstitutional order banning TikTok had resulted in (surprise) Trump friend and Oracle boss Larry Ellison nabbing a cozy little partnership for his fledgling cloud hosting business. Granted, the deal itself does absolutely nothing outside of providing Oracle a major client. It's more cronyism and heist than serious adult policy, yet countless outlets still framed the entire thing as somehow meaningful, ethical, and based in good faith (it's none of those things).

Senator Josh Hawley, one of the biggest TikTok pearl clutchers in Congress, obviously didn't much like the deal. Hawley sent an open letter to Treasury Secretary Steve Mnuchin calling the deal "completely unacceptable" and demanding an outright ban:
Copyright Companies Want Memes That Are Legal In The EU Blocked Because They Now Admit Upload Filters Are 'Practically Unworkable'
The passage of the EU Copyright Directive last year represented one of the most disgraceful examples of successful lobbying and lying by the publishing, music, and film industries. In order to convince MEPs to vote for the highly controversial legislation, copyright companies and their political allies insisted repeatedly that the upload filters needed to implement Article 17 (originally Article 13) were optional, and that user rights would of course be respected online. But as Techdirt and many others warned at the time, this was untrue, as even the law's supporters admitted once it had been passed. Now that the EU member states are starting to implement the Directive, it is clear that there is no alternative to upload filters, and that freedom of speech will therefore be massively harmed by the new law. France has even gone so far as to ignore the requirement for the few user protections that the Copyright Directive graciously provides.

The EU Copyright Directive represents an almost total victory for copyright maximalists, and a huge defeat for ordinary users of the Internet in the EU. But if there is one thing that we can be sure of, it's that the copyright industries are never satisfied. Despite the massive gains already enshrined in the Directive, a group of industry organizations from the worlds of publishing, music, cinema, and broadcasting have written to the EU Commissioner responsible for the Internal Market, Thierry Breton, expressing their "serious concerns regarding the European Commission's consultation on its proposed guidance on the application of Article 17 of the Directive on Copyright in the Digital Single Market ("the Directive")." The industry groups are worried that implementation of the EU Copyright Directive will provide them with too little protection (pdf):
How Not To Be A School District Superintendent: The Elmhurst, IL Edition
It should come as no surprise that school district superintendents are not universally amazing people. Like any population, there will be good ones and bad ones. That said, the COVID-19 pandemic has been particularly good at highlighting just how bad at the job, not to mention at public relations, some superintendents can be. The most useful example of this came from Georgia, where a school district suspended, then un-suspended, students for posting pictures of just how badly their schools were managing the return of students during the pandemic.

But a more recent example comes to us from -- checks notes -- huh, my hometown of Elmhurst, Illinois. Dave Moyer, the superintendent for the Elmhurst public schools, kicked up a local shit-storm a couple of weeks ago when he decided to have an exchange with a revered teacher in his district over the use of masks by teachers.
Because Too Many People Still Don't Know Why The EARN IT Bill Is Terrible, Here's A Video
The biggest problem with all the proposals to reform Section 230 is that way too many people don't understand *why* they are a terrible idea. And the EARN IT bill is one of the worst of the worst, because it does not just break Section 230 but also so much more, yet too many people remain oblivious to the issues.

Obviously there's more education to be done, and towards that end Stanford's Riana Pfefferkorn and I recently gave this presentation at the Crypto and Privacy Village at Defcon. The first part is a crash course in Section 230 and how it does the important work it does in protecting the online ecosystem. The second part is an articulation of all the reasons the EARN IT bill in particular is terrible and the specific damage it would do to encryption and civil liberties, along with ruining Section 230 and everything important that it advances.

We'll keep explaining in every way we can why Section 230 should be preserved and the EARN IT bill should be repudiated, but if you're the kind of person who prefers AV explanations, then this video is for you.

(Note: there's a glitch in the video at the beginning. Once it goes dark, skip ahead to about 3 minutes 20 seconds and it will continue.)
Ninth Circuit Appeals Court May Have Raised The Bar On Notifying Defendants About Secretive Surveillance Techniques
Recently -- perhaps far too recently -- the Ninth Circuit Appeals Court said the bulk phone records collection the NSA engaged in for years was most likely unconstitutional and definitely a violation of the laws authorizing it. The Appeals Court did not go so far as to declare it unconstitutional, finding that the records collected by the government had little bearing on the prosecution of a suspected terrorist. But it did declare it illegal.

Unfortunately, the ruling didn't have much of an effect. The NSA had already abandoned the program, finding it mostly useless and almost impossible to comply with under the restrictions laid down by the USA Freedom Act. Rather than continually violate the new law, the NSA chose to shut it down, ending the bulk collection of phone metadata… at least under this authority.

But there's something in the ruling that may have a much larger ripple effect. Orin Kerr noticed some language in the opinion that suggests the Ninth Circuit is establishing a new notification requirement for criminal prosecutions. For years, the government has all but ignored its duty to inform defendants of the use of FISA-derived evidence against them. The DOJ has considered FISA surveillance so secret and sensitive that defendants can't even be told about it. Defendants fight blind, going up against parallel construction and ex parte submissions that keep them in the dark about how the government obtained its evidence.

The language in the Ninth Circuit ruling changes that. It appears to suggest (but possibly not erect, unfortunately) an affirmative duty to inform defendants about surveillance techniques used by the government.
Banksy's Weakass Attempt To Abuse Trademark Law Flops, Following Bad Legal Advice
Nearly a year ago we wrote about the somewhat complex (and misunderstood by many) trademark dispute involving Banksy. There is a lot of background here, so I'm going to try to go with the abbreviated version. Banksy -- who has claimed that "copyright is for losers" -- has always refused to copyright his random graffiti-based art. However, as it now becomes clear, one reason he's avoided using copyright is that to register the work, he'd likely have to reveal his real name. Instead, it appears he's spent a few years abusing trademark law to try to trademark some of his artwork, including his famous "flower bomber" image, which was registered to a company called Pest Control Office Limited. Of course, to get a trademark, you have to use it in commerce, and many Banksy creations don't fit that criterion.

Either way, a small UK print operation called Full Colour Black had built a business selling postcards of various graffiti-based street art -- using photographs that they themselves took. Whether or not that violates copyright or maybe other moral rights is, perhaps, an interesting question. But it wasn't one that was approached here. Instead, Full Colour Black simply (and quite reasonably) sought to get Banksy's (sorry, Pest Control Office Limited's) trademark on the flower bomber image canceled because it was clearly an invalid trademark, and the work was not being used in commerce by Banksy.

You can argue that Full Colour Black profiting off of Banksy's work is unfair, but it's not trademark infringement. Banksy, somewhat bizarrely, ridiculously, and misleadingly, tried to frame the story as a big "greeting cards company" selling "fake Banksy merchandise," making it appear like Hallmark was ripping him off, rather than a tiny 3-person printing company that was trying to sell postcards of its own photographs of publicly placed graffiti.

From there, Banksy got even worse legal advice.
After realizing that his own lack of use in commerce was going to be an issue, Banksy created a "pop-up shop" in London, called (admittedly, cleverly) Gross Domestic Product. The pop-up shop itself was a Banksy kind of performance art in its own way. The store was loaded up, but was never planned to be opened. You could just look in the windows from the outside. Banksy did, however, set up a way to buy some products online.

As we noted in our original post, despite claims that the pop-up shop was the right path to take from "arts lawyer and founder of the Design and Artists Copyright Society" Mark Stephens, who claimed he was giving "legal advice" to Banksy, the whole setup seemed much more likely to undermine his trademark claims, as it only underlined exactly how bogus the trademark claims were in the first place.

And now the EU Intellectual Property Office has weighed in and... Banksy's trademark has been shredded like his Girl with Balloon painting. And, you know what? The EUIPO points out exactly what we argued in our original post:
Daily Deal: The Deep Learning And Data Analysis Certification Bundle
The Deep Learning and Data Analysis Certification Bundle has 8 courses designed to introduce you to data analysis, visualization, statistics, and deep learning. Courses cover Google Data Studio, R-based deep learning packages such as H2O, artificial neural networks, regression analysis, and more. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Denver Now Routing 911 Calls About Mental Health Issues Away From Cops, Towards Trained Health Professionals
Sending out armed law enforcement officers to handle mental health crises has often been a bad idea. Situations that require compassion, de-escalation, and nuance are far too often greeted with force, more force, and deadly force. Since there's always "excited delirium" to excuse the deaths caused by officers ill-equipped to deal with mental health issues, very little has changed. Until now.

Recently, there has been a nationwide uprising against police brutality and the senseless killing of unarmed citizens by law enforcement officers. Legislators are actively pursuing reform efforts and finally suggesting that some things cops just aren't trained to do well should be handled by others who can handle them better. Some police officials believe this is "defunding." But it isn't. It's just taking money being used badly and rerouting it to programs and personnel who are specifically trained to work with people suffering from mental health issues.

A lot of city lawmakers are talking about shifting resources away from the "guys with guns" approach that has seen a great many people in need of health intervention "assisted" to death by police officers. The city of Denver is actually doing something about it. Denver's Support Team Assistance Response (STAR) -- launched four days after George Floyd-related protests began in Denver -- sends out health professionals and paramedics to respond to 911 calls about people behaving erratically.
Yet Another Study Shows U.S. 5G Over Promises, Under Delivers
It was the technology that was supposed to change the world. According to carriers, not only was fifth-generation wireless (5G) supposed to bring about the "fourth industrial revolution," it was supposed to revolutionize everything from smart cities to cancer treatment. According to conspiracy theorists and internet imbeciles, 5G is responsible for everything from Covid-19 to your migraines.

Unfortunately for both sets of folks, data continues to indicate that 5G is nowhere near that interesting. A number of recent studies have already shown that U.S. wireless isn't just the most expensive in the developed world; U.S. 5G is notably slower than most overseas deployments. That's thanks in large part to our failure to make so-called middle band spectrum available for public use, resulting in a heavy smattering of lower band spectrum (good signal reach but slow speeds) or high-band and millimeter wave spectrum (great speeds, but poor reach and poor reception indoors). The end result is a far cry from what carriers had spent the last three years promising.

PC Magazine was the latest to put carrier promises to the test and came away decidedly unimpressed. Networks certainly are getting faster, the report concludes, but it's largely due to steady evolutionary improvements being made to 4G networks, not newer 5G networks. As such, PC Magazine is forced to admit it bought into early carrier hype promising an amazing revolution:
Fight For The Future Wants To Help You Tell The FCC Where To Shove The NTIA's Anti-Section 230 Petition
We recently filed comments in the still ongoing FCC comment period regarding the NTIA's petition to get the FCC to reinterpret Section 230 to match the President's bizarrely warped view of social media content moderation. I filed personal comments from my perspective running Techdirt, and we also filed more official comments as an organization. Both were filed during the initial comment period, but we're now in the middle of a second comment period -- officially for "responses" to the initial comments -- which are due by September 17th.

It really is not particularly difficult to file a comment with the FCC, though if you do, I recommend that you write out a letter and submit a PDF that clearly states the issue and your argument (rather than ranting incoherently, as many FCC commenters have been known to do).

However, if you want it to be even easier, the good folks over at Fight for the Future have announced that they've set up a new site, SaveOnlineSpeech.org, to make it even easier to file a comment.
Craft Brewing Trade Mag Argues Beer Is The Most IP Product Ever, Ignores History Of The Industry
And now, we shall talk about one of life's great pleasures: beer. This nectar of the gods has been something of a focus of mine, particularly given the explosion of the craft brewing industry and how that explosion has created an ever-increasing trademark apocalypse over the past decade. It is important context for the purposes of this post that you understand that the craft brewing industry, before it exploded but while it was steadily growing, had for years operated under a congenial and fraternal practice when it came to all things intellectual property. Everything from relaxed attitudes on trademarks, to an artistic bent when it came to beer labels, up to and including the willingness of industry rivals to regularly collaborate on specific concoctions: this was the basic theme of the industry up until the past decade or so. It was, frankly, one of the things that made craft beer so popular and fun.

With big business, however, came corporatized mentalities. Suddenly, once-small craft breweries doubled in size or more. Legal teams were hired and there was a rush to trademark all kinds of creative names. The label art, once the fun hallmark of the industry, became a wing of the marketing department. This is how, now in 2020, you get trade publications like Craft Brewing Business arguing that beer is one of the most all-encompassing products when it comes to intellectual property.

To be fair, given the current climate, you can see some of the logic in the following:
Minnesota Cops Are Dismantling Criminal Organizations At Less Than $1,000 A Pop
Law enforcement officials love to defend asset forfeiture. While sidestepping the fact that it almost always directly enriches the agency doing the forfeiting, these officials love to claim it's an invaluable tool that helps cops dismantle dangerous criminal organizations.

This is why they fight reporting requirements. No one knows you're just making poor people poorer unless you're required to report all of your forfeitures. Up in Minnesota -- like far too many other places around the country -- law enforcement officers roll Sheriff of Nottingham style. Unfortunately, there's no Robin Hood lurking in the forests patrolled by opportunistic officers.

Here's state auditor Julie Blaha offering her opinion about forfeitures in Minnesota after digging into the data the agencies provided:
PayPal Blocks Purchases Of Tardigrade Merchandise For Potentially Violating US Sanctions Laws
Moderation at scale is impossible. And yet, you'd still hope we'd get better moderation than this, despite all the problems inherent in policing millions of transactions.

Archie McPhee -- seller of all things weird and wonderful -- recently tried promoting its "tardigrade" line of goods only to find out PayPal users couldn't purchase them. "Tardigrade" is the official name for the microscopic creatures known colloquially as "water bears." Harmless enough, except PayPal blocked the transactions and sent this unhelpful response:
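Whatever PayPal's actual screen looks like, the classic way a benign word like "tardigrade" trips a sanctions filter is naive substring matching against a denied-terms list. The sketch below is purely illustrative: the term list and matching rule are assumptions for demonstration, not PayPal's real implementation.

```python
# Hypothetical sketch of sanctions screening via naive substring matching.
# The denied-terms list and the matching rule are illustrative assumptions,
# not PayPal's actual system.

DENIED_TERMS = ["cuba", "iran", "syria", "grad"]  # assumed screening terms

def screen_transaction(description):
    """Return True if a payment description trips the screen and gets blocked."""
    text = description.lower()
    return any(term in text for term in DENIED_TERMS)

# "tardigrade" contains the substring "grad", so a harmless product
# description gets blocked -- the false-positive cost of matching
# substrings instead of whole words.
print(screen_transaction("Tardigrade plush toy"))  # True (blocked)
print(screen_transaction("Water bear sticker"))    # False (allowed)
```

Matching on word boundaries, or routing hits to human review instead of auto-blocking, would avoid this particular failure, but at the scale of millions of transactions some rate of absurd false positives is essentially guaranteed.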
Would You Believe That Infamous Copyright Troll Richard Liebowitz Is In Trouble Again?
I think if I stopped writing about other stuff, I could still fill Techdirt with the same number of posts just covering the problems facing copyright trolling lawyer Richard Liebowitz. Today we have a story of Liebowitz being in trouble, yet again. This is in the Chevrestt v. Barstool Sports case. We mentioned this one back in May, when a judge sanctioned Liebowitz and benchslapped him pretty significantly for failing to follow "simple" orders from the court. The judge in that case noted that in the case last year where Liebowitz lied about the death of his grandfather, he had promised to attend some courses on how to better manage his legal practice. The judge asked for some details about whether or not he actually carried that out:
Daily Deal: The Complete 2020 Learn Linux Bundle
The Complete 2020 Learn Linux Bundle has 12 courses to help you learn Linux OS concepts and processes. You'll start with an introduction to Linux and progress to more advanced topics like shell scripting, data encryption, supporting virtual machines, and more. Other courses cover Red Hat Enterprise Linux 8 (RHEL 8), virtualizing Linux OS using Docker, AWS, and Azure, how to build and manage an enterprise Linux infrastructure, and much more. It's on sale for $69.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Oracle Doesn't Buy TikTok, But Gets A Lucrative Hosting Deal, And Trump & Friends Will Pretend This Means Something
The TikTok saga, which was insanely stupid to begin with, kicked into overdrive last month when President Trump issued a blatantly unconstitutional executive order that was designed to force ByteDance to sell TikTok to an American company. We had all sorts of questions about this, but effectively ByteDance had until this week to find a buyer. While Microsoft was rumored for a while, late last night Microsoft announced that its proposal had been rejected and the only competitor left standing was... wait for it... Oracle. This led many to conclude that Oracle was buying TikTok. That is not the case. But hold on, we'll get there.

There was one other serious bidder: Walmart. Last night the company claimed it was still interested in buying TikTok, but the White House rejected that plan, because it would have made it totally obvious that the "national security" pretense for demanding the sale was obvious bullshit. Nope, the White House said: it has to be sold to a "tech" company, so that the White House can stand by its claims, totally unsubstantiated by evidence, that TikTok's dancing teens represent a national security threat.

So, with Walmart blocked and Microsoft's deal not accepted, that left Oracle. But immediately the descriptions of Oracle's involvement were... weird. They very clearly did not say anything about "buying" TikTok. Instead, Oracle put out a very short press release saying that it will "serve as the trusted technology provider" to TikTok. That's not how you describe a sale.

This is a hosting deal. Oracle will just host TikTok on its wannabe, way-behind-the-competition cloud platform. And Trump and his cult-like supporters will pretend this actually accomplishes something. Oracle's executive suite has long been vocal Trump supporters, so this basically dumps a giant hosting contract into Oracle's lap. ByteDance will effectively still own TikTok, and Trump will pretend he's done something.
For what it's worth, this is the second big Oracle cloud deal done in the last few months, with the previous one being with videoconferencing company Zoom.

As Russell Brandom over at the Verge notes, this deal "accomplished nothing." ByteDance still owns TikTok (and, according to reports, retains full control over TikTok's algorithm). As former Yahoo and Facebook Chief Security Officer Alex Stamos points out, literally none of the concerns people have raised about TikTok (most of which were bogus in the first place) are solved by an Oracle hosting deal:
Over At Politico, The AT&T Monopoly Gives Tips On Fixing A Broadband Problem It Spent Thirty Years Creating
Every time legislation is looming that could threaten its broadband monopoly, AT&T attempts to get in front of it and steer the conversation away from subjects it doesn't want tackled by legislation. The biggest of those subjects is the lack of overall competition caused by sector monopolization, and the high prices, crappy customer service, and patchy availability that usually result. With COVID-19 leading folks to realize the importance of affordable broadband more than ever, it's becoming pretty clear that AT&T is worried somebody might just try to finally do something about it.

You'd be hard pressed to find a company more responsible for this country's broadband shortcomings than AT&T, whose lobbyists work tirelessly to scuttle absolutely any attempt whatsoever to disrupt the mono/duopoly status quo. Which is why it's ironic to see AT&T CEO John Stankey publish an op-ed at Politico professing to have the cure for America's longstanding digital divide. Not too surprisingly, AT&T's solution for the problem is greater subsidization of companies like AT&T, a company that has already received countless billions in subsidies for fiber networks it almost always only partially deploys.

Amusingly, most of Stankey's fixes are things AT&T has routinely lobbied against. Like here, where Stankey acknowledges that fixing the digital divide isn't something private industry can do alone:
Funniest/Most Insightful Comments Of The Week At Techdirt
We've got a double winner for first place this week, with one comment reaching the top of both the insightful and funny charts... as was its stated goal. It's justok responding to our post about students and parents gaming an AI grading system:
Get Your Otherwise Objectionable Gear Before The Senate Takes It Away!
Get your Otherwise Objectionable gear in the Techdirt store on Threadless »

On Monday we released our line of Otherwise Objectionable gear in our store on Threadless and, the very next day, GOP Senators unveiled their latest attempt at truly stupid Section 230 reform: a bill that would remove those two critical words from the law. Of course, those who understand how important Section 230's moderation protections are to the internet will fight to prevent this bill from passing, and then there's the fact that it's pretty obviously unconstitutional — but while the fight continues, there's never been a better time to declare your Otherwise Objectionable status with pride.

As usual, there's a wide variety of gear available in this and other designs — including t-shirts, hoodies, notebooks, buttons, phone cases, mugs, stickers, and of course the now-standard face masks. Check out all our designs and items in the Techdirt store on Threadless!
The Next Generation Of Video Game Consoles Could Be The Beginning Of GameStop's Death
Predictions about the death of video game retailer GameStop have been with us for at least a decade. There have been many reasons for such predictions, ranging from the emergence of digital downloaded games gobbling up market share to declines in retail stores generally. But there are two recent headwinds that might frankly be the end of this once ubiquitous franchise as we know it.

The first headwind is one common to all kinds of retailers currently: the COVID-19 pandemic. The pandemic is almost certainly worse for GameStop than for retailers in other industries. As noted above, sales for the industry have long been trending towards digital downloads. Yes, there are still those out there who insist on buying physical media games, and in many cases there are good reasons for doing so, but the truth is that market was shrinking steadily for a long, long time. With the pandemic both shuttering many retail stores and keeping scared consumers out of those that remain open, the digital market share in the gaming industry has grown quickly. Whether anyone will want to go back to buying physical copies of games, new or used, is an open question.

All of which might not ultimately matter, as the other headwind is the next generation of consoles being released with options for no built-in disc drive at all.
Content Moderation Case Study: Pinterest's Moderation Efforts Still Leave Potentially Illegal Content Where Users Can Find It (July 2020)
Summary: Researchers at OneZero have been following and monitoring Pinterest's content moderation efforts for several months. The "inspiration board" website hosts millions of images and other content uploaded by users.

Pinterest's moderation efforts are somewhat unusual. Very little content is actually removed, even when it might violate the site's guidelines. Instead, as OneZero researchers discovered, Pinterest has chosen to prevent the content from surfacing by blocking certain keywords from generating search results.

The problem, as OneZero noted, is that hiding content and blocking keywords doesn't actually prevent users from finding questionable content. Some of this content includes images that sexually exploit children. While normal users may never see this using Pinterest's built-in search tools, users more familiar with how search functions work can still access content Pinterest feels violates its guidelines but hasn't actually removed from its platform. By navigating to a user's page, logged-out users can perform searches that seem to bypass Pinterest's keyword-blocking. Using Google to search the site -- instead of the site's own search engine -- can also surface content hidden by Pinterest.

Pinterest's content moderation policy appears to be mostly hands-off. Users can upload nearly anything they want, with the company only deleting (and reporting) clearly illegal content.
For everything else that's questionable (or potentially harms other users), Pinterest opts for suppression rather than deletion.

“Generally speaking, we limit the distribution of or remove hateful content and content and accounts that promote hateful activities, false or misleading content that may harm Pinterest users or the public’s well-being, safety or trust, and content and accounts that encourage, praise, promote, or provide aid to dangerous actors or groups and their activities,” Pinterest’s spokesperson said of the company’s guidelines.

Unfortunately, users who manage to bypass keyword filters or otherwise stumble across buried content will likely find themselves directed to other buried content. Pinterest's algorithms surface content related to whatever users are currently viewing, potentially leading users even deeper into the site's "hidden" content.

Decisions to be made by Pinterest:
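The suppress-don't-delete pattern OneZero describes, and why it leaks, can be modeled in a few lines. Everything below is hypothetical and simplified for illustration; it is a minimal sketch of keyword-based suppression, not Pinterest's actual code or data model.

```python
# Minimal model of "suppress, don't delete" moderation and why it leaks.
# All names and data are hypothetical; this is not Pinterest's implementation.

BLOCKED_KEYWORDS = {"blockedterm"}  # assumed keyword blocklist

PINS = [
    {"id": 1, "user": "alice", "text": "cute cat pin"},
    {"id": 2, "user": "bob", "text": "blockedterm pin"},  # suppressed, never deleted
]

def site_search(query):
    """Built-in search: refuses to surface results for blocked keywords."""
    if any(word in BLOCKED_KEYWORDS for word in query.lower().split()):
        return []  # keyword is blocked, so the search shows nothing
    return [p for p in PINS if query.lower() in p["text"]]

def user_page(user):
    """Profile listing: no keyword check, so suppressed pins stay reachable."""
    return [p for p in PINS if p["user"] == user]

print(site_search("blockedterm"))  # [] -- hidden from the built-in search
print(user_page("bob"))            # the "suppressed" pin still appears
```

Because the content itself is never removed, any access path that skips the keyword check surfaces it: a profile page as above, or an external crawler like Google that indexes the still-live pages directly.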
The First Hard Case: Zeran V. AOL And What It Can Teach Us About Today's Hard Cases
A version of this post appeared in The Recorder a few years ago as part of a series of articles looking back at the foundational Section 230 case Zeran v. America Online. Since it is now, to my unwelcome surprise, behind a paywall, but still as relevant as ever, I'm re-posting it here.

They say that bad facts make bad law. What makes Zeran v. America Online stand as a seminal case in Section 230 jurisprudence is that its bad facts didn't. The Fourth Circuit wisely refused to be driven from its principled statutory conclusion, even in the face of a compelling reason to do otherwise, and thus the greater good was served.

Mr. Zeran's was not the last hard case to pass through the courts. Over the years there have been many worthy victims who have sought redress for legally cognizable injuries caused by others' use of online services. And many, like Mr. Zeran, have been unlikely to easily obtain it from the party who actually did them the harm. In these cases courts have been left with an apparently stark choice: compel the Internet service provider to compensate for the harm caused to the plaintiff by others' use of their services, or leave the plaintiff with potentially no remedy at all. It can be tremendously tempting to want to make someone, anyone, pay for the harm caused to the person before them. But Zeran provided early guidance that it was possible for courts to resist the temptation to ignore Section 230's liability limitations – and early evidence that they were right to so resist.

Section 230 is a law that itself counsels a light touch. In order to get the most good content on the Internet and the least bad, Congress codified a policy that is essentially all carrot and no stick. By taking the proverbial gun away from an online service provider's proverbial head, Congress created the incentive for service providers to be partners in achieving these dual policy goals.
It did so in two complementary ways: First, it encouraged the most beneficial content by insulating providers from liability arising from how other people used their services. Second, Congress also sought to ensure there would be the least amount of bad content online by insulating providers from liability if they did indeed act to remove it.

By removing the threat of potentially ruinous liability, or even just the immense cost arising from being on the receiving end of legal threats based on how others have used their services, more and more service providers have been able to come into existence and enable more and more uses of their systems. It's let these providers resist unduly censoring legitimate uses of their systems in order to minimize their legal risk. And by being safe to choose what uses to allow or disallow from their systems, service providers have been free to allocate their resources more effectively to police the most undesirable uses of their systems and services than they would be able to if the threat of liability instead forced them to divert their resources in ways that might not be appropriate for their platforms, optimal, or even useful at all.

Congress could of course have addressed the developing Internet with an alternative policy, one that was more stick than carrot and that threatened penalties instead of offering liability limitations, but such a law would not have met its twin goals of encouraging the most good content and the least bad nearly as well as Section 230 actually has. In fact, it likely would have had the opposite effect, eliminating more good content from the Internet and leaving up more of the bad. The wisdom of Congress, and of the Zeran court, was in realizing that restraint was a better option.

The challenge we are faced with now is keeping courts, and Section 230’s critics, similarly aware.
The problem is that the Section 230 policy balance is one that works well in general, but not always in ways people readily recognize, especially in specific cases with particularly bad facts. The reality is that people sometimes do use Internet services in bad ways, and these uses can often be extremely visible. What tends to be less obvious, however, is how many good uses of the Internet Section 230 has enabled to be developed, far eclipsing the unfortunate ones. In the 20-plus years since Zeran people have moved on from AOL to countless new Internet services, which now serve nearly 90 percent of all Americans and billions of users worldwide. Internet access has gone from slow modem-driven dial-up to seamless always-on broadband. We email, we tweet, we buy things, we date, we comment, we argue, we read, we research, we share what we know, all thanks to the services made possible by Section 230, but often without awareness of how much we owe to it and the early Zeran decision upholding its tenets. We even complain about Section 230 using services that Section 230 has enabled, and often without any recognition of the irony.

In a sense, Section 230 is potentially in jeopardy of becoming a victim of its own success. It’s easy to see when things go wrong online, but Section 230 has done so well creating a new normalcy that it’s much harder to see just how much it has allowed to go right. Which means that when things do go wrong – as they inevitably will, because, while Section 230 tries to minimize the bad uses of online services, it’s impossible to eliminate them all—we are always at risk of letting our outrage at the specific injustice cause us to be tempted to kill the golden goose by upending something that on the whole has enabled so much good.

When bad things happen there is a natural urge to do something, to clamp down, to try to seize control over a situation where it feels like there is none.
When bad things happen the hands-off approach of Section 230 can seem like the wrong one, but Zeran has shown how it is still very much the right one.

In many ways the Zeran court was ahead of its time: unlike later courts that have been able to point to the success of the Internet to underpin their decisions upholding Section 230, the Zeran court had to take a leap of faith that the policy goals behind the statute would be borne out as Congress intended. It turned out to be a faith that was not misplaced. Today it is hard to imagine a world without all the benefit that Section 230 has ushered in. But if we fail to heed the lessons of Zeran and exercise the same restraint the court did then, such a world may well be what comes to pass. As we mark more than two decades since the Zeran court affirmed Section 230 we need to continue to carry its lessons forward in order to ensure that we are not also marking its sunset and closing the door on all the other good Section 230 might yet bring.
Apparently The New Litmus Test For Trump's FCC: Do You Promise To Police Speech Online
Last month we wrote about how President Trump withdrew the renomination of FCC Commissioner Mike O'Rielly just days after O'Rielly dared to [checks notes] reiterate his support for the 1st Amendment in a way that hinted at the fact that he knew Trump's executive order was blatantly unconstitutional. Some people argued the renomination was pulled for other reasons, but lots of people in DC said it was 100% about his unwillingness to turn the FCC into a speech police for the internet.

While it seems quite unlikely that Trump can get someone new through the nomination process before the election, apparently they're thinking of nominating someone who appears eager to do the exact opposite: Nathan Simington, who wants the FCC to be the internet speech police so bad that he helped draft the obviously unconstitutional executive order in response to the President's freak-out at being fact checked.
Florida Sheriff's Predictive Policing Program Is Protecting Residents From Unkempt Lawns, Missing Mailbox Numbers
Defenders of "predictive policing" claim it's a way to work smarter, not harder. Just round up a bunch of data submitted by cops engaged in biased policing and allow the algorithm to work its magic. The end result isn't smarter policing. It's just more of the same policing we've seen for years that disproportionately targets minorities and those in lower income brackets.

Supposedly, this will allow officers to prevent more criminal activity. The dirty data sends cops into neighborhoods to target everyone who lives there, just because they have the misfortune of living in an area where crime is prevalent. If the software was any "smarter," it would just send cops to prisons where criminal activity is the highest.

The Pasco County Sheriff's Department thinks it's going to drive crime down by engaging in predictive policing. But no one's crippling massive criminal organizations or liberating oppressed communities from the criminal activity that plagues their everyday lives. Instead of smart policing that maximizes limited resources, Pasco County residents are getting this instead:
Daily Deal: The Complete Microsoft Azure Course Bundle
The Complete Microsoft Azure Course Bundle has 15+ hours of video content and 6 eBooks on Azure Cloud solutions, integration, and networks. You'll learn how to monitor and troubleshoot Azure network resources, manage virtual machines with PowerShell, use computer vision to detect objects and text in images, and much more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
White House Insisted It Had 16,000 Complaints Of Social Media Bias Turned Over To The FTC; The FTC Has No Record Of Them
One less noticed feature of the White House's anti-Section 230 executive order was the claim that the White House had over 16,000 complaints about social media bias that it would turn over to the FTC to help it... do something to those big mean social media companies:
Auto Industry Pushes Bullshit Claim That 'Right To Repair' Laws Aid Sexual Predators
A few years back, frustration at John Deere's draconian tractor DRM culminated in a grassroots tech movement dubbed "right to repair." The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM (and the company's EULA) prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair, or toying around with pirated firmware just to ensure the products they owned actually worked.

Of course the problem isn't just restricted to John Deere. Apple, Microsoft, Sony, and countless other tech giants eager to monopolize repair have made a habit of suing and bullying independent repair shops and demonizing consumers who simply want to reduce waste and repair devices they own. This, in turn, has resulted in a growing push for right to repair legislation in countless states.

To thwart these bills, companies have been ramping up the use of idiotic, fear mongering arguments. Usually these arguments involve false claims that these bills will somehow imperil consumer privacy, safety, and security. Apple, for example, tried to thwart one such bill in Nebraska by claiming it would turn the state into a "mecca for hackers."

While there's been no shortage of bad faith arguments like this, the auto industry in Massachusetts has taken things to the next level. The state is contemplating the expansion of an existing state law that lets users get their vehicles repaired anywhere they'd like.
In a bid to kill these efforts, the Alliance for Automotive Innovation, which represents most major automakers, has taken to running ads in the state falsely claiming that the legislation would aid sexual predators:

The primary message of the ads is that if we allow people to more easily repair their vehicles, data from said vehicles will somehow find itself in the hands of rapists, stalkers, and other menaces. Granted, actual experts have made it abundantly clear that this is utterly unfounded. The existing law requires that automakers use a non-proprietary diagnostic interface so any repair shop can access vehicle data using an ordinary OBD reader. It also makes sure that important repair information is openly accessible. The update to said law simply attempts to close a few loopholes in the existing law:
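To see why "an ordinary OBD reader" works, it helps to know that the diagnostic data itself is standardized. As a minimal sketch (not a full adapter driver, and the function name is ours), here's how the standard OBD-II Mode 01 response for engine RPM decodes; per the SAE J1979 convention, PID 0x0C encodes RPM as (256 × A + B) / 4 from the two data bytes:

```python
# Sketch only: decode a standard OBD-II Mode 01, PID 0x0C (engine RPM)
# response string, e.g. "41 0C 1A F8". "41" marks a Mode 01 reply and
# "0C" echoes the PID; the remaining bytes carry the value.

def parse_rpm(response: str) -> float:
    parts = response.split()
    if parts[:2] != ["41", "0C"]:
        raise ValueError(f"not a Mode 01 PID 0C response: {response!r}")
    a, b = int(parts[2], 16), int(parts[3], 16)
    # Standard formula: RPM = (256*A + B) / 4
    return (256 * a + b) / 4

print(parse_rpm("41 0C 1A F8"))  # 1726.0
```

The point is that because the format is non-proprietary, any generic reader speaking the standardized protocol can decode it; no manufacturer-specific tooling is involved, which is exactly what the existing Massachusetts law requires.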
Cops And Paramedics Are Still Killing Arrestees By Shooting Them Up With Ketamine
Cops -- and the paramedics who listen to their "medical advice" -- are still killing people. A couple of years ago, an investigation by the Minneapolis PD's Office of Police Conduct Review found officers were telling EMS personnel to inject arrestees with ketamine to calm them down. This medical advice followed street-level diagnoses by untrained mental health unprofessionals who've decided the perfect cure for "excited delirium" is a drug with deadly side effects.

People have been "calmed" to death by ketamine injections -- ones pushed by police officers and carried out by complicit paramedics. The cases reviewed by the OPC included potentially dangerous criminals like jaywalkers and disrespecters of law enforcement ("obstruction of justice"). Multiple recordings showed arrestees shot up with ketamine shortly before their hearts stopped or they ceased breathing.

This incredibly dangerous practice of using ketamine to sedate arrestees hasn't slowed down. Instead, it has spread. What was a horrific discovery in Minneapolis is still day-to-day business elsewhere in the country. Cops and paramedics in Colorado are still putting peoples' lives at risk by using ketamine as their go-to sedative.
AB InBev And Patagonia Trademark Dispute Will Proceed To Trial
A little over a year ago, we discussed a lawsuit brought by Patagonia, famed West Coast clothier for all things outdoor lifestyle, against AB/InBev, famed macro-brewer. At issue was AB/InBev's decision to sell a Patagonia-branded beer line at pop up stores at ski resorts, the exact place where Patagonia clothing is quite popular. Within those stores, AB/InBev also sold Patagonia-branded clothing. Coupled with the beer maker's decision to do absolutely nothing with its "Patagonia" trademark for six years, you can see why Patagonia sought to invalidate AB/InBev's trademark. It's also understandable that the court ruled against AB/InBev's attempt to have the suit tossed last summer, with the absurd claim that the Patagonia brand for clothing isn't actually well-known at all. In the meantime, Patagonia asserted in filings that AB/InBev actually defrauded the USPTO when it got its trademark in the first place.

Which brings us to the present, where the beer maker attempted to get at least some of the claims against it dismissed, arguing that the claims about defrauding the USPTO were simple clerical errors and that Patagonia had failed to protect its mark for too long. The court ruled in favor of Patagonia, meaning this will now go to trial. We'll start with the claims of Patagonia failing to protect its mark, which center around AB/InBev's trademark registration indicating the company had been using "Patagonia" continually for five years.
Content Moderation Case Study: Detecting Sarcasm Is Not Easy (2018)
Summary: Content moderation becomes even more difficult when you realize that there may be additional meaning to words or phrases beyond their most literal translation. One very clear example of that is the use of sarcasm, in which a word or phrase is used either in the opposite of its literal translation or as a greatly exaggerated way to express humor.

In March of 2018, facing increasing criticism regarding certain content that was appearing on Twitter, the company did a mass purge of accounts, including many popular accounts that were accused of simply copying and retweeting jokes and memes that others had created. Part of the accusation against those that were shut down was that there was a network of accounts (referred to as “Tweetdeckers” for their use of the Twitter application Tweetdeck) who would agree to mass retweet some of those jokes and memes. Twitter suggested that these retweet brigades were inauthentic and thus banned from the platform.

In the midst of all of these suspensions, however, there was another set of accounts and content suspended, allegedly for talking about “self-harm.” Twitter has policies regarding glorifying self-harm which it had just updated a few weeks before this new round of bans.
FCC Formally Kills Rules That Would Have Brought Competition To The Cable Box
In early 2016, the cable industry quietly launched one of the most misleading and successful lobbying efforts in the industry's history. The target? A plan concocted by the former FCC that would have let customers watch cable TV lineups on third-party hardware. Given the industry makes $21 billion annually in rental fees thanks to its cable box hardware monopoly, the industry got right to work with an absolute wave of disinformation, claiming that the FCC's plan would put consumer data at risk, result in a "piracy apocalypse," and was somehow even racist (it wasn't).

At one point, the industry even managed to grab the help of the US Copyright Office, which falsely claimed that more cable box competition would somehow violate copyright. Of course the plan had nothing to do with copyright, and everything to do with control, exemplifying once again that for the US Copyright Office, public welfare can often be a distant afterthought.

Once in office, the Pai FCC dutifully got to work dismantling the Wheeler-era FCC proposal, coordinated with and justified by cable providers which promised their own "free market alternatives" would make the proposal irrelevant. More specifically, they promised that you'd be able to order Comcast or Spectrum's cable lineup through an app, making cable boxes irrelevant. But this promised alternative never showed up:
Addison Cain Really Doesn't Want You Watching This Video About Her Attempts To Silence Another Wolf Kink Erotica Author
Way back in the previous century of May, we wrote about a truly bizarre (but, not actually uncommon) story of someone abusing the DMCA to get a competitor's book disappeared. There was a lot of background, but the short version is that an author, who goes by the name Addison Cain (a pen name) and wrote a wolf-kink erotica book in the so-called "Omegaverse" realm (which is, apparently, a genre of writing involving wolf erotica and some tropes about the space), used the DMCA to get a competitor's book using similar tropes taken down. As we noted in our original article, both parties involved did some bad stuff. Cain was clearly abusing the DMCA to take down non-infringing works, while the person she was seeking to silence, going by the name Zoey Ellis (also a pen name) not only filed a (perfectly fine) DMCA 512(f) lawsuit in response, but also a highly questionable defamation lawsuit.

There was a lot of back and forth in that story, but eventually the publisher, Blushing Books, agreed that there was no infringement and worked out some sort of settlement. Cain herself had been dismissed from the case on jurisdiction grounds. Anyway, last week, YouTuber Lindsay Ellis (no relation to Zoey Ellis, which, again, was a pen name) created a truly amazing one hour video analysis of the Cain/Ellis legal dispute that does a very good job covering many of the gory details of the dispute and how it eventually fizzled out. I know it's an hour of your time that is partly about wolf-kink erotica and partly about copyright law, but I still highly recommend it (though, maybe watch it at 2x speed):

I mostly agree with the legal analysis, though I think she puts too much weight on the idea that a settlement has any precedential value (it doesn't).

It appears that someone who does not want you to watch the video is Addison Cain / Rochelle Soto. Because soon after Ellis posted the video, she posted part of a legal threat from a lawyer claiming to represent Cain.
Could A Narrow Reform Of Section 230 Enable Platform Interoperability?
Perhaps the most de rigueur issue in tech policy in 2020 is antitrust. The European Union made market power a significant component of its Digital Services Act consultation, and the United Kingdom released a massive final report detailing competition challenges in digital advertising, search, and social media. In the U.S., the House of Representatives held an historic (virtual) hearing with the CEOs of Amazon, Apple, Facebook, and Google (Alphabet) on the same panel. As soon as the end of this month the Department of Justice is expected to file a “case of the century” scale antitrust lawsuit against Google. One competition policy issue that I’ve written about extensively is interoperability, and, while we’ve already seen significant proposals to promote interoperability, notably the 2019 ACCESS Act, I want to throw another idea into the hopper: I think Congress should consider amending Section 230 of the Communications Act to condition its immunity for large online intermediaries on the provision of an open, raw feed for independent downstream presentation.

I know, I know. I can almost feel your fingers hovering over that big blue “Tweet” button or the “Leave a Comment” link -- but please, hear me out first.

For those not already aware of (if not completely sick of) the active discussions around it, Section 230, originally passed as part of the Communications Decency Act, is an immunity provision within U.S. law intended to encourage internet services to engage in beneficial content moderation without fearing liability as a consequence of such action. It’s famously only 26 words long in its central part, so I’ll paste that key text in full: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

I’ll attempt to summarize the political context.
Section 230 has come under intense, bipartisan criticism over the past couple of years as a locus of animosity related to a diverse range of concerns with the practices of a few large tech companies, in particular. Some argue that the choices made by platform operators are biased against conservatives; others argue that the platforms aren’t responsible enough and aren’t held sufficiently accountable. The support for amending Section 230 is substantial, although it is far from universal. The current President has issued an executive order seeking to catalyze change in the law; and the Democratic nominee has in the past bluntly called for it to be revoked. Members of Congress have introduced several bills that touch Section 230 (after the passage of one such bill, FOSTA-SESTA, in 2018), such as the EARN IT Act which would push internet companies to do more to respond to online child exploitation, to the point of undermining secure encryption. A perhaps more on-point proposal is the PACT ACT, which focuses on specific platform content practices; I’ve called it the best starting point for Section 230 reform discussions.

Why is this one, short section of law so frequently used as a political punching bag? The attention goes beyond its hard law significance, revealing a deeper resonance in the modern-day notion of “publishing”. I believe this law in particular is amplified because the centralization and siloing of our internet experience has produced a widespread feeling (or reality) of a lack of meaningful user agency. By definition, social media is a business of taking human input (user generated content) and packaging it to produce output for humans, doubling the poignancy of human agency in some sense. The user agency gap spills over from the realm of competition, making it hard to evaluate content liability and privacy harms as entirely independent issues.
In so many ways, the internet ecosystem is built on the idea of consumer mobility and freedom; also in so very many ways, that idea is bankrupt today.

Yet debating whether online intermediaries for user content are “platforms” or “publishers” is a distraction. A more meaningful articulation of the underlying problem, I believe, is to say that we end users are unable to customize sufficiently the way in which the content is presented to us because we are locked into a single experience.

Services like Facebook and YouTube operate powerful recommendation engines that are designed to sift through vast amounts of potentially-desirable content and present the user with what they most value. This content is based on individual contextual factors such as what the user has been watching, and the broader signals of desirability such as engagement level from other users. As many critics allege, the underlying business model of these companies benefits by keeping users as engaged as possible, spending as much time on the platform as possible. That means recommending content that gets high engagement, even though human behavior doesn’t equate positive social value with high engagement (that’s the understatement of the day, there!).

One of the interesting technical questions is how to design such systems to make them “better” from a social perspective. It’s the subject of academic research, in addition to ample industry investment. I’ve given YouTube credit in the past for offering some amount of transparency into changes it’s making (and the effects of those changes) to improve the social value of its recommendations, although I believe making that transparency more collaborative and systematic would help immensely. (I plan to expand on that in my next post!).

Recommendation engines remain by and large black boxes to the outside world, including the users who receive their output.
No matter how much credit you give individual companies for their efforts to balance properly their business model demands, optimal user experience, and improving social value, there are fundamental limits on users’ ability to customize, or replace, the recommendation algorithm that mediates the lion’s share of their interaction with the social network and the user-generated content that it hosts. We also can’t facilitate innovation or experimentation with presentation algorithms as things stand due to the lack of effective interoperability.

And that’s why Section 230 gets so much attention -- because we don’t have the freedom to experiment at scale with things like Ethan Zuckerman’s Gobo.social project and thus improve the quality of, and better control, our social media experiences. Yes, there are filters and settings that users can change to customize their experience to some degree, likely far more than most people know. Yet, by design, these settings do not provide enough control to affect the core functioning of the recommendation engine itself.

Thus, many users perceive the platforms to be packaging up third party, user generated content and making conscious choices of how to present it to us -- choices that our limited downstream controls are insufficient to manage. That’s why it feels to some like they’re “publishing,” and doing a bad job of it at that. Despite massive investments by the service operators, it’s not hard to find evidence of poor outcomes of recommendations; see, e.g., YouTube recommending videos about upcoming civil war. And there are also occasional news stories of willful actions making things worse to add more fuel to the fire.

So let’s create that space for empowerment by conditioning the Section 230 immunity on the provision of more raw, open access to their content experience so users can better control how to “publish” it to themselves by using an alternative recommendation engine.
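The core idea above can be sketched in a few lines. In this toy model (all field names and functions are hypothetical, not any platform's actual API), the platform exposes a raw, unranked feed, and the user's client chooses which ranking function to apply, rather than being locked into the platform's engagement-optimized default:

```python
# Toy sketch of an "open raw feed" with a swappable downstream ranker.
# All names here are illustrative assumptions, not a real platform API.

RAW_FEED = [  # what an open, unranked feed endpoint might return
    {"id": 1, "author": "alice", "topic": "news", "engagement": 950},
    {"id": 2, "author": "bob", "topic": "gardening", "engagement": 12},
    {"id": 3, "author": "carol", "topic": "news", "engagement": 480},
]

def platform_rank(feed):
    # The default: maximize engagement (and thus time-on-site).
    return sorted(feed, key=lambda p: p["engagement"], reverse=True)

def chronological_rank(feed):
    # A user-chosen alternative: newest first (here, descending id).
    return sorted(feed, key=lambda p: p["id"], reverse=True)

def render(feed, ranker=platform_rank):
    # Downstream presentation: whichever ranker the user plugged in.
    return [p["id"] for p in ranker(feed)]

print(render(RAW_FEED))                      # [1, 3, 2] -- engagement order
print(render(RAW_FEED, chronological_rank))  # [3, 2, 1] -- user's choice
```

The point of the sketch is that once the raw feed is open, the ranking step becomes a competitive, user-controlled layer -- projects like Gobo.social could supply the `ranker`, and the platform's own engine becomes just one option among many.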
Here’s how to scale and design such an openness requirement properly:
Actual Facts Undercut Media's Narrative That Law Enforcement Task Force Broke Up A Multi-State Sex Trafficking Operation
If sex trafficking were actual traffic, people would rarely complain about congestion. It's not that it doesn't happen. It's that it doesn't happen with the frequency claimed by government officials in order to do things like dismantle Section 230 immunity or pursue baseless prosecutions against online ad services.

But it always sounds like an omnipresent threat thanks to far too many news organizations who are apparently unwilling to challenge claims made by officials, much less dig into the details of trafficking stings. Almost without exception, big human/sex trafficking busts end with little to show for them but some standard solicitation arrests and a handful of jailed sex workers of legal age who haven't been "trafficked."

There's a lot of blame to spread around for this turning from small-scale misguided hysteria into the focal point of legislation that harms the immunity granted to website and platform owners. But we can start with media, which hasn't met a sex trafficking story it isn't willing to hype, even when the facts don't jibe with the headlines. Michael Hobbes punches holes in the latest sex trafficking horror story covered nationwide -- one that contains very little horror and almost no sex trafficking.

This is how it landed on people's virtual doorsteps following the government's press release:
Esports Milestone: Guild Esports Looks For London Stock Exchange Listing
For years now, we've covered various milestones the esports industry has hit as it has exploded in popularity. Once relegated primarily to a few overseas markets, the past decade has seen an acceleration of the industry hitting the mainstream, from features in sports media on participants, college scholarships for esports, IRL leagues getting in the game, and even the betting markets opening up to esports gambling. While this trend began long before the world's current predicament, it's also true that the COVID-19 pandemic, which shuttered live sports for months, acted as a supercharger for all of this.

All of which contributed to the latest milestone the esports industry has managed to hit, as famed footballer David Beckham's Guild Esports franchise has announced it plans to get listed on the London Stock Exchange.
French Government To Make Insulting Mayors A Criminal Offense
French government entities continue to clamp down on speech. Following a terrorist attack on a French satirical newspaper, government leaders vowed to double down on protecting controversial speech. The government then fast-tracked several prosecutions under its anti-terrorism laws, which included arresting a comedian for posting some anti-semitic content. It further celebrated its embrace of free speech by arresting a man for mocking the death of three police officers.

A half-decade later, that same commitment to protecting speech no one might object to continues. The country's government passed a terrible hate speech law that would have allowed law enforcement to decide what content was acceptable (and what was arrestable.) Fortunately for its citizens, the country's Constitutional Court decided the law was unlawful and struck down most of it roughly a month later.

But that's not the end of bad speech laws in France. Government officials seem to have an unlimited amount of bad ideas. Some government officials are being hit with far more than objectionable words. Assaults of French mayors continue to occur at the rate of about once a day. Mayors assaulted and unassaulted have asked the French government to do more to protect them from these literal attacks.

The government has responded. And it's not going to make mayors any more popular or make them less likely to be physically attacked.
If We're So Worried About TikTok, Why Aren't We Just As Worried About AdTech And Location Data Sales?
We've noted a few times how the TikTok ban is largely performative, xenophobic nonsense that operates in a bizarre, facts-optional vacuum.

The biggest pearl clutchers when it comes to the teen dancing app (Josh Hawley, Tom Cotton, etc.) have been utterly absent from (or downright detrimental to) countless other security and privacy reform efforts. Many have opposed even the most basic of privacy rules. They've opposed shoring up funding for election security reform. Most are utterly absent when we talk about things like our dodgy satellite network security, the SS7 cellular network flaw exposing wireless communications, or the total lack of any meaningful privacy and security standards for the internet of broken things.

As in, most of the "experts" and politicians who think banning TikTok is a good idea don't seem to realize it's not going to genuinely accomplish much in full context. Chinese intelligence can still glean this (and much more data) from a wide variety of sources thanks to our wholesale privacy and security failures on countless other fronts. It's kind of like banning sugary soda to put out a forest fire, or spitting at a thunderstorm to slow its advance over the horizon.

Yet the latest case in point: Joseph Cox at Motherboard (who has been an absolute wrecking ball on this beat) discovered that private intel firms have been able to easily buy user location data gleaned from phone apps, allowing the tracking of users in immensely granular fashion:
If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great.
What should platforms like Facebook or YouTube do when users post speech that is technically legal, but widely abhorred? In the U.S. that has included things like the horrific video of the 2019 massacre in Christchurch. What about harder calls – like posts that some people see as anti-immigrant hate speech, and others see as important political discourse?

Some of the biggest questions about potential new platform regulation today involve content of this sort: material that does not violate the law, but potentially does violate platforms’ private Terms of Service (TOS). This speech may be protected from government interference under the First Amendment or other human rights instruments around the world. But private platforms generally have discretion to take it down.

The one-size-fits-all TOS rules that Facebook and others apply to speech are clumsy and unpopular, with critics on all sides. Some advocates believe that platforms should take down less content, others that they should take down more. Both groups have turned to courts and legislatures in recent years, seeking to tie platforms’ hands with either “must-remove” laws (requiring platforms to remove, demote, or otherwise disfavor currently lawful speech) or “must-carry” laws (preventing platforms from removing or disfavoring lawful speech).

This post lays out what laws like that might actually look like, and what issues they would raise. It is adapted from my “Who Do You Sue” article, which focuses on must-carry arguments.

Must-carry claims have consistently been rejected in U.S. courts. The Ninth Circuit’s Prager ruling, for example, said that a conservative speaker couldn’t compel YouTube to host or monetize his videos. But must-carry claims have been upheld in Poland, Italy, Germany, and Brazil.
Must-remove claims, which would require platforms to remove or disfavor currently legal speech on the theory that such content is uniquely harmful in the online environment, have had their most prominent airing in debates about the UK’s Online Harms White Paper.

The idea that major, ubiquitous platforms that serve as channels for third-party speech might face both must-carry and must-remove obligations is not new. We have long had such rules for older communications channels, including telephone, radio, television, and cable. Those rules were always controversial, though, and in the U.S. were heavily litigated.

On the must-remove side, the FCC and other regulators have prohibited content in broadcast that would be constitutionally protected speech in a private home or the public square. On the must-carry side, the Supreme Court has approved some carriage obligations, including for broadcasters and cable TV owners.

Those older communications channels were very different from today’s internet platforms. In particular, factors like broadcast “spectrum scarcity” or cable “bottleneck” power, which justified older regulations, do not have direct analogs in the internet context. But the Communications law debates remain highly relevant because, like today’s arguments about platform regulation, they focus on the nexus of speech questions and competition questions that arise when private entities own major forums for speech. As we think through possible changes in platform regulation, we can learn a lot from this history.

In this post, I will summarize some possible regulatory regimes for platforms’ management of lawful but disfavored user content, like the material often restricted now under Terms of Service. I will also point out connections to Communications precedent.

To be clear, many of the possible regimes strike me as both unconstitutional (in the U.S.) and unwise. But spelling out the options so we can kick the tires on them is important.
And in some ways, I find the overall discussion in this post encouraging. It suggests to me that we are at the very beginning of thinking through possible legal approaches. Many models discussed today are bad ones. But many other models remain almost completely unexplored. There is vast and underexamined territory at the intersection of speech laws and competition laws, in particular. Precedent from older Communications law can help us think that through. This post only begins to mine that rich vein of legal and policy ore.

In the first section of this post, I will discuss five possible approaches that would change the rules platforms apply to their users’ legal speech. In the second (and to me, more interesting) section I will talk about proposals that would instead change the rulemakers – taking decisions out of platforms’ hands, and putting them somewhere else. These ideas are often animated by thinking grounded in competition policy.

Changing the Rules
FISA Court Decides FBI, NSA Surveillance Abuses Should Be Rewarded With Fewer Restrictions On Searching 702 Collections
A heavily-redacted opinion has been released by the FISA Court. Even with the redactions, it's clear the NSA and FBI have continued to abuse their Section 702 privileges. But rather than reject the government's arguments or lay down more restrictions on the use of these collections, the court has decided to amend the rules to make some of these abuses no longer abuses, but rather the new normal. This means there are now fewer protections shielding Americans from being swept up in NSA collections or targeted using this data by the FBI.

Elizabeth Goitein of the Brennan Center has a good rundown of the abuses and the court's response. She points out in her Twitter thread that some of this can be traced back to the reforms enacted by the USA Freedom Act, which codified some restrictions but didn't go far enough to prevent future abuses or mandate better reporting of rule-breaking by these agencies.

The opinion [PDF] notes the NSA found it too difficult to comply with a Section 702 requirement that at least one end of targeted communications involve someone outside of the United States. When faced with following this requirement and possibly losing access to communications it wanted, it simply chose to ignore the requirement.
Daily Deal: The Pro Photography And Photoshop 20 Course Bundle
Grab your camera, capture amazing photos, and learn to process them in Photoshop, Lightroom, and GIMP with the Pro Photography and Photoshop 20 Course Bundle. You'll learn how to take your camera skills to the next level, and then how to process those photos to truly capture the look you were envisioning. From layers and filters to levels and curves, you'll come to grips with essential photo editing concepts, and refine your skills over several courses. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
GOP Senators Release Latest Truly Stupid Section 230 Reform Bill; Would Remove 'Otherwise Objectionable'; Enable Spamming
Honestly, you'd think that the Senate might have a few more important things to be working on right now than introducing what has to be the... what... 8th bill to try to rewrite Section 230 of the Communications Decency Act this year? Either way, three Senators on the Commerce Committee have released yet another truly ridiculous attempt at reforming Section 230. Senators Roger Wicker, Lindsey Graham, and Marsha Blackburn are the three clueless Senators behind the ridiculously named "Online Freedom and Viewpoint Diversity Act."

Before we dig deeper, I should remind you that Marsha Blackburn hates net neutrality with the passion of a thousand suns. Hell, she even put together this lovely video nearly a decade ago where she sings the praises of the open internet, and companies like "Facebook, YouTube, Twitter." And then she says: "There has never been a time that a consumer needed a federal bureaucrat to step in to intervene."

So, anyway, federal legislator Marsha Blackburn, along with Senators Wicker and Graham, has decided to "intervene" in order to attack Facebook, YouTube and Twitter, because those companies are moderating their private property in a way that these Senators don't like. It seems that they want... a bit more... what's the word I'm thinking of? Oh, right, "neutrality" in how content moderation works.

Blackburn's press release quote is particularly hilarious after what she said about net neutrality:
Pai FCC Ignored Falsely Inflated Broadband Numbers To Pat Itself On The Back
We've noted more than once that the Donald Trump, Ajit Pai FCC isn't much for this whole accurate data thing. This FCC can routinely be found parroting inaccurate lobbyist claims on a wide variety of subjects, whether that's the rate of recent broadband investment or the number of people just out of reach of affordable broadband. As such, it's not uncommon to find the FCC basing policy decisions on junk data, most recently exemplified by its rubber stamping of the job- and competition-eroding Sprint/T-Mobile merger (which was approved before FCC staff had seen ANY data).

Last year, Pai's FCC tried to claim that the number of U.S. residents without access to fixed broadband (25 Mbps downstream, 3 Mbps upstream, as per the FCC) dropped from 26.1 million people at the end of 2016 to 19.4 million at the end of 2017. Pai's agency attributed this improvement to the agency "removing barriers to infrastructure investment," which is code for gutting most meaningful consumer protections at lobbyist behest. But last year we noted that a good chunk of that improvement was thanks not only to policies Pai historically opposed (community fiber broadband networks and fiber build-out conditions affixed to the 2015 AT&T DirecTV merger), but to administrative error.

Consumer groups also pointed out that a big reason for that shift was a major false claim on the part of a smaller ISP named BarrierFree. BarrierFree had dramatically overstated its coverage areas in Form 477 data submitted to the FCC, resulting in broadband improvement numbers overstated by millions of Americans. In a follow-up report this week, the FCC quietly acknowledged that it was long aware of the "mistake" but published the falsely inflated numbers anyway:
The Government Has Been Binging On Classification. Senators Say It's Time To Start Purging.
Senators Ron Wyden and Jerry Moran have published an op-ed at Just Security detailing the government's overuse of classification (and distaste for declassification) -- a practice that uses our tax dollars to keep secrets from us. Overclassification is a problem. It has been a problem for decades, but it keeps getting worse. Multiple government agencies spend billions every year marking things "classified" and then forgetting the documents they've classified still exist.
Game Creator Has His YouTube Video Of Game Demonetized Over Soundtrack He Also Created
Content moderation, whether over social or intellectual property issues, is impossible to do well. It just is. The scale of content platforms means that automated systems have to do most of this work, and those automated systems are always completely rife with avenues for error and abuse. While this goes for takedowns and copyright strikes, it is also the case for demonetization practices at the big players like YouTube.

But how bad are these systems, really? Well, take, for instance, the case of a man who created a video game, and the soundtrack for that game, having his YouTube videos of the game demonetized due to copyright.
Prosecutor Who Used Bite Mark Analysis Even The Analyst Called 'Junk Science' Can Be Sued For Wrongful Jailing Of Innocent Woman
A lot of stuff that looks like science but hasn't actually been subjected to the rigors of the scientific process has been used by the government to wrongly deprive people of their freedom. As time moves forward, more and more of the forensic science used by law enforcement has been exposed as junk -- complicated-looking mumbo-jumbo that should have been laughed out of the crime lab years ago.

Tire marks, bite marks, hair analysis… even the DNA "gold standard" has come under fire. If it's not the questionable lab processes, it's the testimony of government expert witnesses who routinely overstated the certainty of their findings.

Bite mark analysis has long been considered junk science. But for a far longer period, it was considered good science -- a tool to be used to solve crimes and lock up perps. This case, handled by the Third Circuit Court of Appeals, contains an anomaly: the bite mark expert who helped wrongly convict a woman of murder -- taking away eleven years of her life -- actually stated on record that bite mark analysis is junk science.

This case starts with some DNA testing. Supposedly, this is as scientific as it gets. But the prosecutor appeared to have wanted to pin this crime on Crystal Dawn Weimer. So investigators chose to ignore what the DNA evidence told them. Investigating the murder of Curtis Haith, who had been beaten and shot in the face, investigators started talking to party guests who had been at Haith's apartment the night before. They zeroed in on Weimer even when available evidence seemed to point elsewhere.

From the decision [PDF]:
Astronomers Say Space X Astronomy Pollution Can't Be Fixed
We recently noted how the Space X launch of low orbit broadband satellites is creating light pollution for astronomers and scientists, and captured U.S. regulators, eager to justify rampant deregulation, haven't been willing to do anything about it. While Space X's Starlink platform will create some much needed broadband competition for rural users, the usual capacity constraints of satellite broadband mean it won't be a major disruption to incumbent broadband providers. Experts say it will be painfully disruptive to scientific study and research, however:

While Space X says it's taking steps to minimize the glare and "photo bombing" capabilities of these satellites (such as anti-reflective coating on the most problematic parts of the satellites), a new study suggests that won't be so easy. The joint study from the National Science Foundation's NOIRLab and the American Astronomical Society (AAS) found that while Space X light pollution can be minimized somewhat, it won't be possible to eliminate it:
Trump Gets Mad That Twitter Won't Take Down A Parody Of Mitch McConnell; Demands Unconstitutional Laws
I'm still perplexed by Trumpian folks insisting that the President is a supporter of free speech (or the Constitution). It's quite clear that he's been a huge supporter of censorship over the years. The latest example is, perhaps, the most bizarre (while also being totally par for the course with regard to this President). For unclear reasons, the President has retweeted someone with fewer than 200 followers who posted a picture of Senate Majority Leader Mitch McConnell in traditional Russian soldier garb... while complaining that Twitter won't take that image down, while it has "taken down" manipulated media from his supporters.

The tweet says: