Techdirt


Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-10-04 22:02
What If The Era Of Video Game Mashups Is About To Begin?
Search the Techdirt pages for the term "mashup" and you will see a metric ton of ink spilled on the topic. Most of those posts deal with the copyright implications for mashup creators, be they in music or literature. It is, frankly, a tortured landscape, largely littered with the metaphorical bodies of artists who created new and interesting artwork by combining previous works into something new. Music is the easiest entry point for those not in the know. Take the music from one song and the lyrics from another, put them together, and you get something new and interesting. When done well, the results are mind-blowing. As are the constant attacks from original creators and rightsholders who seem to see such mashup work as a threat to the originals.

But what about the video game space? Go poke around for terms like "video game mashup" and you'll get plenty of results, but all of them discuss theoretical mashups. You can find a Cracked article entitled "4 Video Game Mashups Too Awesome To Exist", or a GameRant post entitled "5 Video Game Mashups That Would Blow Our Minds". Even in those headlines you get a common theme: we wish we could have these things, but they don't exist.

Well, at least one does. Crusader Blade is a mashup mod combining Paradox Interactive's Crusader Kings 3 and TaleWorlds Entertainment's Mount & Blade 2: Bannerlord. Confused? Well, this will take some brief background.

Mount & Blade puts the player in a fictional medieval world and allows them to hack and slash their way to glory, fighting battles alongside the rest of their army, with some RPG elements thrown in. The battles featuring hundreds of combatants are really the big sell for the game, however. Crusader Kings 3 is a medieval grand strategy game focused on diplomacy, intrigue, relationship management, managing a family dynasty, and warring with other nearby kingdoms and realms. My listing warfare last was no coincidence.
The warring part of the game is extremely barebones by modern standards, literally just showing an avatar for an army that marches and then fights to a mathematical outcome. Think of the battle sequence like one in a Civilization game. It's not an afterthought, but it's close to one.

What this mod does is let owners of both games seamlessly combine them: play the grand strategy portions in Crusader Kings and conduct the actual battles in Mount & Blade. Yes, seriously.
Austin Homeowners Association Pitches In To Help Cops Kill A Guy Over Uncut Grass
This is one of the most horrendous -- and one of the most American -- stories I have ever read. It encompasses a lot of distinctly American issues, ranging from law enforcement violence to the disturbing ability of private individuals and entities to reliably summon law enforcement and bring about the destruction of others.

It starts, as so many stories about police violence do, with some needlessly exonerative reporting by journalists -- in this case by Elisha Fieldstadt of NBC News.
Chip Shortages Mar Starlink's Long-Awaited Exit From Beta
Elon Musk's Starlink has finally exited beta, but chip shortages may mar the low-orbit broadband satellite venture's big day. The company has technically stopped calling Starlink a beta product, but warns in a new FAQ on its website that users expecting shipments of their new satellite dishes may be waiting a while:
Copyright Troll Richard Liebowitz Suspended From Practicing Law In New York
We have a loooooooooong list of stories about copyright troll Richard Liebowitz and his never-ending antics in court. As we noted earlier this summer, he's been getting suspended from practicing law in courts all around the country (while also piling up more and more sanctions). And while he'd already been suspended in various NY federal courts, he's now been entirely suspended from practicing law in the state of New York. The NY state courts were following up on the federal court in the Southern District of New York suspending Liebowitz, and sought to impose a reciprocal suspension.

The ruling from earlier this week lays out the details of just a very small fraction of Liebowitz's long history of lying to courts and other misbehavior. It then responds to Liebowitz's attempt to wriggle out of this suspension by arguing that he was simply advocating for his clients as best he could, and also that his initial suspension was temporary and "he has not had a full and fair opportunity to litigate the matter in that forum."

The judges reviewing his case are... unimpressed:
Fifth Circuit Appeals Court Strips Immunity For Officers Who Arrested A Journalist For Asking Questions
The Fifth Circuit Court of Appeals has finally found some law enforcement officers not worthy of qualified immunity. The First and Fourth Amendment violations were too egregious to be ignored, even with a lack of precedential decisions on point to work with.

In January 2018, Laredo (Texas) police officers arrested a local journalist -- Priscilla Villareal, a.k.a. Lagordiloca -- for asking a couple of questions of (and receiving a couple of answers from) another Laredo PD officer. Villareal doesn't work for any local press outlet. Instead, she broadcasts directly to 120,000 Facebook followers, often adding commentary to ongoing events.

The local cops don't like her. So, they decided to have her arrested under Texas Penal Code §39.06(c), which says:
Daily Deal: The Complete 2021 Cybersecurity Super Bundle
The Complete 2021 Cybersecurity Super Bundle has 24 courses designed to help you become a cybersecurity expert. You'll learn about network security, database security, cloud security, and project management security procedures. It's on sale for $70. Use the code SAVE15NOV to save an additional 15% off this bundle and sitewide on other deals.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Senators Tillis And Leahy Raise The Alarm About Judge Albright's Patent Forum Selling In Waco
At times I've been at odds with Senators Pat Leahy and Thom Tillis regarding their views on intellectual property (though Leahy has a good history on patent law -- Tillis... not so much). However, kudos to both of them for recognizing a very, very real problem in the way Judge Alan Albright has been engaged in what's been called jurisdiction selling.

If you don't recall, Judge Albright, who was a patent litigator before being appointed to the bench in Waco, Texas, went on tour advertising that patent plaintiffs should file in his district court (where he is the only judge). And this resulted in a ton of cases all being filed there. And despite Supreme Court precedent that says judges need to be willing to transfer patent cases to proper venues, Albright has been thumbing his nose at higher courts and seemingly doing everything he can to keep cases in his court.

And now, both Tillis and Leahy are ringing appropriate alarm bells over Albright's activities. Together they have sent a letter to Supreme Court Chief Justice John Roberts calling out Judge Albright's behavior. It is not often that you see two Senators (who lead the IP subcommittee, no less) sending a letter to the Chief Justice to accuse a district court judge of being up to no good. It's quite a letter.
U.S. Broadband Growth Slows As the Profit Party Grinds To A Halt
For years we've watched major cable TV providers lose traditional cable TV subscribers hand over fist to cheaper, more flexible streaming alternatives. It was a trend that only accelerated during COVID. Don't feel too bad for companies like Charter and Comcast, however; the companies' growing monopoly over faster fixed-line broadband across huge swaths of the country has allowed them to recoup their pound of flesh via broadband fees (or unnecessary usage caps) without much in the way of repercussions.

But there are signs that the cable broadband party could be slowing down. Both Comcast and Charter (Spectrum) reported the usual number of cable TV subscriber losses, but also reported significantly fewer new broadband subscribers than usual:
King.com Opposes 'Candy Crunch' Trademark Application... From Actual Fruit Varietal Maker
King.com and its flagship product, the mobile game Candy Crush, have made it onto our pages several times in the past. The most common reason is that King appears to enjoy playing trademark bully. Fighting with the folks behind the hit game Banner Saga, not to mention picking fights with any other game maker that uses the word "candy" in their titles, has become the norm. Notably, some of the time, when there is a severe public backlash over its antics, King has also shown that it is capable of running away from such disputes.

But if you thought that all of this would mean that King would somehow soften its bullying ways, think again. King recently opposed a trademark application brought by International Fruit Genetics for "Candy Crunch". What does IFG do, exactly? Well, essentially what it sounds like it does: breeding new fruit varieties.
Content Moderation Case Study: Facebook Struggles To Correctly Moderate The Word 'Hoe' (2021)
Summary: One of the many challenges with content moderation is the flexibility of language. When applying blocklists — a list of prohibited terms considered not appropriate for the platform — moderators need to consider innocuous uses of words that, when removed from their context, appear to be violations of the platform's terms of use.

Multiple platforms have run into the phenomenon known as the "Scunthorpe problem." In this famous case, a town whose name no one would ever mistake for offensive was deemed offensive by moderation blocklists, simply because within the name of the town is the word "cunt," which many blocklists forbid.

Deploying automated blocklists can be even more challenging when dealing with specialized or niche content, which may use certain terms that are offensive outside of a specific context but are essential to discussing and understanding the relevant subject matter. A paleontologists' conference was derailed when the moderation blocklist made it impossible for participants to use words like "bone," "pubic," "stream," and "beaver."

Facebook has worked continuously to refine its moderation processes, but it still occasionally makes the wrong call when it comes to its blocklists. In January 2021, residents of (and visitors to) a Devon, England landmark were surprised to find their posts and comments vanishing from the site. After a little investigation, it became clear Facebook was deleting posts containing references to the landmark known as Plymouth Hoe.

In addition to being the name of a common garden tool (more on that in a moment), "hoe" also refers to a "sloping ridge shaped like an inverted foot or heel," such as Plymouth Hoe, which is known locally as the Hoe. Users were temporarily forced to self-censor the harmless term to avoid moderation, either by adding unnecessary punctuation or dropping the "h."
It appeared Facebook's automated processes believed these comments and posts were using a derogatory term for a romantic partner who is only in the relationship to better their own financial position.

Facebook soon apologized for the moderation error and stated it was "taking steps to rectify the error" and figure out what caused the mistaken moderation in the first place. Problem solved?

Not really. The same problem popped up again, this time affecting a New York gardening group. WNY Gardeners, a group with more than 8,000 members, is the latest to be affected by Facebook's "hoe" pruning. A member responded to the prompt "most loved & indispensable weeding tool" with "Push pull hoe!" Not long after that, the member was informed by Facebook that the comment violated the site's policy on bullying and harassment.

Company Considerations:
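The Scunthorpe problem described in this case study comes down to naive substring matching. Here's a minimal sketch (the blocklist terms and function names are hypothetical, not Facebook's actual pipeline) showing why whole-word matching fixes the town name but still can't tell a garden hoe, or Plymouth Hoe, apart from the slur:

```python
import re

# Hypothetical blocklist for illustration only.
BLOCKLIST = ["cunt", "hoe"]

def naive_flag(text):
    # Substring matching: flags any text that merely *contains* a banned
    # term, which is what trips up town names like Scunthorpe.
    lower = text.lower()
    return [term for term in BLOCKLIST if term in lower]

def word_boundary_flag(text):
    # Whole-word matching via regex word boundaries: Scunthorpe no longer
    # matches, but an innocent standalone "hoe" still does -- only
    # context, not pattern matching, can resolve that ambiguity.
    return [term for term in BLOCKLIST
            if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)]

print(naive_flag("Welcome to Scunthorpe"))          # ['cunt']
print(word_boundary_flag("Welcome to Scunthorpe"))  # []
print(word_boundary_flag("Push pull hoe!"))         # ['hoe']
```

The last line is exactly the WNY Gardeners failure mode: no amount of smarter pattern matching distinguishes the weeding tool from the insult, which is why blocklists alone keep producing these errors.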
Data Privacy Is The Price Of The Latest Antitrust Proposals
In the wake of data breaches at Target and Equifax, where hackers compromised the personal information of millions of Americans, and the Cambridge Analytica scandal, customers and policymakers are increasingly worried about the privacy and security of our personal information online. Unfortunately, policymakers often confuse these privacy and data security concerns with broader anti-tech fervor against America's leading technology businesses. But simply put, kitchen-sink anti-tech responses could exacerbate, not ease, concerns regarding data privacy and security.

Sen. Klobuchar's American Innovation and Choice Online Act, proposed in the Senate this month, provides a clear example of how "solutions" driven by animosity towards "Big Tech" could undermine consumer privacy. Lawmakers who are currently pushing antitrust proposals to attack tech businesses are creating a scenario where companies might be unable to offer the privacy and security features that their consumers trust and rely on.

This latest legislation would undermine existing privacy features and lead to more risky sharing with third parties, which has been at the heart of many privacy scandals. By making it illegal for the tech giants covered by the law to "restrict or impede" a business user's access to data created through the platform, or to limit portability, the result is simple: it will likely undermine companies' attempts to improve consumer privacy, such as Apple's new App Tracking Transparency. In most cases, this bill also requires companies to share their data with rivals, even those that might have ill intentions towards the company, its consumers, or even the United States.

Thanks to the requirements in the American Innovation and Choice Online Act, malicious businesses, including foreign companies, could exploit its data portability loophole and gain access to user information.
It opens up businesses to the very actions at the heart of previous data privacy scandals and dilutes their ability to respond with what consumers want: better security and privacy options. As a result, Klobuchar's antitrust proposal would likely harm users' privacy online and create more harm to consumers than the current tenuous claims about tech giants' market behavior.

This bill won't just harm consumer privacy; it'll harm small businesses to boot. Under the Klobuchar bill, Amazon, Google, Microsoft, and Apple would not be able to limit the use of data by those with questionable or unethical data practices, and would be greatly limited in their ability to remove fraudulent apps or other bad actors from their app stores.

These companies would generally have two options: either accept all sellers onto their phones and into their app stores, regardless of poor data privacy and security standards, distasteful products, and customer service quality, or end these programs altogether to avoid accusations of self-preferencing and the significant consequences associated with them under Klobuchar's bill.

Both options would mean consumers likely lose the benefit of knowing that only approved apps can access their data, while developers of those apps and other small businesses that use these services would not benefit from the consumer trust these marketplaces currently provide.

The interoperability requirements of the bill also fail to truly provide users themselves with increased options. Instead, they make user data, and to whom it can be ported, further subject to the decisions of companies and third parties. New opportunities for data portability between platforms can provide users more control and lower the cost of switching between services. Take the Data Transfer Project, which allows users to choose to transfer certain data, like photos, between project member services. This gives users more options for where they can choose to keep such information.
But the Klobuchar proposal structurally doesn't help users; it would require companies to provide portability and interoperability to other companies, not to the users themselves.

Proponents of Klobuchar's bill will likely point out that it establishes an affirmative defense to avoid these requirements if they would undermine privacy. However, this still would not resolve all the concerns. Companies that pursue that route would face a high burden in court and the high costs associated with litigating such a case. As a result, many companies will favor compliance and handing over consumer data rather than risk the penalties if an affirmative defense fails, even if they believe consumer privacy may ultimately be undermined. In fact, the inclusion of an affirmative defense wouldn't even be sufficient to overcome the additional privacy problems that Klobuchar's bill would create.

Rather than resolving data privacy concerns, antitrust proposals like Senator Klobuchar's could make keeping our information safe online an impossible task for tech companies. Policymakers should not make consumers and small businesses pay an unfair price and lose privacy protections just so they can go after big tech companies.

Jennifer Huddleston is Policy Counsel at NetChoice, where she focuses on emerging technology issues like privacy, competition policy, and intermediary liability.
Seventh Circuit Says Riley Doesn't Apply To Searches Of Parolees' Phones
In 2014, the Supreme Court extended Fourth Amendment coverage to the contents of cell phones. Prior to that ruling, cops had successfully argued that searching the contents of someone's cell phone was no different than searching the contents of their pockets when arresting them. Claims -- bad ones -- were made about "officer safety" and, for the most part, courts tended to agree. If the pants pocket argument didn't work, cell phones -- with their wealth of personal information and private communications -- were analogized as the digital equivalent of car trunks or address books.

This protection only extends so far, as one parolee has discovered. The key is in the wording of the Supreme Court's Riley decision, which apparently doesn't cover someone being arrested for parole violations. But the Seventh Circuit Court of Appeals decision [PDF] seems to undercut some of the findings of the Riley decision, which recognized the personal nature of these computers capable of being carried in someone's pocket. In doing so, it appears to say this enhanced expectation of privacy simply doesn't apply to anyone on parole or probation.

The opening paragraph appears to show something that oversteps the bounds of a search incident to arrest, at least in relation to the parole violation.
Internet Archive Would Like To Know What The Association Of American Publishers Is Hiding
Last year, when a bunch of the biggest publishing houses sued the Internet Archive, in the midst of a pandemic, over its digital library program, I was a bit surprised that the announcement about the lawsuit came not from any of the publishers themselves, but rather from the Association of American Publishers (AAP), which is officially not a party to the lawsuit. That alone felt a bit... sketchy.

And now it may be an issue in the lawsuit itself. Last week, the Internet Archive asked the judge for a hearing because the AAP is attempting to withhold various responsive documents covered by the discovery requests made to the publishers themselves regarding their communications with the AAP, and by a separate subpoena served on the AAP. And it appears the AAP really doesn't want that stuff to get into the hands of the Internet Archive's legal team.
Daily Deal: WriterDuet Screenwriting Pro Plan
When writing a script or story, your characters need obstacles. You don't. WriterDuet makes screenwriting effortless, with all the professional tools and no learning curve. Use it to brainstorm, organize, and create mind maps and index cards within your project. Write with standard formats, production tools, backups, and import/export of other filetypes. It's built for collaboration, with real-time co-writing, intuitive commenting, and in-app text and video chat. It's streamlined for creativity too: WriterDuet integrates and organizes your beat sheets, treatments, and outlines so that you can focus on the rewriting that your story actually needs. This tool is cloud-based, so you can work from anywhere, on any device, with online-offline web, mobile, and desktop apps. Get a 3-year subscription for $100, and use the code SAVE15NOV to get an additional 15% off of this and other deals throughout the store.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Document Freed By FOIA Shows How Much Data The FBI Can Obtain From Cellphone Service Providers
An internal FBI document shared with Joseph Cox of Motherboard by Ryan Shapiro of Property of the People gives a little more insight into law enforcement's data grabs. The Third Party Doctrine -- ushered into law by the Supreme Court decision that said anything voluntarily shared with third parties could be obtained without a warrant -- still governs a lot of these collections.

For everything else, there are warrant exceptions, plain view, inevitable discovery, a variety of "exigent circumstances," and reverse warrants that convert probable cause into "round up everyone and we'll decide who the 'usual suspects' are." Constitutional concerns still reside in this gray area, which means law enforcement will grab everything it can until precedent says it can't.

The document [PDF] gives some insight into the FBI's CAST (Cellular Analysis Survey Team). It shows how much the FBI has access to, how much it has the potential to grab, and how much unsettled law aids in the bulk collection of data the FBI can parse through to find suspects or, if enough fishing rods are present, decide whether any of it is worth its investigative time.

It's all in there, starting with "Basic Cellular Theory" and moving on to everything cell-related the FBI can get its data mitts on.
The GOP Is Mad Because New FCC Appointment Gigi Sohn Actually Defends Broadband Consumers
So after the longest (and completely unexplained) delay in FCC and NTIA history, last week the Biden administration finally got around to fully staffing the nation's top telecom regulator. While the selection of fairly centrist Jessica Rosenworcel is expected to make it through the confirmation process, the same can't be said of Gigi Sohn, a popular consumer advocate:
Nintendo's YouTube Video For Its Switch Online Upgrade Is Its Most Hated Video Ever
Well, this is moving fast. We had just been discussing Nintendo's announcement of a new tier of Nintendo Switch Online services. While there are several extras added in for the $50 per year tier, a 150% increase in cost over the base subscription, the real star of the show was supposed to be the Nintendo 64 games now included in it. As we discussed, however, the list of N64 games on offer is very limited, and there are all kinds of problems with the games that are offered. Those problems include graphical issues, scaling issues, controller lag issues, controller mapping issues, and multiplayer lag. You know... everything. When you put all of that side by side with Nintendo's concerted efforts to obliterate emulation sites from the internet, the end result is that Nintendo deprived the public of pirated classic games in order to sell them a vastly inferior product.

But it's one thing for me, known Nintendo-detractor Timothy Geigner, to say all of that. What really matters is how the paying public reacts to all of this. Well, if you're looking for a canary in the Nintendo coal mine, we can look to the video Nintendo put on YouTube announcing the new tier of NSO.
Advertising Is Content: Taskmaster Edition
Many, many years ago on Techdirt, I wrote a lot about the idea of advertising being content (and content being advertising). The general idea was that, without captive audiences any more, you had to make your advertising into really good content that people would actually like, rather than find annoying and intrusive.

I still think this is an important insight, but with the rise of a limited number of internet giants and (more importantly) Google and Facebook focusing on better and better ad targeting, most of the focus on ads these days hasn't been so much on "advertising is content" as on "advertising is creepily and slightly inaccurately targeted, but you're going to live with it, because that's all you've got." Still, every once in a while, we're reminded of this idea that advertising could actually be good content in its own right. Ironically, the example I'm about to share here... comes from Google. But we'll get to that in a moment.

In the midst of the pandemic, I discovered the amazing UK TV show Taskmaster, which is too good to describe. It's sort of a cross between a typical UK panel show, a game show with incredibly ridiculous tasks, and... I dunno. Perhaps it's the anti-Squid Game. It does involve people playing games, but it's hilarious, not deadly. You kind of have to watch it to understand how good it is, and then you kind of can't stop watching it. Thankfully, the first eight seasons are fully and officially available on YouTube outside the UK. The show is now on Season 12, but it appears that they've stopped posting full copies of the new shows to YouTube -- perhaps because the show has become so popular they're looking for a licensing deal with some streaming service or something (their content is advertising!). For what it's worth, an attempt at a US spinoff version completely flopped because it was terrible, though other spinoffs, such as the one in New Zealand, have gone well.
If you want to get a sense of the show, Season 1, Episode 1 is hard to beat, though it's missing some things that became standard in later seasons. If you want to watch the show once it really hit its stride, seasons 4, 5, and 7 are probably the best.

Anyway, while they're not posting full episodes any more, the Taskmaster YouTube page continues to post new content -- usually clips or outtakes from the show. But last week they also posted two ads. They're clearly labeled as ads -- but they're brand new Taskmaster content, advertising Google's Lens feature. They involve a couple of Taskmaster contestants competing in tasks that require the use of Google Lens to complete -- and they're just as entertaining as the show, while actually showing off this Google product I didn't even know existed. Since I've seen basically every available episode of Taskmaster, I thought this was a fantastic example of content as advertising, so I'm posting them here -- though I'll admit I'm not quite sure how well they work for people who don't watch the show:

I still think the advertising world would be better -- and less hated -- if there were a focus on making sure your advertising was actually good content that was entertaining or interesting. It may not be as exciting as trying to tweak the AI to squeeze an extra 0.000003 cents per user with more targeted ads, but it might make for a nicer world.
Google News Returning To Spain, As Awful 'Inalienable' Snippets Tax Is Replaced With Marginally Less Awful EU Copyright Directive
Back in 2014, Spain brought in a Google tax. It was even worse than Germany's, which was so unworkable that it was never applied fully. Spain's law was worse because it created a right for publishers to be paid by "news aggregators" that was "inalienable." That is, publishers could not waive that right -- they had to charge. That negated the point of Creative Commons licenses, which are designed to allow people to use material without paying. Subsequent research showed that Spain's snippet tax was a disaster for publishers, especially the smaller ones.

Unsurprisingly, in response Google went for the nuclear option and shut down Google News in Spain at the end of 2014. Seven years later -- a lifetime on the Internet -- Google News is returning to Spain:
Surprising, But Important: Facebook Sorta Shuts Down Its Face Recognition System
A month ago, I highlighted how Facebook seemed uniquely bad at taking a long-term view and publicly committing to doing things that are good for the world but bad for Facebook in the short run. So it was a bit surprising earlier this week to see Facebook (no, I'm not calling it Meta, stop it) announce that it was shutting down its Face Recognition system and (importantly) deleting over a billion "face prints" that it had stored.

The company's announcement on this was (surprisingly!) open about the various trade-offs here, both societally and for Facebook, though (somewhat amusingly) throughout the announcement Facebook repeatedly highlights the supposed societal benefits of its facial recognition.
Appeals Court Doesn't Seem To Like Much About A Criminal Defamation Law Police Used To Arrest A Critic
Three years ago, cops in New Hampshire arrested Robert Frese for the crime of… insulting some cops. Frese, facing a suspended sentence for smashing the window of a neighbor's car, left a comment on a local news site claiming Exeter Police Chief William Shupe was a "coward" who was "covering for dirty cops."

Instead of taking his online lumps like a true public servant, Shupe had Frese arrested, apparently hoping to use an outdated criminal defamation law to trigger Frese's suspended sentence and get him locked up for the next couple of years.

That effort failed. The ACLU got involved, as did the state's Justice Department, which said Frese's comment was not unlawful. Shortly thereafter, the criminal defamation charge was dropped. A wrongful arrest lawsuit followed, netting Frese a $17,500 settlement. The police, of course, admitted no wrongdoing. Instead, they continued to claim the arrest was lawful and supported by a law that managed to make its way from the 15th century to the 21st century almost untouched.

It isn't over yet. Frese, along with the ACLU, is still trying to get that law stricken from the books, hopefully in the form of a ruling finding it unconstitutional. Given what happened during oral arguments in front of the First Circuit Court of Appeals, it looks like Frese may be on his way to victory. Here's Thomas Harrison of Courthouse News Service with more details.
Daily Deal: The Ultimate Learn to Code Bundle
The Ultimate Learn to Code Bundle has 80+ hours of immersive, multifaceted programming education. When it comes to web programming, there are a lot of tools you can learn and use to make your workflow more efficient and your products more exciting. This bundle will give you a crash course in a variety of languages and tools, plus how to integrate them, giving you an excellent foundation for further learning. Courses cover Ruby on Rails 5, HTML5, CSS3, JavaScript, Python, iOS 10, and more. The bundle is on sale for $39. Use the coupon code SAVE15NOV and get an additional 15% off.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Whole YouTube Radicalizes People Story Doesn't Seem To Have Much Evidence To Back It Up
There seem to be a lot of "myths" about big internet companies that don't stand up to much scrutiny, even as they're often accepted as common knowledge. There's the idea that Facebook's algorithm remains in place only because it makes Facebook more money (Facebook's own internal research suggests otherwise), or that disinformation goes viral on social media first (a detailed study showed cable news is a much bigger vector of virality).

Another big one is that YouTube "radicalizes" people via its algorithm. There are lots of stories about how someone went on to YouTube to watch, like, video game clips, and within a week had become an alt-right edgelord troll shouting Trump slogans or whatever. Hell, this was a key plot point in The Social Dilemma, in which the young boy in the fictionalized sitcom family starts watching some videos on his phone and a week later is participating in an extremist political rally that turns into a riot.

However, a very thorough recent study (first highlighted by Ars Technica) found that there's really not much evidence to support any of this narrative. From the abstract:
Clearview Finally Submits AI For Independent Testing; Only Tests Feature It Isn't Actually Selling
At long last, Clearview has finally had its AI tested by an independent party. It has avoided doing this since its arrival on the facial recognition scene, apparently content to bolster its reputation by violating state privacy laws, making statements about law enforcement efficacy that are immediately rebutted by law enforcement agencies, and seeing nothing wrong with scraping the open web for personal information to sell to government agencies, retailers, and bored rich people. Kashmir Hill reports for the New York Times that Clearview joined the hundreds of other tech companies that have had their algorithms tested by the National Institute of Standards and Technology.
Nintendo Killed Emulation Sites Then Released Garbage N64 Games For The Switch
So, here's the thing: I get accused of picking on Nintendo a whole lot. But please know, it's not that I want to pick on them, it's just that they make it so damned easy. I'm a golfer, okay? If I have a club in my hand and suddenly a ball on a tee appears before me, I'm going to hit that ball every time without hesitation. You will recall that a couple of years back, Nintendo opened up a new front in its constant IP wars by going after ROM and emulation sites. That caused plenty of sites to simply shut themselves down, but Nintendo also made a point of getting some scalps to hang on its belt, most famously in the form of RomUniverse. That site, which very clearly had infringing material not only on the site but promoted by the site's ownership, got slapped around in the courts to the tune of a huge judgment against it, one the site owners simply cannot pay. But all of those are details and don't answer the real question: why did Nintendo do this? Well, as many expected from the beginning, it did this because the company was planning to release a series of classic consoles, namely the NES mini and SNES mini. But, of course, what about later consoles? Such as the Nintendo 64? Well, the answer to that is that Nintendo has offered a Nintendo Switch Online service uplift that includes some N64 games that you can play there instead.
UK Schools Normalizing Biometric Collection By Using Facial Recognition For Meal Payments
Subjecting students to surveillance tech is nothing new. Most schools have had cameras installed for years. Moving students from desks to laptops allows schools to monitor internet use, even when students aren't on campus. Bringing police officers into schools to participate in disciplinary problems allows law enforcement agencies to utilize the same tech and analytics they deploy against the public at large. And if cameras are already in place, it's often trivial to add facial recognition features. The same tech that can keep kids from patronizing certain retailers is also being used to keep deadbeat kids from scoring free lunches. While some local governments in the United States are trying to limit the expansion of surveillance tech in their own jurisdictions, governments in the United Kingdom seem less concerned about the mission creep of surveillance technology.
Techdirt Podcast Episode 303: The Facebook Papers & The Media
The documents revealed by Facebook whistleblower Frances Haugen are full of important information — but the media hasn't been doing the best job of covering that information and all its nuances. There are plenty of examples of reporters taking one aspect out of context and presenting it in the worst possible light, while ignoring the full picture. This week, we're joined by law professor Kate Klonick to discuss the media's failings in covering the Facebook Papers, and the unwanted outcomes this could produce. Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
The Scale Of Content Moderation Is Unfathomable
Sometimes it's difficult to get across to people "the scale" part when we talk about the impossibility of content moderation at scale. It's massive. And this is why whenever there's a content moderation decision that you dislike or that you disagree with, you have to realize that it's not personal. It wasn't done because someone doesn't like your politics. It wasn't done because of some crazy agenda. It was done because a combination of thousands of people around the globe and still sketchy artificial intelligence are making an insane number of decisions every day. And they just keep piling up and piling up and piling up. Evelyn Douek recently gave a (virtual) talk at Stanford on The Administrative State of Content Moderation, which is worth watching in its entirety. However, right at the beginning of her talk, she presented some stats that highlight the scale of the decision making here. Based on publicly revealed transparency reports from these companies, in just the 30 minutes allotted for her talk, Facebook would take down 615,417 pieces of content, YouTube would take down 271,440 videos, channels, and comments, and TikTok would take down 18,870 videos. And, also, the Oversight Board would receive 48 petitions to review a Facebook takedown decision. And, as she notes, that's only the take down decisions. It does not count the "leave up" decisions, which are also made quite frequently. Facebook is not targeting you personally. It is not Mark Zuckerberg sitting there saying "take this down." The company is taking down over a million pieces of content every freaking hour. It's going to make mistakes. And some of the decisions are ones that you're going to disagree with. And, to put that in perspective, she notes that in its entire history, the US Supreme Court has decided a grand total of approximately 246 1st Amendment cases, or somewhere around one per year.
And, of course, in those cases, it often involves years of debates, and arguments, and briefings, and multiple levels of appeals. And sometimes the Supreme Court still gets it totally wrong. Yet we expect Facebook -- making over a million decisions to take content down every hour -- to somehow magically get it all right? Anyway, there's a lot more good stuff in the talk and I suggest you watch the whole thing to get a better understanding of the way content moderation actually works. It would be helpful for anyone who wants to opine on content moderation to not just understand what Douek is saying, but to really internalize it.
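For what it's worth, the "over a million pieces of content every hour" figure follows directly from the per-30-minute stats Douek cites. Here's a quick back-of-the-envelope sketch of that arithmetic (the numbers are the ones quoted above; the function name and structure are just illustrative):

```python
# Extrapolate Douek's per-30-minute takedown figures (quoted in the post)
# to hourly and daily rates. This is pure arithmetic, not transparency-report
# data beyond the three numbers cited above.

TAKEDOWNS_PER_30_MIN = {
    "Facebook": 615_417,  # pieces of content
    "YouTube": 271_440,   # videos, channels, and comments
    "TikTok": 18_870,     # videos
}

def extrapolate(per_30_min: int) -> dict:
    """Scale a per-30-minute count to hourly and daily rates."""
    hourly = per_30_min * 2
    return {"hourly": hourly, "daily": hourly * 24}

for platform, count in TAKEDOWNS_PER_30_MIN.items():
    rates = extrapolate(count)
    print(f"{platform}: ~{rates['hourly']:,}/hour, ~{rates['daily']:,}/day")
```

Facebook's row alone works out to roughly 1.23 million takedowns per hour — nearly 30 million a day — which is the scale against which any individual moderation mistake has to be judged.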
Latest Moral Panic: No, TikTok Probably Isn't Giving Teenage Girls Tourette Syndrome
If you recall, the U.S. spent much of 2020 freaking out about TikTok's threat to privacy, while oddly ignoring that the company's privacy practices are pretty much the international norm (and ignoring a whole lot of significantly worse online security and privacy problems we routinely do nothing about). More recently there was another moral panic over the idea that TikTok was turning children into immoral thieving hellspawn as part of the Devious Licks meme challenge. Now, one initial report by the Wall Street Journal has alleged that teen girls are watching so many TikToks by other girls with Tourette Syndrome that they're developing tics. The idea that you can develop an entirely new neurological condition by watching short form videos sounds like quite a stretch, but the claim has already bounced around the news ecosystem for much of the year:
Daily Deal: TREBLAB Z2 Bluetooth 5.0 Noise-Cancelling Headphones
The Z2 headphones earned their name because they feature twice the sound, twice the battery life, and twice the convenience of competing headphones. This updated version of the original Z2s comes with a new all-black design and Bluetooth 5.0. Packed with TREBLAB's most advanced Sound2.0 technology with aptX and T-Quiet active noise-cancellation, these headphones deliver goose bump-inducing audio while drowning out unwanted background noise. These headphones are on sale for $79. We're having an early holiday sale this week, so use the code SAVE15NOV to get an additional 15% off of your purchase storewide.
Netflix Files Anti-Slapp Motion To Dismiss Lawsuit Claiming One Of Its Series Caused A Teen To Commit Suicide
Because Netflix is big, it draws lawsuits. It has been sued for defamation, copyright infringement, and, oddly, defamation via use of a private prison's logo in a fictional TV show. It has also been sued for supposedly contributing to a teen's suicide with its series "13 Reasons Why," which contained a lot of disturbing subject matter that teens deal with daily, like bullying, sexual assault, and -- most relevant here -- suicide. The final episode of the first season contained a suicide scene, one that was removed by Netflix two years after the show debuted. While undeniably a tragedy, the attempt to blame Netflix for this teen's suicide is severely misguided. The lawsuit filed by the teen's survivors alleges Netflix had a duty to warn viewers of the content (content warnings were added to the show a year after its release) and it failed to do so, making it indirectly liable for this death. Netflix is now trying to get this lawsuit dismissed using California's anti-SLAPP law because, as it argues persuasively, this is all about protected speech, no matter how the plaintiffs try to portray it as a consumer protection issue. (h/t Reason) Netflix's anti-SLAPP motion [PDF] points out this isn't the first time teen suicide has been depicted in pop culture, nor is it the first time people have tried to sue creators over the content of their creations. None of those lawsuits have been successful.
The Internet Is Not Facebook; Regulating It As If It Were Will Fuck Things Up
I've mocked the NY Times for its repeated failures to understand basic facts about internet regulations such as Section 230 -- but the organization also deserves credit when it gets things (mostly) right. Last week, Farhad Manjoo wrote up a great opinion piece noting that, even if you agree that Facebook is bad, most regulatory proposals would make things much, much worse. He focuses on the blatantly unconstitutional "Health Misinformation Act" from Senators Klobuchar and Lujan, which would appoint a government official to declare what counts as health misinformation, and then remove Section 230 protections from any website that has such content. As Manjoo rightly notes, it's as if everyone has forgotten who was President from 2017 to early 2021 and hasn't considered what he or someone like him would do with such powers:
Publishers Want To Make Ebooks More Expensive And Harder To Lend For Libraries; Ron Wyden And Anna Eshoo Have Questions
Techdirt has noted in the past that if public libraries didn't exist, the copyright industry would never allow them to be created. Publishers can't go back in time to change history (fortunately). But the COVID pandemic, which largely stopped people borrowing physical books, presented publishers with a huge opportunity to make the lending of newly-popular ebooks by libraries as hard as possible. A UK campaign to fight that development in the world of academic publishing, called #ebookSOS, spells out the problems. Ebooks are frequently unavailable to institutions to license as ebooks. When they are on offer, they can be ten or more times the cost of the same paper book. The #ebookSOS campaign has put together a spreadsheet listing dozens of named examples. One title cost £29.99 as a physical book, and £1,306.32 for a single-user ebook license. As if those prices weren't high enough, it's common for publishers to raise the cost with no warning, and to withdraw ebook licenses already purchased. One of the worst aspects is the following:
Lessons From The First Internet Ages
On Tuesday and Wednesday of this week I'm excited to be participating in an event that the Knight Foundation is putting on, curated by law professors Eric Goldman and Mary Anne Franks, entitled Lessons From the First Internet Ages. The event kicks off with the release of reflections on "the first internet age" from various internet luminaries who were there -- but also, most importantly, talking about what they might have done differently. I'm going to have a writeup at some future date on my response to the pieces, but I highly recommend checking them all out. In particular, I'll recommend the pieces by Senator Ron Wyden, Nicole Wong, Brewster Kahle, Vint Cerf, Reid Hoffman, and Tim Berners-Lee. I also think that the interviews Eric Goldman conducted with Matthew Prince and Nirav Tolia were both fascinating. Just to give you a snippet, Wyden's article really is excellent:
Hawaii School, Police Department On The Verge Of Being Sued For Arresting A Ten-Year-Old Girl Over A Drawing
Putting cops in schools is never a good idea. It only encourages school administrators to hand over discipline problems to the "proper authorities," which is what administrators used to be until the addition of law enforcement on campus. Having cops on tap also appears to encourage parents to demand a law enforcement response to disciplinary problems. That's what happened at a school in Hawaii, where a 10-year-old student was arrested over a drawing another student's parent didn't like. The school -- and the police department that performed the arrest -- are on the verge of being sued by the student and the ACLU. Here's a brief summary of the incident from the ACLU:
Because Of Course: Trump's SPAC Deal May Have Broken The Law
If you thought that Trump's new Truth Social website's potential legal problems with its apparent failure to abide by the license on the open source code it seems to be using would be the worst legal problems facing the site, well, you underestimated The Donald. There's been plenty of talk about the SPAC deal that valued the company at billions of dollars through one of those reverse merger IPOs. But, now the NY Times is reporting that the way the deal was done may have violated securities laws. So on brand.
California Prosecutors Are Still Trying To Get Signal To Hand Over User Info It Simply Doesn't Possess
Encrypted messaging app Signal is slowly educating federal prosecutors on the meaning of the idiom "blood from a stone." Usually this refers to someone who is judgment-proof (or extortion-proof or whatever), since you can't take money a person doesn't have. This would be the digital equivalent. Prosecutors in California have tried three times this year to obtain data on Signal users that Signal never collects or retains. Issue all the subpoenas you want, Signal says, but don't expect anything to change. We can't give you what we don't have. (h/t Slashdot)
Daily Deal: The 2021 Complete Video Production Super Bundle
The 2021 Complete Video Production Super Bundle has 10 courses to help you learn all about video production. Aspiring filmmakers, YouTubers, bloggers, and business owners alike can find something to love about this in-depth video production bundle. Video content is fast changing from the marketing tool of the future to that of the present, and here you'll learn how to make professional videos on any budget. From the absolute basics, to screenwriting, to the advanced shooting and lighting techniques of the pros, you'll be ready to start making high quality video content. You'll learn how to make amazing videos, whether you use a smartphone, webcam, DSLR, mirrorless, or professional camera. It's on sale for $35.
Forget 'The Kids These Days'; It's The Adults And Their Moral Panics To Worry About
A recent episode of the Reply All podcast, Absolutely Devious Lick, touched on a bunch of interesting points regarding the never-ending debates about social media, content moderation, and how it's supposedly damaging the kids these days. It's worth listening to the entire episode, but it begins by talking about a very slightly viral TikTok "challenge" which became known as Devious Licks -- lick being slang for something you stole. It started with a kid putting up a TikTok video of him holding a box of disposable masks, suggesting that he had stolen it from the school. Because school kids sometimes do stupid things to copy their stupid friends, a few others posted similar videos, including one early one of a kid taking a soap dispenser. And then there were some stories of it spreading and people going more extreme, because, you know, kids. But it didn't seem to spread that far initially. But, of course, the thing became a lot more viral after mainstream media jumped on it with their typical "OMG, the kids these days" kind of coverage, starting with the New York Times, CNN, USA Today and then every random local news outlet jumping on the trend to tsk tsk about the kids these days. Prominent grandstanding Senator Richard Blumenthal called on TikTok execs to testify over all of this, which turned into another ridiculous Senate hearing in which old men yell at social media execs about how they're harming kids. But, scratch the surface a little, and beyond a few dumb kids, this seems a lot more like adults over-reacting and freaking out, and making the story go much, much, much more viral than it did in reality. Indeed, the only news organization I've seen that recognized that most of this was a moral panic by adults was Curbed, which noted that, yes, there was some actual vandalism done by kids, but a lot of it seemed to be kids mocking the trend as well:
It's Ridiculous The 'Developing World' Wasn't Given Access To The Facebook Files
By now it's fairly clear the Facebook leaks showcase a company that prioritized near-mindless international growth over the warnings of its own experts. They also show a company that continues to painfully struggle to be even marginally competent at scale, whether we're talking about content moderation or rudimentary customer service. While this has become an all-encompassing media spectacle, the real underlying story isn't particularly unique. It's just a "growth for growth's sake" mindset, where profit and expansion trumped all reason. It just happens to be online, and at unprecedented international scale. One thing I haven't seen talked a lot about is the fact that if you look back a few years, an awful lot of folks in developing nations saw these problems coming a mile away, long before their Western counterparts. For a decade, international activists warned repeatedly about the perils of Facebook's total failure to understand the culture/regulations/language/norms of the countries they rapidly flooded into. Yet bizarrely, Frances Haugen's PR team somehow excluded most of these countries when it came time to recently release access to the Facebook files:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side is That Anonymous Coward with a comment about the baffling valuation of Trump's broken social media venture:
This Week In Techdirt History: October 24th - 30th
Five Years Ago
This week in 2016, the American Bar Association prepared a report on Trump's libel bully behavior, but was scared out of publishing it... for fear of being sued by Trump. Meanwhile, the Clinton campaign was trying to deny the authenticity of the emails released by Wikileaks. The ACLU was taking the government to court over unreleased FISA opinions, new documents revealed how AT&T hoped to profit by helping law enforcement spy on the public, and Yahoo was trying to get permission to talk about the email scanning it did for the government. Via a FOIA request from the EFF, we learned more about why the copyright office misrepresented copyright law to the FCC — unsurprisingly, it was at the behest of the MPAA, which began trying to mock EFF over the story. Also, this was the week that Oracle officially appealed Google's fair use win over API copyright.

Ten Years Ago
This week in 2011, Apple was continuing its trademark war against anyone using an apple in a logo, Universal was using copyright to go after parodies, and we learned that ICE seized 20 domain names for the NFL over the weekend. The House of Representatives was rushing out its version of PROTECT IP, which emerged with the even more ridiculous name of the E-PARASITES Act — and it was really, really bad and required some interesting flip-flopping on behalf of its sponsors. Amidst this, we published a three-part series on the many historical "killers" of the movie industry (part one, part two, part three).

Fifteen Years Ago
This week in 2006, we looked at how the DMCA takedown process was working for the now-Google-owned YouTube, and at the way many of the weak lawsuits against Google (and not just for YouTube) were strengthening the company's position by giving it easy wins. We also looked at how copy protection and walled gardens were making music annoying for consumers while, amidst lots of fearmongering about the internet hurting music sales, Weird Al was crediting it with his album's success.
We saw another way to rebuff the RIAA's lawsuits — by hiring a lawyer that has beaten them before — while the association also failed in its attempts to legally scour the hard drives of its lawsuit targets. Meanwhile, a newspaper was called out by a lawyer for thinking it could unilaterally tell people that "fair use is not applicable" to uses of its content, leading the publication to change the language on its site — but when the lawyer who called them out sent a thank you note, they threatened to sue him for defamation.
Disbarment Proceedings Show How A Maryland Prosecutor Covered Up An FBI Agent's Lies For More Than Twenty Years
A recent sanctions case against a Maryland prosecutor -- one involving a murder case and the use of crime scene forensic "science" -- highlights the real world effects of the FBI's tendency to overstate the certainty of forensic findings in court. It also highlights another long-running problem in the justice system: the withholding of exculpatory evidence by prosecutors who seem willing to take any "win," whether it's earned or not. (h/t Steve Klepper) The sanctions order [PDF] recounts the case, which dates back to 1981. Joseph Cassilly was the prosecutor who handled the case of Diane Becker, who was found murdered in her trailer. Her boyfriend, Joseph Hudson, was found dead on a nearby road. He had been shot several times. There were two suspects: Deno Kanaras and John Huffington. Both were indicted for the murder. Kanaras admitted to being present when the murder occurred, but claimed Huffington killed the two people. Kanaras testified against Huffington and Huffington was convicted on two counts of felony murder in 1982. He appealed and his conviction was reversed. Huffington was tried again in 1983. Kanaras was, again, the only eyewitness and testified against Huffington. By the time this trial occurred, Kanaras had already been convicted of Becker's murder. This time around, the prosecution brought in an FBI agent to testify, Michael Malone. Attempting to prove Huffington was at the scene of Becker's murder, Agent Malone offered this testimony:
The Faintest Hint Of Regulatory Accountability Has Tesla Acting Like An Adult
Coming from telecom, I'm painfully aware of the perils of the "deregulation is a panacea" mindset. For literally thirty straight years, the idea that deregulation results in some kind of miraculous utopia informed U.S. telecom policy, resulting in a sector that was increasingly consolidated and uncompetitive. In short, the entirety of U.S. telecom policy (with sporadic, short-lived exceptions) has been to kowtow to regional telecom monopolies. Efforts to do absolutely anything other than that (see: net neutrality, privacy, etc.) are met with immeasurable hyperventilation and predictions of imminent doom. So I think the U.S. telecom sector holds some valuable lessons in terms of regulatory competency and accountability. No, you don't want regulators that are heavy-handed incompetents. And yes, sometimes deregulation can help improve already competitive markets (which telecom most certainly isn't). At the same time, you don't want regulators who are mindless pushovers, where companies are keenly aware they face zero repercussions for actively harming consumers, public safety, or the health of a specific market. Enter Tesla, which is finally facing something vaguely resembling regulatory scrutiny for its bungled and falsehood-filled deployment of "full self-driving" technology. As crashes and criticism pile up, Tesla is arguably facing its first ever instance of regulatory accountability in the face of more competent government hires and an ongoing investigation into the company's claims by the NHTSA. This all might result in no meaningful or competent regulatory action, but the fact that nobody can be sure of that is still a notable sea change. This, in turn, has automatically resulted in a new tone at Tesla that more reflects a company run by actual adults:
As Prudes Drive Social Media Takedowns, Museums Embrace... OnlyFans?
Over the last few years, we've seen more and more focus on using content moderation efforts to stamp out anything even remotely upsetting to certain loud interest groups. In particular, we've seen NCOSE, formerly "Morality in Media," spending the past few years whipping up a frenzy about "pornography" online. They were one of the key campaigners for FOSTA, which they flat out admitted was step one in their plan to ban all pornography online. Recently, we've discussed how MasterCard had put in place ridiculous new rules that were making life difficult for tons of websites. Some of the websites noted that Mastercard told them it was taking direction from... NCOSE. Perhaps not surprisingly, just recently, NCOSE gave MasterCard its "Corporate Leadership Award" and praised the company for cracking down on pornography (which NCOSE considers the same as sex trafficking or child sexual abuse). Of course, all of this has some real world impact. We've talked about how eBay, pressured to remove such content because of FOSTA and its payment processors, has been erasing LGBTQ history (something, it seems, NCOSE is happy about). And, of course, just recently, OnlyFans came close to prohibiting all sexually explicit material following threats from its financial partners -- only to eventually work out a deal to make sure it could continue hosting adult content. But all of this online prudishness has other consequences. Scott Nover, over at Quartz, has an amazing story about how museums in Vienna are finding that images of classic paintings are being removed from all over the internet. Though, they've come up with a somewhat creative (and surprising) solution: the museums are setting up OnlyFans accounts, since the company is one of the remaining few which is able to post nude images without running afoul of content moderation rules. Incredibly, the effort is being run by Vienna's Tourist Board.
Swiss Court Says ProtonMail Isn't A Telecom, Isn't Obligated To Retain Data On Users
ProtonMail offers encrypted email, something that suggests it's more privacy conscious than others operating in the same arena. But, being located in Switzerland, it's subject to that country's laws. That has caused some friction between its privacy protection claims and its obligations to the Swiss government, which, earlier this year, rubbed French activists the wrong way when their IP addresses were handed over to French authorities. The problem here wasn't necessarily the compliance with local laws. It was Proton's claim that it did not retain this information. If it truly didn't, it would not have been able to comply with this request. But it is required by local law to retain a certain amount of information. This incident coming to light resulted in ProtonMail altering the wording on its site to reflect this fact. It no longer claimed it did not retain this info. The new statement merely says this info "belongs" to users and Proton's encryption ensures it won't end up in the hands of advertisers. Proton's retention of this data was the result of a Swiss data retention law and, more recently, a revocation of its ability to operate largely outside the confines of this law. Terry Ang of Jurist explains the how and why behind Proton's relinquishment of IP addresses to French authorities, which resulted in its challenge of the applicability of the local data retention law.
Everything You Know About Section 230 Is Wrong (But Why?)
There are a few useful phrases that allow one instantly to classify a statement. For example, if any piece of popular health advice contains the word "toxins," you can probably disregard it. Other than, "avoid ingesting them." Another such heuristic is that if someone tells you "I just read something about §230..." the smart bet is to respond, "you were probably misinformed." That heuristic can be wrong, of course. Yet in the case of §230 of the Communications Decency Act, which has been much in the news recently, the proportion of error to truth is so remarkable that it begs us to ask, "Why?" Why do reputable newspapers, columnists, smart op-ed writers, legally trained politicians, even law professors, spout such drivel about this short, simple law?§230 governs important aspects of the liability of online platforms for the speech made by those who post on them. We have had multiple reasons recently to think hard about online platforms, about their role in our politics, our speech, and our privacy. §230 has figured prominently in this debate. It has been denounced, blamed for the internet's dysfunction, and credited with its vibrancy. Proposals to repeal it or drastically reform it have been darlings of both left and right. Indeed, both former President Trump and President Biden have called for its repeal. But do we know what it actually does? Here's your quick quiz: Can you tell truth from falsity in the statements below? I am interested in two things. Which of these claims do you believe to be true, or at least plausible? How many of them have you heard or seen?The §230 Quiz: Which of These Statements is True? Pick all that apply.A.) §230 is the reason there is still hate speech on the internet. The New York Times told its readers the reason "why hate speech on the internet is a never-ending problem," is "because this law protects it." quoting the salient text of §230.B.) 
§230 forbids, or at least disincentivizes, companies from moderating content online, because any such moderation would make them potentially liable. For example, a Wired cover story claimed that Facebook had failed to police harmful content on its platform, partly because it faced "the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand."C.) The protections of §230 are only available to companies that engage in "neutral" content moderation. Senator Cruz, for example, in cross examining Mark Zuckerberg said, "The predicate for Section 230 immunity under the CDA is that you're a neutral public forum. Do you consider yourself a neutral public forum?"D.) §230 is responsible for cyberbullying, online criminal threats and internet trolls. It also protects against liability when platforms are used to spread obscenity, child pornography or for other criminal purposes. A lengthy 60 Minutes program in January of this year argued that the reason that hurtful, harmful and outright illegal content stays online is the existence of §230 and the immunity it grants to platforms. Other commentators have blamed §230 for the spread of everything from child porn to sexual trafficking.E.) The repeal of §230 would lead online platforms to police themselves to remove hate speech and libel from their platforms because of the threat of liability. For example, as Joe Nocera argues in Bloomberg, if §230 were repealed companies would "quickly change their algorithms to block anything remotely problematic. People would still be able to discuss politics, but they wouldn't be able to hurl anti-Semitic slurs."F.) §230 is unconstitutional, or at least constitutionally problematic, as a speech regulation in possible violation of the First Amendment. 
Professor Philip Hamburger made this claim in the pages of the Wall Street Journal, arguing that the statute is a speech regulation that was passed pursuant to the Commerce Clause and that "[this] expansion of the commerce power endangers Americans' liberty to speak and publish." Professor Jed Rubenfeld, also in the Wall Street Journal, argues that the statute is an unconstitutional attempt by the state to allow private parties to do what it could not do itself — because §230 "not only permits tech companies to censor constitutionally protected speech but immunizes them from liability if they do so."

What were your responses to the quiz? My guess is that you've seen some of these claims and find at least one or two of them plausible. Which is a shame, because they are all false, or at least wildly implausible. Some of them are actually the opposite of the truth. Take B.), for example: §230 was created to encourage online content moderation. The law before §230 made companies liable when they acted more like publishers than mere distributors, encouraging a strictly hands-off approach. Others are simply incorrect. §230 does not require neutral content moderation — whatever that would mean. In fact, it gives platforms the leeway to impose their own standards: allowing only scholarly commentary, or opening the doors to a free-for-all. Forbidding or allowing bawdy content. Requiring identification of posters or allowing anonymity. Filtering by preferred ideology, or religious position. Removing posts by liberals or conservatives or both.

What about hate speech? You may be happy or sad about this but, in most cases, saying bad things about groups of people, whether identified by gender, race, religion, sexual orientation or political affiliation, is legally protected in the United States. Not by §230, but by the First Amendment to the US Constitution. Criminal behavior?
§230 has an explicit exception saying it does not apply to liability for obscenity, the sexual exploitation of children or violation of other Federal criminal statutes. As for the claim that "repeal would encourage more moderation by platforms," in many cases it has things backwards, as we will see.

Finally, unconstitutional censorship? Private parties have always been able to "censor" speech by not printing it in their newspapers, removing it from their community bulletin boards, choosing which canvassers or political mobilizers to talk to, or just shutting their doors. They are private actors to whom the First Amendment does not apply. (Looking at you, Senator Hawley.) All §230 does is say that the moderator of a community bulletin board isn't liable when the crazy person puts up a libelous note about a neighbor, but also isn't liable for being "non-neutral" when she takes down that note and leaves up the one advertising free eggs. If the law says explicitly that she is neither responsible for what's posted on the board by others, nor for her actions in moderating the board, is the government enlisting her in pernicious, pro-egg state censorship in violation of the First Amendment?! "Big Ovum is Watching You!"? To ask the question is to answer it. Now admittedly, these are really huge bulletin boards! Does that make a difference? Perhaps we should decide that it does and change the law. But we will probably do so better and with a clearer purpose if we know what the law actually says now.

It is time to go back to basics. §230 does two simple things. Platforms are not responsible for what their posters put up, but they are also not liable when they moderate those postings, removing the ones that break their guidelines or that they find objectionable for any reason whatsoever. Let us take them in turn.

1.) It says platforms, big and small, are not liable for what their posters put up. That means that social media, as you know it — in all its glory (Whistleblowers!
Dissent! Speaking truth to power!) and vileness (see the internet generally) — gets to exist as a conduit for speech. (§230 does not protect platforms or users if they are spreading child porn, obscenity or breaking other Federal criminal statutes.) It also protects you as a user when you repost something from somewhere else. This is worth repeating: §230 protects individuals. Think of the person who innocently retweets, or reposts, a video or message containing false claims; for example, a #MeToo, #BLM or #Stopthesteal accusation that turns out to be false or even defamatory. Under traditional defamation law, a person republishing defamatory content is liable to the same extent as the original speaker. §230 changes that rule. Perhaps that is good or perhaps that is bad — but think about what the world of online protest would be like without it. #MeToo would become… #Me? #MeMaybe? #MeAllegedly? Even assuming that the original poster could find a platform to post that first explosive accusation on. Without §230, would they? As a society we might end up thinking that the price of ending that safe harbor was worth it, though I don't think so. At the very least, we should know how big the bill is before choosing to pay it.

2.) It says platforms are not liable for attempting to moderate postings, including moderating in non-neutral ways. The law was created because, before its passage, platforms faced a Catch-22. They could leave their spaces unmoderated and face a flood of rude, defamatory, libelous, hateful or merely poorly reasoned postings. Alternatively, they could moderate them and see the law (sometimes) treat them as "publishers" rather than mere conduits or distributors. The New York Times is responsible for libelous comments made in its pages, even if penned by others. The truck firm that hauled the actual papers around the country (how quaint) is not.

So what happens if we merely repeal §230?
A lot of platforms that now moderate content extensively for violence, nudity, hate speech, intolerance, and apparently libelous statements would simply stop doing so. You think the internet is a cesspit now? What about Mr. Nocera's claim that they would immediately have to tweak their algorithms or face liability for anti-Semitic postings? First, platforms might well be protected if they were totally hands-off. What incentive would they have to moderate? Second, saying hateful things, including anti-Semitic ones, does not automatically subject one to liability; indeed, such statements are often protected from legal regulation by the First Amendment. Mr. Nocera is flatly wrong. Neither the platform nor the original poster would face liability for slurs, and in the absence of §230, many platforms would stop moderating them. Marjorie Taylor Greene's "Jewish space-laser" comments manage to be both horrifyingly anti-Semitic and stupidly absurd at the same time. But they are not illegal. As for libel, the hands-off platform could claim to be a mere conduit. Perhaps the courts would buy that claim and perhaps not. One thing is certain: the removal of §230 would give platforms plausible reasons not to moderate content.

Sadly, this pattern of errors has been pointed out before. In fact, I am drawing heavily and gratefully on examples of misstatements analyzed by tech commentators and public intellectuals, particularly Mike Masnick, whose page on the subject has rightly achieved internet-law fame. I am also indebted to legal scholars such as Daphne Keller, Jeff Kosseff and many more, who play an apparently endless game of Whack-a-Mole with each new misrepresentation. For example, they and people like them eventually got the New York Times to retract the ludicrous claim featured above. That story got modified. But ten others take its place. I say an "endless game of Whack-a-Mole" without hyperbole. I could easily have cited five more examples of each error.
But all of this begs the question. Why? Rather than fight this one falsehood at a time, ask instead, "why is 'respectable' public discourse on this vital piece of legislation so wrong?"

I am a law professor, which means I am no stranger to mystifying error. It appears to be an endlessly renewable resource. But at first, this one had me stumped. Of course, some of the reasons are obvious.
Daily Deal: The 2021 All-in-One Computer Science Bundle
The 2021 All-in-One Computer Science Bundle has 11 courses to teach you the essentials of computer science. You'll learn about Java, C++, Ruby on Rails, Python, and more. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
NY Times Continues Its Inability To Report Accurately On Section 230 And Content Moderation
Daisuke Wakabayashi is a NY Times business reporter who seems to have a weird blind spot regarding Section 230 and online content moderation. Actually, perhaps "blind spot" isn't the right term for it. Two years ago, he was responsible for the massive full-page article at the front of the Business Section falsely claiming that Section 230 was responsible for hate speech online. That's the one where, infamously, the NY Times had to write a correction that completely undermined the headline of the article:
Tired Of Federal Apathy, Oakland Moves To Ban Anticompetitive Broadband Landlord Deals
We've noted for years how corruption and apathy have resulted in the U.S. broadband sector being heavily monopolized, leaving 83 million Americans with the choice of just one ISP. Tens of millions more Americans only have the choice of their local cable company or an apathetic local phone company that hasn't meaningfully upgraded its aging DSL lines in twenty years. On top of that problem is another problem: ISPs routinely bribe or bully apartment, condo, and other real estate owners into providing them cozy exclusivity arrangements that block broadband competition on a block-by-block level as well.

While the FCC tried to ban such landlord/monopoly ISP shenanigans back in 2006, the rules were poorly crafted. As a result, this stuff still routinely happens; it's just called...something else (Susan Crawford wrote the definitive piece on this for Wired a few years back).

For example, ISPs will still strike deals with landlords banning any other ISP from advertising in the building. Sometimes landlords will still block competitor access to buildings entirely. Or they'll charge building access fees that unfairly penalize smaller competitors that may not be able to afford them. Or, because the rules prohibit ISPs from blocking access to an ISP's in-building wiring, they'll just lease those building lines to the landlord, who'll then block access to competitors on behalf of the monopoly ISP (because technically the landlord now owns them). It's just noxious, weedy bullshit, and it's been going on for decades.

While the FCC has recently made a little noise about revisiting the subject, any policymaking there could take years to sluggishly materialize. Like most broadband reform, feckless federal leadership has driven reform to take place at a faster cadence on the local level. In Oakland, for example, the city council just voted to effectively eliminate all landlord/ISP anticompetitive shenanigans to encourage broadband competition: