Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-08-19 08:46
Daily Deal: The Dynamic 2021 DevOps Training Bundle
Most software companies today employ extensive DevOps staff, and DevOps engineers are in constant demand. In the Dynamic 2021 DevOps Training Bundle, Certs-School provides you with 5 courses to introduce you to the DevOps field, improve your skills, and eventually excel as an actual practitioner. You will be introduced to DevOps tools and methodologies, Git, CompTIA Cloud, Docker, and Ansible. Each course is self-paced so you can learn on your own time. It's on sale for $60.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Criminalizing Teens' Google Searches Is Just How The UK's Anti-Cybercrime Programs Roll
Governments sure seem to hate online advertisers and the platforms that profit from targeted advertising and tailored content algorithms. But they don't -- at least in this case -- have anything against engaging in exactly this sort of behavior if it helps them achieve their ends.

In 2015, the UK's National Crime Agency started a program called Cyber Choices, which was meant to steer young people away from becoming malicious hackers. Starting with the assumption that any form of hacking would ultimately result in malicious hacking, the NCA hoped to engage in interventions that would redirect this apparently unguided energy into something more productive and less harmful.

Thanks to its insistent belief that children are our (grimdark) future if left unattended, the NCA started making stupid assertions, like claiming that modding video games was the gateway drug for black hat hacking. To steer curious youngsters away from malicious hacking, the NCA got into the targeted advertising business.
Canon Sued For Disabling Printer Scanners When Devices Run Out Of Ink
For more than a decade now, computer printer manufacturers have been engaged in an endless quest best summed up as: "let's be as annoying as humanly possible." That quest, driven by a desire to monopolize and boost the sale of their own printer cartridges, has resulted in all manner of obnoxious DRM and other restrictions designed to make using cheaper, third-party printing cartridges a monumental headache. Often, software or firmware updates have been designed to intentionally grind printing to a halt if you try to use these alternative options.

Beyond that, there are other things printer manufacturers do that make even less sense if a happy customer is the end goal. Take, for example, Canon's history of disabling the scanning and faxing functionality on some printer models if the printer itself runs out of ink. It's a policy designed to speed up the rate at which users buy expensive cartridges (god forbid you go a few months just scanning things without adequate levels of magenta), and it exemplifies the outright hostility that plagues the sector.

And now Canon is facing a $5 million lawsuit (pdf) for its behavior. The lawsuit, filed in the District Court for the Eastern District of New York (first spotted by Bleeping Computer), claims Canon fails to adequately disclose the restrictions to consumers:
Copyright Law Discriminating Against The Blind Finally Struck Down By Court In South Africa
Most people would agree that those who are blind or visually impaired deserve all the help they can get. For example, the conversion of printed materials to accessible formats like Braille, large print, or Digitally Accessible Information System (DAISY) formats, ought to be easy. Who could possibly object? For years, many publishers did; and the reason – of course – is copyright. For example, publishers refused to allow Braille and other accessible editions to be shared between different countries:
LAPD Sees Your Reform Efforts, Raises You $20 Million In Bullets, Snacks, And Surveillance
The Los Angeles Police Department is reform-resistant. This isn't the same as reform-proof, but more separates "resistant" from "proof" in this case than in the misleading labels promising varying degrees of water resistance on watches and cellphones.

The LAPD has endured decades of bad press with barely an uptick in performance or community orientation. The LAPD is best known for beating minorities until riots happen. With a wave of police reform efforts sweeping the nation -- many of them looking to spend less on police violence and more on things that actually help the community -- the LAPD has issued a tone-deaf demand for more money to spend on the very things residents are complaining about.
Study Shows How Android Phones Still Track Users, Even When 'Opted Out'
We've frequently noted that what's often presented as "improved privacy" is usually privacy theater. For example, researchers just got done showing how Apple's heavily hyped "do not track" button doesn't actually do what it claims to do, and numerous apps can still collect a parade of different data points on users who believe they've opted out of such collection. And Apple's considered among the better companies when it comes to privacy promises.

Android is notably worse. One of my favorite privacy and adtech reporters is Shoshana Wodinsky, because she'll genuinely focus on the actual reality, not the promises. This week she wrote about how researchers at Trinity College in Dublin took a closer look at Android privacy, only to find that the term "opting out" often means absolutely nothing:
Court Tells Arkansas Troopers That Muting Anti-Cop Terms On Its Facebook Page Violates The 1st Amendment
When government entities use private companies to interact with the public, it can cause some confusion. Fortunately, this isn't a new problem with no court precedent and/or legal guidelines. For years, government agencies have been utilizing Twitter, Facebook, Instagram, etc. to get their message out to the public and (a bit less frequently) listen to their comments and complaints.

Platforms can moderate content posted to accounts and pages run by public entities without troubling the First Amendment. Government account holders can do the same thing, but the rules aren't exactly the same. There are limits to what content moderation they can engage in on their own. A case involving former president Donald Trump's blocking of critics resulted in an Appeals Court decision that said this was censorship -- a form of viewpoint discrimination that violated these citizens' First Amendment rights.

A decision [PDF] from a federal court in Arkansas arrives at the same conclusion, finding that a page run by local law enforcement engaged in unlawful viewpoint discrimination when it blocked a Facebook user and created its own blocklist of words to moderate comments on its page. (h/t Volokh Conspiracy)

This case actually went in front of a jury, which made a couple of key determinations on First and Fourth Amendment issues. The federal court takes it from there to make it clear what government agencies can and can't do when running official social media accounts.

Plaintiff James Tanner commented on the Arkansas State Police's Facebook page with a generic "this guy sucks" in response to news about the promotion of a state trooper. That post was removed -- then reinstated -- by the State Police. While that may have been a (temporary) First Amendment violation, the court says this act alone would not create a chilling effect, especially in light of the comment's reinstatement shortly after its deletion.

However, the State Police took more action after Tanner contacted the page via direct message with messages that were far more direct. In response to the State Police's threat to ban him if he used any more profanity in his comments, Tanner stated: "Go Fuck Yourself Facist Pig." For that private message -- seen by no one but Tanner and Captain Kennedy, who handled moderation of the State Police page -- Tanner was blocked. Kennedy compared blocking Tanner to "hanging up" on a rude caller.

The court disagrees. It's not quite the same thing. "Hanging up" on someone terminates a single conversation. What happened here was more analogous to subjecting Tanner to a restraining order that forbade him from speaking to state troopers or about them.
New Research Shows Social Media Doesn't Turn People Into Assholes (They Already Were), And Everyone's Wrong About Echo Chambers
We recently wrote about Joe Bernstein's excellent Harper's cover story, which argues that we're all looking at disinformation/misinformation the wrong way, and that the evidence of disinformation on social media really influencing people is greatly lacking. Instead, as Bernstein notes, this idea is one that many others are heavily invested in spreading, including Facebook (if the disinfo story is true, then you should buy ads on Facebook to influence people in other ways), the traditional media (social media is a competitor), and certain institutions with a history of having authority over "truth" (can't let the riffraff make up their own minds on things).

We've also seen other evidence pop up questioning the supposed malicious impact of social media. Yochai Benkler's work has shown that Fox News has way more of an impact on spreading false information than social media does.

And even with all this evidence regarding disinformation, there are also people who focus on attitude, and insist that social media is responsible for otherwise good people turning awful. Yet, as was covered in a fascinating On the Media interview with Professor Michael Bang Petersen, there really isn't much evidence to support that either! As Petersen explained in a useful Twitter thread, his research has shown that there's no real evidence to support the idea that social media turns people hostile. Instead, it shows that people who are assholes offline are also assholes online.

But in the interview, Petersen makes a really fascinating point regarding echo chambers. I've been skeptical about the idea of online echo chambers in the past, but Petersen says that people really have it all backwards -- we're actually much more likely to live in echo chambers offline than online, and we're much more likely to come across differing viewpoints online.
Daily Deal: The 2021 Complete Video Production Super Bundle
The 2021 Complete Video Production Super Bundle has 10 courses to help you learn all about video production. Aspiring filmmakers, YouTubers, bloggers, and business owners alike can find something to love about this in-depth video production bundle. Video content is fast changing from the marketing tool of the future to the marketing tool of the present, and here you'll learn how to make professional videos on any budget. From the absolute basics, to screenwriting, to the advanced shooting and lighting techniques of the pros, you'll be ready to start making high quality video content. You'll learn how to make amazing videos, whether you use a smartphone, webcam, DSLR, mirrorless, or professional camera. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Apple Gives Chinese Government What It Wants (Again); Pulls Quran App From Chinese App Store
Apple has generally been pretty good about protecting users from government overreach, its recent voluntary (and misguided) foray into client-side scanning of users' images notwithstanding. But that seemingly only applies here in the United States, which is going to continue to pose problems for Apple if it chooses to combat local overreach while giving foreign, far more censorial governments greater and greater control.

Like many other tech companies, Apple has no desire to lose access to one of the largest groups of potential customers in the world. Hence its deference to China, which has seen the company do things like pull the New York Times app in China following the government's obviously bullshit claim that the paper was a purveyor of "fake news."

Since then, Apple has developed an even closer relationship with the Chinese government, a relationship that culminated in the company opening data centers in China to comply with the government's mandate that all foreign companies store Chinese citizens' data locally, where it's much easier for the government to demand access.

On a smaller scale, Apple pulled another app -- one that encrypted text messages on platforms that don't provide their own encryption -- in response to government demands. Once again, Apple left Chinese citizens at the mercy of their government, apparently in exchange for the privilege of selling them devices that promised security and privacy while actually offering very little of either.

The latest acquiescence by Apple will help the Chinese government continue its oppression of the country's Uighur minority -- Muslims who have been subjected to religious persecution for years. Whoever the government doesn't cage, disappear, or genocide into nonexistence will see nothing but the bottom of a jackboot for years to come. Apple is aiding and abetting the jackboot, according to this report by the BBC.
Many Digital Divide 'Solutions' Make Privacy And Trust A Luxury Option
We've noted a few times how privacy is slowly but surely becoming a luxury good. Take low-cost cellular phones, for example. They may now be available for dirt cheap, but those devices are among the very first to treat consumer privacy and security as effectively unworthy of consideration at that price point. So at the same time we're patting ourselves on the back for "bridging the digital divide," we're creating a new paradigm whereby privacy and security are placed out of reach for those who can't afford them.

A similar scenario is playing out on the borrowed school laptop front. Lower-income students who need to borrow a school laptop to do their homework routinely find that the bargain comes with some monumental trade-offs; namely, zero expectation of privacy. Many of the laptops being used by lower-income students come with Securly, student-monitoring software that lets teachers see a student's laptop screen in real time and even close tabs if they discover a student is "off-task."

But again, it creates a dichotomy between students with the money for a laptop (innately trusted) and lower-income students who are inherently tracked and surveilled:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side is PaulT with a response to a complaint about vaccinations:
This Week In Techdirt History: October 10th - 16th
Five Years Ago

This week in 2016, everyone was abuzz about the infamous Trump Access Hollywood recording that had dropped the previous Friday, and we learned about how NBC had delayed a story about it for fear of getting sued — after all, Trump was tossing around legal threats to newspapers with wild abandon. At the same time, Charles Harder said he was no longer monitoring Gawker (though he was still sending takedown demands), but he was sending out a threat letter on behalf of Melania Trump. We also got some more details on the recent spate of bogus defamation lawsuits being used to block negative reviews.

Ten Years Ago

This week in 2011, German collection society GEMA was demanding fees for music it didn't hold the rights to, while the Pirate Party was continuing to build support, taking 9% of the vote nationwide in Germany. A Belgian court ordered the blocking of the wrong Pirate Bay domain, the UK government admitted it had no evidence to support its plans for draconian copyright law, and we wondered why PROTECT IP supporters couldn't just admit the bill was about censorship (while Yahoo was quietly dumping the US Chamber of Commerce over its extremist position on PROTECT IP).

Fifteen Years Ago

This week in 2006, the big rumors of the previous week became official when Google acquired YouTube for $1.65 billion in Google stock, which of course led to all kinds of varied opinions on the news and a renewed interest from entertainment companies in threatening to sue... and/or negotiate. Anti-video-game crusader Jack Thompson somehow convinced a judge that he should get to see the entirety of the game Bully before it was released, only to have his hopes of declaring it a public nuisance quickly dashed. We were shocked to see a Disney executive actually admit that piracy is competition, baffled to hear a Sony Pictures UK executive claim that getting rid of release windows was "not technically possible", and amused to see the Christian music industry start making a fuss about piracy as a moral issue.
Trader Joe's Threatens Man Over Parody 'Traitor Joe' Political T-Shirt
The last time we found niche grocery chain Trader Joe's playing intellectual property bully, it was over an enterprising Canadian man who drove across the border, bought a bunch of good stuff from Trader Joe's, and then resold it at his Canadian store called "Pirate Joe's". While that whole setup is entertaining, Trader Joe's sued for trademark infringement in the United States, which made zero sense: the store was in Canada, not the States; reselling purchased items is not trademark infringement; and Trader Joe's was free to open up Canadian stores if it chose.

Fast forward to the present, and Trader Joe's is trying to stretch trademark law yet again, this time to go after one man's website that is selling parody t-shirts with a picture of Joe Biden and the moniker "Traitor Joe", all mocked up to look like the store logo. Trader Joe's sent a threat letter to the man, Dan McCall, who is represented by friend of the site Paul Alan Levy.
Study Says Official Count Of Police Killings Is More Than 50% Lower Than The Actual Number
In 2019, the FBI claimed to be compiling the first-ever database of police use of force, including killings of citizens by officers. It was, of course, not the first-ever database of police killings. Multiple databases had been created (some abandoned) prior to this self-congratulatory announcement to track killings by police officers.

What this database would have, however, is information on use of force, which most private databases didn't track. Whether or not it actually does contain this info is difficult to assess, since the FBI's effort does not compile these reports in any easily accessible manner, nor does it provide readable breakdowns of the data -- something it does for other things, like crimes against police officers.

It also does not have the participation of every law enforcement agency in the nation, which prevents the FBI from collecting all relevant information. It's also voluntary, so even participating agencies are free to withhold incident reports, keeping their own official use-of-force/killing numbers lower than what they actually may be.

The problem with underreporting traces back decades, though. The official count of police killings has always been lower than data compiled by non-government databases, which rely almost solely on open-source information like news reports. It would seem the numbers reported by the FBI should be higher, since it theoretically has access to more info, but the FBI's count has repeatedly been lower than outside reporting.

A recent study published by The Lancet says the official numbers are wrong. And they're off by a lot. Utilizing outside databases compiled by private citizens/entities and data obtained from the USA National Vital Statistics System (NVSS), the researchers have reached the conclusion that law enforcement self-reporting has resulted in undercounting the number of killings by officers by thousands over the past four decades.
The Surveillance And Privacy Concerns Of The Infrastructure Bill's Impaired Driving Sensors
There is no doubt that many folks trying to come up with ways to reduce impaired driving and make the roads safer have the best of intentions. And yet, hidden within those intentions can linger some pretty dangerous consequences. For reasons that are not entirely clear to me, the giant infrastructure bill (which will apparently be negotiated forever) includes a mandate that automakers would eventually need to build in technology that monitors whether or not drivers are impaired. It's buried deep in the bill (see page 1066), but the key bit is:
GOP Very Excited To Be Handed An FCC Voting Majority By Joe Biden
Consumer groups have grown all-too-politely annoyed at the Biden administration's failure to pick a third Democratic Commissioner and a permanent FCC boss nearly eight months into the President's term. After the rushed Trump appointment of unqualified Trump ally Nathan Simington to the agency (as part of that dumb and now-deceased plan to have the FCC regulate social media), the agency now sits gridlocked at 2-2 commissioners under interim FCC head Jessica Rosenworcel.

While the FCC can still putter along tackling its usual work on spectrum and device management, the gridlock means it can't do much of anything controversial, like reversing Trump-era attacks on basic telecom consumer protections, media consolidation rules, or the FCC's authority to hold telecom giants accountable for much of, well, anything. If you're a telecom giant like AT&T or Comcast, that's the gift that just keeps on giving.

More interesting, perhaps, is the fact that interim FCC boss Jessica Rosenworcel, whose term expires at the end of the year, hasn't had her term renewed either. That means there's an increasingly real chance the GOP enjoys a 2-1 voting majority at Biden's FCC in the new year:
Plagiarism By Techdirt: Our Plagiarized NFT Collection Can Now Actually Be Bid On
Place your bids on the Plagiarism NFT Collection by Techdirt »

A few weeks ago, we wrote about our latest experiment with NFTs (which is part of the research we're doing into NFTs for a deep dive paper I'm working on). There's a very long explanation of the NFTs in question and why we're plagiarizing Prof. Brian Frye (but making them much, much cooler). But, after we posted that, we discovered one little problem. The platform we were using, OpenSea (the most popular and user-friendly NFT marketplace)... didn't work. At least not for us. We've spent three weeks asking OpenSea to fix things, and last night they finally figured out the problem, so you can now (finally) actually bid in the open auction for our plagiarized set of NFTs about plagiarism.

There are tons of reasons to back them -- some good, some less good -- but at the very least, it will help support Techdirt, it will show that culture works by building on those who came before, not by locking up content, and it will let you experiment with NFTs if you haven't already. Also, it'll let you show how maybe people shouldn't freak out over plagiarism all the time -- and when else do you have a chance to do that?

The entire collection can be seen here, and the NFTs do look amazing, if I do say so myself.
Daily Deal: The Complete 2022 Microsoft Office Master Class Bundle
The Complete 2022 Microsoft Office Master Class Bundle has 14 courses to help you learn all you need to know about MS Office products to help boost your productivity. Courses cover SharePoint, Word, Excel, Access, Outlook, Teams, and more. The bundle is on sale for $75.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Clearview Celebrates 10 Billion Scraped Images Collected, Claims It Can Now Recognize Blurred, Masked Faces
Clearview's not going to let several months of bad press derail its plans to generate even more negative press. The facial recognition tech company that relies on billions of scraped images from the web to create its product is currently being sued in multiple states, has had its claims about investigative effectiveness repeatedly debunked, and, most recently, served (then rescinded) a subpoena to transparency advocacy group Open the Government demanding information on all its Clearview-related FOIA requests as well as its communications with journalists.

I don't know what Clearview is doing now. Maybe it thinks it can still win hearts and minds by not only continuing to exist but also by getting progressively worse in terms of integrity and corporate responsibility. Whatever it is that Clearview's doing to salvage its reputation looks to be, at best, counterproductive. I mean, the only way Clearview could get worse is by getting bigger, which is exactly what it's done, according to this report by Will Knight for Wired.
Journalists In St. Louis Discover State Agency Is Revealing Teacher Social Security Numbers; Governor Vows To Prosecute Journalists As Hackers
Last Friday, Missouri's Chief Information Security Officer Stephen Meyer stepped down after 21 years working for the state to go into the private sector. His timing is noteworthy, because it seems like Missouri could really use someone in its government who understands basic cybersecurity right now.

We've seen plenty of stupid stories over the years about people who alert authorities to security vulnerabilities then get threatened for hacking, but this story may be the most ridiculous one we've seen. Journalists for the St. Louis Post-Dispatch discovered a pretty embarrassing leak of private information for teachers and school administrators. The state's Department of Elementary and Secondary Education (DESE) website included a flaw that allowed the journalists to find social security numbers of the teachers and administrators:
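As widely reported, the "flaw" amounted to sensitive data being embedded in the pages' HTML, visible to anyone who viewed the page source. For anyone wondering why this isn't "hacking," here's a minimal sketch of that class of bug; the URL and the SSN pattern are purely illustrative, not DESE's actual site or markup:

```python
# Sketch: data served in a page's raw HTML is visible to every visitor,
# even if the page's layout never displays it on screen.
import re

import requests


def find_ssns_in_source(url: str) -> list[str]:
    """Fetch a page and scan its raw HTML for SSN-shaped strings."""
    html = requests.get(url, timeout=10).text
    # Anything matching NNN-NN-NNNN in the source is exposed to anyone
    # who opens "view source" in their browser -- no exploit required.
    return re.findall(r"\b\d{3}-\d{2}-\d{4}\b", html)


if __name__ == "__main__":
    # Hypothetical URL, for illustration only.
    for ssn in find_ssns_in_source("https://example.gov/educator-lookup"):
        print("exposed in page source:", ssn)
```

The point of the sketch: if the server sends the data, the reader already has it. Reporting that is disclosure, not intrusion.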
Billy Mitchell Survives Anti-SLAPP Motion From Twin Galaxies A Second Time
The Billy Mitchell and Twin Galaxies saga rolls on, it seems. Mitchell has made it onto our pages several times in the past, most recently over a lawsuit filed against gaming record keeper Twin Galaxies over its decision to un-award his high score record for Donkey Kong on allegations that he achieved it on an emulator instead of an official cabinet. The suit is for defamation, and Twin Galaxies initially tried to get the case tossed on anti-SLAPP grounds, but the court denied that request under the notion that Mitchell only has to show "minimal merit" in the overall case to defeat the anti-SLAPP motion.

And now, on appeal, California's Second Appellate District has affirmed that ruling, again on "minimal merit" grounds. You can read the entire ruling embedded below, though I warn you that there are many pages dedicated to the back and forth between Mitchell and Twin Galaxies over a video game record, so you may come away with sore eyebrows from rolling your eyes so hard at all of this. There is also a metric ton of context as to how the court is supposed to apply the anti-SLAPP statute. Go nerd out if you like, but the whole ruling boils down to this:
Content Moderation Case Study: Tumblr's Approach To Adult Content (2013)
Summary: There are unique challenges in handling adult content on a website, whether it's an outright ban, selectively allowed, cordoned off under content warnings, or (in some cases) actively encouraged. Tumblr's early approach to dealing with adult content on its site is an interesting illustration of how user tagging and a site's own tools interact.

Tumblr was launched in 2007 as a simple "blogging" platform that was quick and easy to set up, but that would allow users to customize it however they wanted and use their own domain names. One key feature that set Tumblr apart from other blogs was an early version of social networking features — such as the ability to "follow" other users and then see a feed of those users you followed. While some of this was possible via early RSS readers, it was both technologically clunky and lacked the social aspect of knowing who was following you or being able to see both followers and followees of accounts you liked. Tumblr was also an early pioneer in reblogging — allowing another user to repost your content with additional commentary.

Because of this more social nature, Tumblr grew quickly among certain communities. This included communities focused on adult content. In 2013, it was reported that 11.4% of Tumblr's top domains were for adult content. In May of 2013, Yahoo bought Tumblr for $1.1 billion, with an explicit promise not to "screw it up." Many people raised concerns about how Yahoo would handle the amount of adult content on the site, but Tumblr's founder, David Karp, insisted that they had no intention of limiting such content.
Court Says Google Translate Isn't Reliable Enough To Determine Consent For A Search
The quickest way to a warrantless search is obtaining consent. But consent obtained by officers isn't always consent, no matter how it's portrayed in police reports and court testimony. Courts have sometimes pointed this out, stripping away ill-gotten search gains when consent turned out to be [extremely air quotation marks] "consent."

Such is the case in this court decision, brought to our attention by FourthAmendment.com. Language barriers are a thing, and it falls on officers of the law to ensure that those they're speaking with clearly understand what they're saying, especially when it comes to actions directly involving their rights.

It all starts with a stop. A pretextual one at that, as you can see from the narrative recounted by the court.
University Of Hong Kong Wants To Remove A Sculpture Commemorating Tiananmen; To Preserve It, People Have Crowdsourced A Digital 3D Replica
As Techdirt has chronicled, the political situation in Hong Kong becomes worse by the day, as the big panda in Beijing embraces a region whose particular freedoms were supposed to be guaranteed for another 25 years at least. One manifestation of the increasing authoritarianism in Hong Kong is growing censorship. The latest battle is over a sculpture commemorating pro-democracy protesters killed during China's 1989 crackdown in Tiananmen Square, on display at the University of Hong Kong. The South China Morning Post reports:
Prosecutors Drop Criminal Charges Against Fake Terrorist Who Duped Canadian Gov't, NYT Podcasters
For a couple of years, a prominent terrorist remained untouched by Canadian law enforcement. Abu Huzayfah claimed to have traveled to Syria in 2014 to join the Islamic State. A series of Instagram posts detailed his violent acts, as did a prominent, Peabody Award-winning New York Times podcast, "Caliphate."

But Abu Huzayfah, ISIS killer, never existed, something the Royal Canadian Mounted Police verified a year before the podcast began. Despite that, Ontario resident Shehroze Chaudhry -- who fabricated tales of ISIS terrorist acts -- remained a concern for law enforcement and Canadian government officials, who believed his alter ego was not only real, but roaming the streets of Toronto.

All of this coalesced into Chaudhry's arrest for the crime of pretending to be a terrorist. Chaudhry was charged with violating the "terrorism hoax" law, which is a real thing, even though it's rarely used. Government prosecutors indicated they intended to argue that Chaudhry's online fakery caused real-world damage, including the waste of law enforcement resources and the unquantifiable public fear that Ontario housed a dangerous terrorist.

Chaudhry was facing a possible sentence of five years in prison, which seems harsh for online bullshit, but is far, far less than charges of actual terrorism would bring. But it appears everything has settled down a bit, and the hoaxer won't be going to jail for abusing the credulity of others, a list that includes Canadian government officials and New York Times podcasters.
Broadband Data Caps Mysteriously Disappear When Competition Comes Knocking
We've noted for years how broadband data caps (and monthly overage fees) are complete bullshit. They serve absolutely no technical function, and despite years of ISPs trying to claim they "help manage network congestion," that's never been remotely true. Instead, they exist exclusively as a byproduct of limited competition. They're a glorified price hike by regional monopolies that know they'll see little (or no!) competitive or regulatory pressure to stop nickel-and-diming captive customers.

The latest case in point: Cox Communications imposes a 1,280 GB data cap which, if you exceed it, requires you to either pay $30 per month more for an additional 500 GB, or upgrade your plan to an unlimited data offering for $50 more per month. While Cox's terabyte-plus cap is more generous than some U.S. offerings (which can be as low as a few gigabytes), getting caught up in whether the cap is "fair" is beside the point. Because, again, it serves absolutely no function other than to impose arbitrary penalties and additional monthly costs for crossing technically unnecessary boundaries.

And, mysteriously, when wireless broadband providers begin offering fixed wireless service over 5G networks in limited areas, Cox lifts the restrictions completely to compete:
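For a sense of the arithmetic at play, here's a quick back-of-the-envelope sketch using the numbers above; it's an illustration of the pricing as described, not Cox's actual billing logic:

```python
import math

# Pricing as described above (illustrative only).
CAP_GB = 1280           # monthly data cap
OVERAGE_BLOCK_GB = 500  # size of each additional data block
OVERAGE_BLOCK_USD = 30  # price per additional block
UNLIMITED_USD = 50      # monthly surcharge for unlimited data


def overage_cost(usage_gb: float) -> int:
    """Monthly overage charge if you pay per 500 GB block."""
    over = max(0.0, usage_gb - CAP_GB)
    return math.ceil(over / OVERAGE_BLOCK_GB) * OVERAGE_BLOCK_USD


for usage in (1200, 1500, 2100, 2800):
    blocks = overage_cost(usage)
    cheaper = "unlimited" if blocks > UNLIMITED_USD else "per-block"
    print(f"{usage} GB used: ${blocks} in blocks vs ${UNLIMITED_USD} unlimited -> {cheaper}")
```

The takeaway: the moment a subscriber needs more than two overage blocks, the "unlimited" surcharge becomes the cheaper option. That's an upsell funnel, not congestion management.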
Public Backlash Leads Tulsa Park To Stop Bullying Coffee Shop Over Trademark
A good public outcry and backlash can lead to many, many good things. We see it here at Techdirt all the time, particularly when it comes to aggressive bullying episodes over intellectual property. Some person or company will try to play IP bully against some victim, the public gets wind of it and throws a fit, and suddenly the supposed necessity of the IP action goes away. Retailers, manufacturers, breweries: public outcry is a great way to end ridiculous legal actions.

A recent example of this comes out of Tulsa, OK, where a riverside park, of all places, decided it had to sue a coffee shop over a similar, if fairly generic, name. Gathering Place is a park in Tulsa, a... place... where people... you know... gather. The Gathering Place is a coffee shop in Shawnee, 90 miles from Tulsa, where people get coffee and, I imagine, occasionally gather. But despite any gathering similarities, coffee shops are not parks, and 90 miles is a fairly long way away. Which makes a lawsuit over trademark infringement brought by the park very, very strange.
LinkedIn Caves Again, Blocks US Journalists' Accounts In China
LinkedIn -- the business-oriented social media platform owned by Microsoft -- has spent the last few years increasing its compliance with the Chinese government's demands for censorship. A couple of years back, the network drew heat for not only blocking accounts of Chinese pro-democracy activists, but also those of critics of the government located elsewhere in the world.

The blocking only occurred in China, but that was enough to cause PR trouble for LinkedIn, which restored some of the accounts following some deserved backlash. The Chinese government didn't care much for LinkedIn's temporary capitulations, so it turned up the heat. After LinkedIn failed to block enough content, the Chinese government ordered the company's local office to perform a self-audit and report its findings to the country's internet regulator. LinkedIn was also blocked from signing up any new Chinese users for 30 days.

The pressure appears to have worked. China is again asking for censorship of voices it doesn't like. And, again, LinkedIn is complying. Here's the report from Bethany Allen-Ebrahimian of Axios, who was one of those targeted by the latest round of account blocking.
Prudish Mastercard About To Make Life Difficult For Tons Of Websites
For all the attention that OnlyFans got for its short-lived plan to ban sexually explicit content in response to "pressures" from financial partners, as we've discussed, it was hardly the only website to face such moderation pressures from financial intermediaries. You can easily find articles from years back highlighting how payment processors were getting deeply involved in forcing websites to moderate content.

And the OnlyFans situation wasn't entirely out of nowhere, either. Back in April we noted that Mastercard had announced new rules for streaming sites, and other sites, such as Patreon, have already adjusted their policies to comply with Mastercard's somewhat prudish values.

However, as those rules announced months ago are set to become official in a few days, the practical realities of what Mastercard requires are becoming clear, and it's a total mess. Websites have received "compliance packages" in which they have to set up a page to allow reports of potential abuse. In theory, this sounds reasonable -- if there really is dangerous or illegal activity happening on a site, making it easier for people to report it makes sense. But some of it is highly questionable:
In Latest Black Eye For NSO Group, Dubai's King Found To Have Used NSO Spyware To Hack His Ex-Wife's Phone
NSO Group has endured some particularly bad press lately, what with leaked data pointing to its customers' targeting of journalists, political figures, religious leaders, and dissidents. That its powerful spyware would be abused by its customers was not surprising. Neither were the findings from the leaked data, which only confirmed what was already known.

Despite this, NSO continues to make contradictory claims. First, it says it has no control over (or visibility into) how its customers use its products -- customers that include some notorious abusers of human rights. Second, it says that it cuts off customers who abuse its products to target people who merely annoy their governments, rather than directly threaten them with criminal or terrorist acts.

Well, it's either one or the other. And if NSO is waiting for secondhand reports about abusive deployments before acting, it really shouldn't be in the intel business. If NSO wants to stay above the fray, it could start by being a lot more selective about who it sells to.

If you're not selective, your customers will not only pettily target people the government doesn't like (critics, activists, journalists, dissidents) but will move on to the extreme pettiness of targeting people certain government officials don't like.

This latest nadir for NSO Group comes courtesy of court proceedings, which illustrate the danger of putting powerful cellphone exploits in the hands of the wrong people.
Facebook's Nick Clegg Makes It Clear: If You're Looking To Undermine Section 230, That's EXACTLY What Facebook Wants
Facebook policy dude/failed UK politician Nick Clegg has written an op-ed for USA Today confirming what has been obvious to everyone who understands Section 230, but (for reasons I don't quite understand) seems obscured from basically every politician out there: Facebook wants to destroy Section 230. And it's practically giddy that politicians are so eager to grant its wish, while pretending that doing so will somehow hurt Facebook.

It remains absolutely bizarre to me that many people still believe that getting rid of Section 230 (or even reforming it) is a way to "stop" or "hurt" Facebook. Section 230 is a protection for the users of the internet more than it is for the companies. By making it clear that companies are not liable for user speech, it makes more websites willing to host user speech, especially smaller ones that could easily be sued out of existence. Indeed, over the last couple of years, it's become clear that Facebook desperately wants to kill Section 230 because it knows that it alone has enough money to handle the liability, and removing Section 230 will really only burden the startups that threaten to take users away from Facebook.

A year and a half ago, Mark Zuckerberg made it clear that he was cool with getting rid of Section 230. Earlier this year, he suggested a "reform proposal" that effectively gutted 230 in extremely anti-competitive ways. And for months now, Facebook has blanketed DC (and elsewhere, but mostly DC) with commercials and ads that don't say "Section 230," but refer obliquely to "comprehensive internet regulations" passed in "1996." That's Section 230 they're talking about.

This is why it's so ridiculous that the takeaway of some people from the Facebook whistleblower last week was that Section 230 needs to change. That's exactly what Facebook wants, because it will cement Facebook's dominant position and make it that much more difficult for competitors to emerge or succeed.

Here's what Clegg had to say:
Daily Deal: The 2022 Adobe Creative Cloud Training Bundle
The 2022 Adobe Creative Cloud Training Bundle has 7 courses to help you learn all about Adobe's creative software suite. You'll learn how to design apps in Adobe XD, how to master Photoshop, how to use animation in After Effects, how to design projects in Illustrator, and more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Google, Amazon, And Microsoft Are Using Third Party Companies To Sell Surveillance Tech To ICE, CBP
A few years ago, tech companies stood up to the US government, issuing statements objecting to immigration policies instituted by the Trump Administration and, in some cases, threatening to pull contracts with ICE (Immigration and Customs Enforcement) and CBP (Customs and Border Protection).

It wasn't much of a stand, however. And whatever statements were issued by companies like Google, Microsoft, and Amazon were mainly prompted by hundreds of employees who wished to work for companies that didn't aid and abet civil liberties violations and the ongoing mistreatment of immigrants and their families.

Whatever statements came out of the front end of these companies haven't been matched by the back end. According to a new report by Caroline Haskins for Business Insider, Google, Microsoft, and Amazon are still selling plenty of tech and software to ICE and CBP. They're just getting better at hiding it. (Alt link)
Charter Spectrum Threatens To Ruin Potential Customers Over Debt They Don't Owe
There's a reason U.S. cable and broadband companies have some of the worst customer satisfaction ratings of any companies, in any industry, in America. The one-two punch of lagging broadband competition and captured regulators generally means there's little to no meaningful penalty for overcharging users, providing lackluster service and support, and generally just being an obnoxious ass.

Case in point: a new Charter (which operates under the Spectrum brand) marketing effort apparently involves threatening to ruin the credit scores of ex-customers unless they re-subscribe to the company's services. It begins with a letter threatening ex-users that they'll be reported to debt collectors unless they sign up for service. It proceeds to inform them the letter is a "one-time courtesy" allowing them to sign up for cable or broadband service before the debt collector comes calling:
Twitch, Others, Ban Amouranth Yet Again, Once Again With Zero Transparency
Regular readers here will by now likely be familiar with Twitch streamer "Amouranth". She has made it onto our pages as part of the year-long mess that Amazon's Twitch platform appears to be making for itself, during which it has demonstrated its willingness both to treat its creative community quite poorly and to fail to properly communicate that poor treatment to much of anyone at all. For instance, Twitch has temporarily banned or kept Amouranth from live-streaming several times, all likely due to the content of her streams. That content seems nearly perfectly designed to poke the line on Twitch's streaming guidelines, including so-called "hot tub streams" and ASMR streams. Twitch has never been great about explaining the reasons for bans like these, but in the past it has at least linked to the offending content so that a streamer knows which videos were objectionable. With some streamers, including Amouranth, Twitch often doesn't even bother doing that, such as when it demonetized Amouranth's videos without warning or explanation.

So, while Twitch, quite frankly, now has far, far bigger issues on its hands, it's worth pointing out that Twitch has yet again banned Amouranth without warning or explanation. Though it appears this time Twitch has some friends tagging along: Instagram and TikTok.
Alabama Supreme Court Rules Law Enforcement Can Withhold Almost All Records Indefinitely
Here's what you need to know about Alabama and its public records laws before we head to a depressing state Supreme Court opinion that makes everything worse:
LA Sheriff's Handpicked 'Public Integrity Unit' Doing Little More Than Harassing And Intimidating The Department's Critics
The Los Angeles Sheriff's Department is apparently incapable of being reformed. Over the years, the LASD has run an illegal prison informant program, one that culminated in an FBI investigation during which the LASD threatened FBI agents and federal witnesses.

But what can one really expect from an agency willing to staff itself with statutory rapists, thieves, and cops considered unhireable anywhere else? The department is so internally corroded it has become home to gangs and cliques of rogue officers who revel in deploying excessive force and violating rights.

The only thing that can bring the LASD down is its critics and its oversight. The Department knows this, and that's why it's taking action to clean itself up. Oh wait, it's the other thing.
Facebook Banning & Threatening People For Making Facebook Better Is Everything That's Wrong With Facebook
Regular readers know that I'm a believer in trying to get the big internet companies to embrace a more protocols-over-platforms approach, in which they're building something that others can then build on as well, and improve in their own ways (without fear of having the rug pulled out from under them). It's why I'm hopeful about Twitter working on just such a plan with its Bluesky project. Facebook, unfortunately, takes a very different view of the world.

I understand that some of Facebook's thinking around this is a reaction to what happened when it created a more open platform for developers... and then Cambridge Analytica happened, which has been an ongoing (if somewhat confusingly understood) black eye for the company. But Facebook has always been a bit skittish about how open it wants to be. Famously, it killed Power.com with an unfortunate reading of the CFAA when that company tried to create a universal login for various social media sites, to help people not be locked in to just one social media site.

But the latest example is really horrible. Louis Barclay has a write-up in Slate about how Facebook banned him for life and threatened him with a lawsuit, because he created a tool to make everyone's Facebook experience better (though less profitable for Facebook). The tool actually sounds quite nifty:
Why Section 230 'Reform' Effectively Means Section 230 Repeal
Some lawmakers are candid about their desire to repeal Section 230 entirely. Others, however, express more of an interest in trying to split this baby and "reform" it in some way to somehow magically fix all the problems with the Internet, without doing away with the whole thing and therefore the whole Internet as well. This post explores several of the ways they propose to change the statute, ostensibly without outright repealing it. And it explores several of the reasons why each proposed change might as well be an outright repeal, given its practical effect.

But before getting into the specifics of why each type of change is bad, it is important to recognize the big reason why just about every proposal to change Section 230, even just a little bit, undermines it to the point of uselessness: if you have to litigate whether Section 230 applies to you, you might as well not have it on the books in the first place. Which is why there's really no such thing as a small change: if your change in any way puts that protection in doubt, it has the same debilitating effect on online platform services as an actual repeal would have.

This is a key point we keep coming back to, including in suggesting that Section 230 operates more as a rule of civil procedure than any sort of affirmative subsidy (as it is often mistakenly accused of being). Section 230 does not do much that the First Amendment would not itself do to protect platforms. But the crippling expense of having to assert one's First Amendment rights in court, potentially at an unimaginable scale given all the user-generated content Internet platforms facilitate, means that this First Amendment protection is functionally illusory if there's no mechanism to get platforms out of litigation early and cheaply. It is the job of Section 230 to make sure they can, and that they won't have to worry about being bled dry in legal costs defending themselves even where, legally, they have a defense.

Without Section 230, their only choice would be to not engage in the activity that Section 230 explicitly encourages: intermediating third-party content, and moderating it. If they don't moderate it, their services may become a cesspool; but if the choice they face is either to moderate or to potentially be bankrupted in litigation (or even, as in the case of FOSTA, potentially prosecuted), then they won't. And as for intermediating content, if they can get into legal trouble for allowing the wrong content, then they will either host less user-generated content or not be in the business of hosting any user content at all. Because if they don't make these choices, they set themselves up to be crushed by litigation.

Which is why it is not even the issue of ultimate liability that makes lawsuits such an existential threat to an Internet platform. It's just as bad if the lawsuit that crushes them is over whether they were entitled to the statutory liability protection needed to avoid the lawsuit entirely. And we know lawsuits can have that annihilating effect when platforms are forced to litigate these questions. One conspicuous example is Veoh Networks, a video-hosting service that today should still be a competitor to YouTube. But it isn't a competitor, because it is no longer a going concern. It was obliterated by the costs of defending its entitlement to assert the more conditional DMCA safe harbor defense, even though it won! The Ninth Circuit found the platform should have been protected.
But by then it was too late; the company had been run out of business, and YouTube lost a competitor that, today, the marketplace still misses. It would therefore be foolhardy, and antithetical to lawmakers' professed interest in a diverse ecosystem of Internet services, for them to do anything to make Section 230 similarly conditional, thereby risking even further market consolidation than we already have. But that's the terrible future that all these proposals tempt.

More specifically, here's why each type of proposal is so infirm:

Liability carve-outs. One way lawmakers propose to change Section 230 is to deny its protection to specific forms of liability that may arise in user content. A variety of these liability carve-outs have been proposed, and all require further scrutiny. For instance, one carve-out popular with lawmakers is trying to make Section 230 useless against claims of liability for posts that allegedly violate anti-discrimination laws. But while on first glance such a carve-out may seem innocuous, we know that it's not. One reason is that people eager to discriminate have shown themselves keen to try to force platforms to help them do it, including by claiming that anti-discrimination laws protect their own efforts to discriminate. So far they have largely been unable to conscript platforms into enabling their hate, but if Section 230 no longer protects platforms from these forms of liability, then racists will finally be able to succeed by exploiting that gap.

These carve-outs also run the risk of making it harder for people who have been discriminated against to find a place to speak out about it, since platforms will be less willing to offer space to speech that they might find themselves forced to defend; even if the speech were defensible, just having to answer for it can be ruinous for the platform. We know that platforms will feel forced to turn away all sorts of worthy and lawful speech if that's what they need to do to protect themselves, because we've seen this dynamic play out as a result of the few carve-outs Section 230 has had from the start. For example, if the thing wrong with the user expression was that it implicated an intellectual property right, then Section 230 didn't protect the platform from liability in its users' content. Now, it turns out that platforms have some liability protection via the DMCA, but this protection is weaker and more conditional than Section 230, which is why we see all the swiss cheese online, with videos and other content so often removed -- even in cases when they were not actually infringing -- because taking it down is the only way platforms can avoid trouble and not run the risk of going the way of Veoh Networks themselves.

Such an outcome is not good for encouraging free expression online, which was a main driver behind passing Section 230 originally, and it isn't even good for the people these carve-outs were ostensibly intended to help, as we saw with FOSTA, a liability carve-out more recently added. With FOSTA, instead of protecting people from sexual exploitation, it led to platforms taking away their platform access, which drove them into the streets, where they got hurt or killed.
And, of course, it also led to other perfectly lawful content disappearing from the Internet, like online dating and massage therapy ads, since FOSTA had made it impossibly risky for platforms to continue to facilitate it. It's already a big problem that there are even just these liability carve-outs. If Section 230 were to be changed in any way, it should be changed to remove them. But in any case, we certainly shouldn't be making any more if Section 230 is to maintain any utility in protecting the platforms we need to facilitate online user expression.

Transactional speech carve-outs. As described above, one way lawmakers are proposing to change Section 230 is to carve out certain types of liability that might attach to user-generated content. Another way is to try to carve out certain types of user expression itself. And one specific type of user expression in lawmakers' crosshairs (and also some courts') is transactional speech.

The problem with this invented exception to Section 230 is that transactional speech is still speech. "I have a home to rent" is speech, regardless of whether it appears on a specialized platform that only hosts such offers or on a more general-purpose platform like Craigslist or even Twitter, where such posts are just some of the kinds of user expression enabled.

Lawmakers seem to be getting befuddled by the fact that some of the more specialized platforms may earn their money through a share of any consummated transaction their users' expression might lead to, as if this form of monetization were somehow meaningfully distinct from any other monetization model, or somehow waived their First Amendment right to do what basically amounts to moderating speech to the point where it is the only type of user content they allow. And it is this apparent befuddlement that has led lawmakers to attempt to tie Section 230 protection to certain monetization models, and to go so far as to eliminate it for certain ones.

Even if these proposals were carefully drafted, they would only end up chilling e-commerce by forcing platforms to use less-viable monetization models. But what's worse is that the current proposals are not being carefully drafted, and so we see bills threatening the Section 230 protection of any platform with any sort of profit model. Which, naturally, they all need to have in some way. After all, even non-profit platforms need some sort of income stream to keep the lights on, but proposals like these threaten to make it all but impossible to have the money needed for any platform to operate.

Mandatory transparency report demands. As we've discussed before, it's good for platforms to try to be candid about their moderation decisions, and especially about what pressures forced them to make those decisions, like subpoenas and takedown demands, because it helps highlight when these instruments are being abused. Such reports are therefore a good thing to encourage.

But encouragement is one thing; requiring them is another, and that's what certain proposals try to do in conditioning Section 230 protection on the publication of these reports. They are all a problem. Making transparency reports mandatory is an unconstitutional form of compelled speech. Platforms have the First Amendment right to be arbitrary in their moderation practices. We may prefer them to make more reasoned and principled decisions, but it is their right not to. But they can't enjoy that right if they are forced to explain every decision they've made.
Even if they wanted to, it may be impossible, because content moderation happens at scale, which inherently means it will never be perfect. Full transparency may also be ill-advised, because it teaches bad actors how to game platforms' moderation systems.
Obviously a platform could still refuse to produce the reports these bills would prescribe. But if that decision risks the statutory protection the platform depends on to survive, then it is not really much of a decision. The platform finds itself compelled to speak in the way the government requires, which is not constitutional. It also ends up impinging on the freedom to moderate, which both the First Amendment and Section 230 itself protect.
Mandatory moderation demands. But it isn't just transparency in moderation decisions that lawmakers want. Some legislators are running straight into the heart of the First Amendment and demanding that they get to dictate how platforms do their moderation at all, by conditioning Section 230 protection on platforms making those decisions the way the government insists.
These proposals tend to come in two political flavors. While they are generally utterly irreconcilable – no platform could satisfy both of them at the same time – they each boil down to the same unconstitutional demand.
Some of these proposals reflect legislative outrage at platforms for some of the moderation decisions they've made. Usually they condemn platforms for having removed certain speech or even banned certain speakers, regardless of how badly those speakers behaved or how harmful the things they said. This condemnation leads lawmakers who favor these speakers and their speech to want to take away the platforms' right to make these sorts of moderation decisions by, again, conditioning Section 230 on their continuing to leave these speakers and their speech up. The goal of these proposals is to create a situation where it is impossible for platforms to exercise their First Amendment discretion to moderate, and possibly remove, that expression, lest they lose the protection they depend on to exist. That is not only unconstitutional compulsion; it also voids the part of Section 230 that expressly protects that discretion, since it becomes discretion platforms can no longer exercise.
On the flip side, instead of conditioning Section 230 on not removing speakers or speech, other lawmakers would like to condition it on requiring platforms to kick off certain speakers and speech (sometimes even the same ones the other proposals are trying to keep up). This is just as bad, for all the same reasons. Platforms have the constitutional right to make these moderation choices however they choose, and the government does not have the right, per the First Amendment, to force them to make those choices in any particular way. But if their critical Section 230 protection can be taken away whenever they don't moderate however the sitting political power demands, then that right has been impinged upon and Section 230 rendered a nullity.
Algorithmic display carve-outs. Algorithmic display has become a target for many lawmakers eager to take a run at Section 230. But as with every other proposed reform, changing Section 230 so that it no longer applies to platforms using algorithmic display would end up obliterating the statute for just about everyone.
And it's not at all clear that lawmakers proposing these sorts of changes realize this inevitable impact.
Part of the problem seems to be that they don't really understand what an algorithm is, or how commonly algorithms are used. They seem to regard algorithms as something nefarious, but there's nothing about an algorithm that inherently is. The reality is that nearly every platform uses software in some way to handle the display of user-provided content, and algorithms are just the programming logic coded into that software giving it the instructions for how to display the content. These instructions can be as simple as telling the software to display content chronologically, alphabetically, or in some other way the platform has decided is relevant – a decision the First Amendment protects. After all, a bookstore can decide to shelve books however it wants, including in whatever order or with whatever prominence it wants. What these algorithms do is implement these sorts of shelving decisions, just as applied to the online content a platform displays. (For a sense of just how mundane such an algorithm can be, see the short sketch at the end of this post.)
If algorithms were effectively banned by making the Section 230 protection platforms need to host user-generated content contingent on not using them, it would become impossible for platforms to render any of that content at all: they couldn't do it technically if they abided by the rule to keep their Section 230 protection, or legally if that protection were withheld because they used algorithmic display. Such a rule would also represent a significant change to Section 230 itself by gutting its protection for moderation decisions, since those decisions are often implemented by an algorithm. In any case, conditioning Section 230 on not using algorithms is not a small change but one that would radically upend the statutory protection and all the online services it enables.
Terms of Service carve-outs. One idea (which is, oddly, backed by Facebook, even though it needs Section 230 to remain robust in order to defeat litigation like this) is that Section 230 protection should be contingent on platforms upholding their terms of service. As with these other proposals, this one is also a bad idea.
First of all, it negates the utility of Section 230 by making its applicability the subject of litigation. In other words, instead of being protected from litigation, platforms would now have to litigate whether they are protected from litigation, which means they aren't really protected at all.
It also fails to understand what terms of service are for. Platforms have them in order to limit their liability exposure; there's no way they are going to write them in a way that increases it.
As generally written now, terms of service put potentially wayward users on notice that if they don't act consistently with those terms, the service may be denied them. They aren't written as affirmative promises to do anything, because they can't be – content moderation at scale is impossible to do perfectly, so it would be foolish for platforms to obligate themselves to do the impossible. But that's what changing Section 230 in this way would do: create that obligation as the price of platforms retaining their needed protection.
The pipe dream some seem to have – that if only platforms did more moderation in accordance with their terms of service as currently written, everything would be perfect and wonderful – is hopelessly naïve.
After all, nothing about how the Internet works is nearly that simple. It is fine to want platforms to do as much as they can to meet the aspirational goals articulated in their terms of service. But changing Section 230 in this way won't lead them to do so. Instead it will make it legally unsafe for platforms to even articulate any such aspirations, and thus less likely to meet any of them. Which means that regulators won't get more of what they seek with this sort of proposal, but less.
Pre-emption elimination. One of the key clauses that makes Section 230 useful is its pre-emption provision. This is the provision that tells states they cannot rejigger their own laws in ways that would interfere with the operation of Section 230. It is so important because it gives platforms the certainty they need to benefit from the statute's protection: for it to be useful, they need to know that it applies to them and that states have no ability to mess with it.
Unfortunately we are already seeing increasing problems with state and local jurisdictions attempting to ignore this pre-emption provision, and courts sometimes even letting them. On top of that, there are proposals in Congress to deliberately undermine it. In fact, with FOSTA, it already has been undermined, with individual state governments now able to impose liability directly on platforms for their user activity, no matter how arbitrarily.
The state moderation bills illustrate what is wrong with states getting to mess with Section 230 and make its protection suddenly conditional – and therefore effectively useless. Given our current political polarization, the problem should be obvious: how is any platform going to reconcile the moderation demands of a Red State with the moderation demands of a Blue State? What is an inherently interstate Internet platform to do? Whose rules should it follow? What happens to it if it doesn't?
Congress put in the pre-emption provision because it knew that platforms could not possibly comply with all the myriad rules and regulations that every state, county, city, town, and locality might develop to impose liability on them. So it told them all to butt out. It would be a mistake to gut that provision now if Section 230 is still to have any value in making it safe for platforms to continue doing their job enabling the Internet.
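As promised above, here is a minimal sketch of the kind of "algorithm" at issue in the algorithmic display debate. It is purely illustrative – the post data, field names, and ordering rules are all hypothetical – but it shows that a display algorithm can be nothing more than an instruction to render user content in a particular order:

```python
from datetime import datetime

# Hypothetical user-submitted posts; on a real platform these would come
# from a database of user-generated content.
posts = [
    {"author": "alice", "text": "I have a home to rent", "posted_at": datetime(2021, 10, 3, 18, 45)},
    {"author": "bob", "text": "Check out my new video", "posted_at": datetime(2021, 10, 5, 14, 0)},
    {"author": "carol", "text": "Concert photos from last night", "posted_at": datetime(2021, 10, 4, 9, 30)},
]

# A "display algorithm": show the newest posts first -- the digital
# equivalent of a bookstore shelving its newest titles at the front.
for post in sorted(posts, key=lambda p: p["posted_at"], reverse=True):
    print(f'{post["posted_at"]:%Y-%m-%d %H:%M}  {post["author"]}: {post["text"]}')

# Changing one line changes the "algorithm" entirely, e.g. alphabetical
# by author: sorted(posts, key=lambda p: p["author"])
```

A carve-out conditioned on platforms not using "algorithms" would sweep in even ordering logic this trivial, which is why it would reach essentially every platform that displays user content.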
Daily Deal: The Complete 2021 Learn Linux Bundle
The Complete 2021 Learn Linux Bundle has 12 courses to help you learn Linux OS concepts and processes. You'll start with an introduction to Linux and progress to more advanced topics like shell scripting, data encryption, supporting virtual machines, and more. Other courses cover Red Hat Enterprise Linux 8 (RHEL 8), virtualizing Linux using Docker, AWS, and Azure, how to build and manage an enterprise Linux infrastructure, and much more. It's on sale for $59.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Report: TSA Is Spending $1 Billion On Bag Scanners That 'May Never Meet Operational Needs'
Somehow, "TSA" stands for "The Terrorists Won." In exchange for endless inconveniences, inconsistently deployed security measures, and a steady stream of intrusive searches and rights violations, we've obtained a theatrical form of security that's more performative than useful.
Since screeners continue to miss nearly every piece of contraband traveling through security checkpoints, the TSA has opted to buy even more screening equipment. Apparently, it's hoping no one will say it's not doing anything about these failures. It is throwing money at the problem. That's something. Unfortunately, it doesn't appear to be solving it.
A new report [PDF] from the DHS Inspector General says the shiny new scanners the agency bought to "address capability gaps in carry-on bag screening" aren't doing that now, and perhaps never will. The TSA obtained 300 computed tomography (CT) scanners, which were supposed to detect a broader range of explosives and make flying slightly less inconvenient by allowing passengers to keep their fluids and laptops in their bags. The ultimate goal is safer flying, less hassle at checkpoints, and faster throughput. It has achieved none of these goals, despite more than $1 billion being obligated toward the nationwide rollout of CT scanners.
Instead of meeting its own four-factor test for essential capabilities, the TSA's new toys fell short of every self-imposed metric.
Most People Probably Don't Need A VPN, Experts Now Advise
Given the seemingly endless privacy scandals that now engulf the tech and telecom sectors on a near-daily basis, many consumers have flocked to virtual private networks (VPNs) to protect and encrypt their data. One study found that VPN use quadrupled between 2016 and 2018 as consumers rushed to protect data in the wake of scandals, breaches, and hacks.
Unfortunately, many consumers are flocking to VPNs under the mistaken impression that such tools are a near-mystical panacea, acting as a sort of bulletproof shield that protects them from any potential privacy violations on the internet. Not only is that not true (ISPs, for example, have a universe of ways to track you anyway), but many VPN providers are even less ethical than the privacy-scandal-plagued companies and ISPs themselves.
After a few years in which VPN providers were repeatedly found to be dodgy, or to have tracked user data they claimed not to, security professionals have shifted their thinking on whether to recommend using one at all. While folks requiring strict security over wireless may still benefit from using a reputable VPN provider, experts say the landscape has changed. Improvements in the overall security of ordinary browsing (bank logins, etc.), plus the risk of choosing the wrong VPN provider, mean that many people may simply be better off without one:
This Week In Techdirt History: October 3rd - 9th
Five Years Ago
This week in 2016, the Trump campaign was reacting to the leaked pages of his 1995 tax returns by threatening to sue the New York Times, and also reacting to some ads from the Clinton campaign by threatening to sue them, too — while at the same time, the campaign was facing its own bogus threat from the Phoenix Police over imagery of cops in an ad. The big story, though, was the revelation that Yahoo had secretly built email scanning software under pressure from the feds. This led to basically every other tech company rapidly denying that they'd done the same, followed by Yahoo itself issuing a tone-deaf non-denial denial of the report. The media was very confused about the story, with the New York Times and Reuters claiming totally different explanations for the email scanning, and over the course of the week even more disagreements and confusion arose.
Ten Years Ago
This week in 2011, countries around the world were signing ACTA and finally admitting that it meant they'd have to change their copyright laws, while Brazil was drafting its own anti-ACTA framework for the internet. The Supreme Court declined to consider an appeals court ruling that properly stated music downloads are not public performances, though this didn't mean (as some claimed) that downloading had been legalized. Meanwhile, another judge dismissed a lawsuit over streaming video, but mostly avoided the larger copyright questions, and we saw a set of good rulings against copyright trolls, and one bad one.
This was also the week that Steve Jobs died at age 56.
Fifteen Years Ago
This week in 2006, Facebook was getting a start on its soon-to-be-tradition of threatening people who make useful third-party tools. Amazon was abandoning its attempt to make an early version of something like Street View, and Wal-Mart was abandoning its much more stupid attempt to offer a MySpace clone. The fight between Belgian news publishers and Google was continuing, while the copyright fight over My Sharona was dragging in Yahoo, Amazon and Apple. And the big news — though it was still just a rumor with lots of conflicting information going around, making it hard to tell if it was true — was that Google was planning to buy YouTube for $1.6 billion.
Connecticut Supreme Court Says Cops Need Warrants To Run Drug Dogs Around Motel Room Doors
Drug dogs are man's best friend, if that man happens to be The Man. "Probable cause on four legs" is the unofficial nickname of these clever non-pets, which give signals only their handlers can detect, granting cops permission to perform searches that would otherwise require a warrant.
They're normally seen at traffic stops and border checkpoints, but they're also used to sniff other places cops want to search without getting a warrant. This has led to a few legal issues for law enforcement, with courts occasionally reminding them that a dog sniff is a search and, if the wrong place is sniffed, a constitutional violation.
The top court in Connecticut has curtailed the use of drug dogs in certain areas, finding that sniffs are still searches and that these searches are unreasonable under the state constitution if performed in certain places -- namely, outside the doors of motel rooms. (via FourthAmendment.com)
In this case, police officers allowed their dog to sniff at doors of motel rooms until it alerted on one. Using this quasi-permission, officers entered the room and found contraband. The government argued that even if it was a search, it was performed in a place (a hotel or motel) where citizens have a lowered expectation of privacy, considering that the rooms are only rented, occupied for only a short time, and accessible by hotel staff.
In a really well-written opinion [PDF], the court reminds the government that a lowered expectation of privacy is not the same as a nonexistent expectation of privacy. And, more importantly, it reminds them that, while a motel room may not have the sanctity of a person's permanent home, it is a home away from home, and it is afforded more protection than, say, a car parked on the curb of a public road.
The court addresses all of the government's arguments and finds none of them persuasive.
Texas Pols Shocked To Learn Their Bill Let Gas Companies Off The Hook For Climate Change Preparedness
Having covered telecom for a long time, I've lost track of the times I've watched some befuddled lawmaker shocked by the content of their own bill. Usually, that's because they outsourced the writing of it to their primary campaign contributors, which in telecom usually means AT&T, Verizon, Comcast, and Charter. Sometimes they're so clueless about what their "own" bill includes that they'll turn to lobbyists in the middle of a hearing to seek clarity. This is, of course, outright corruption. But we tend to laugh it off and normalize it, and the press generally refuses to accurately label it corruption.
There are endless parallels in the energy sector. Like this week, when Texas lawmakers were shocked to realize their recent state energy bill failed to require that Texas natural gas companies harden their infrastructure for climate change -- thanks to the giant loopholes their own bill included.
In the wake of the disastrous and deadly climate-related crisis in Texas last winter, the state passed several bills purporting to fix the problem. Many, like Senate Bill 3, largely just kicked the can down the road, calling for a mapping of Texas's existing energy infrastructure and giving the Texas Railroad Commission 180 days to finalize its weatherization rules. None of the solutions, of course, challenged entrenched energy providers or tackled the core of the problem in Texas: an almost mindless deference to wealthy local energy executives.
At a recent hearing, Texas lawmakers blasted both the Texas Railroad Commission and local natural gas companies when they realized the latter had failed to weatherize their infrastructure with winter looming. The problem was that their own legislation provided the loopholes that made this possible:
If You Want To Know Why Section 230 Matters, Just Ask Wikimedia: Without It, There'd Be No Wikipedia
It sometimes seems that Techdirt spends half its time debunking bad ideas for reforming or even repealing Section 230. In fact, so many people seem to get the law wrong that Mike was moved to write a detailed post on the subject with the self-explanatory title "Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act". It may be necessary (and tiresome) work rebutting all this wrongness, but it's nice for a change to be able to demonstrate precisely why Section 230 is so important. A recent court ruling provides just such an example:
Winding Down Our Latest Greenhouse Panel: Content Moderation At The Infrastructure Layer
When Mike introduced our latest Greenhouse series on content moderation at the infrastructure layer, he made it abundantly clear this was a particularly thorny and complicated issue. While there's been a relentless focus on content moderation at the so-called "edge" of the internet (Google, Facebook, and Twitter), less talked about is content moderation at the "infrastructure" layers deeper in the stack. That can include anything from hosting companies and domain registrars to ad networks, payment processors, telecom providers, and app stores.
Whether, and how, these operations should engage in moderating content (and the peril of that participation being exploited and abused by bad actors and governments the world over) made this Greenhouse series notably more complicated than our past discussions on privacy, more traditional forms of content moderation, or broadband in the COVID era.
We'd like to extend a big thank you to our diverse array of wonderful contributors to this panel, who we think did an amazing job outlining the complexities and risks awaiting policymakers on what's sure to be a long road forward:
Locast Shuts Down, As Yet Again A Bad Interpretation Of Copyright Law Makes The World Worse
A few weeks ago I woke up one day to find the Lake Tahoe region on fire and the New York region underwater. Meanwhile the Supreme Court had just upended decades if not centuries of Constitutional law. But I could learn about none of it from watching local news, because Locast had shut down overnight following a dreadful decision by a district court a few days before.
Locast was a service similar to the now-extinct Aereo, although with a few critical legal distinctions necessary for it to avoid Aereo's litigation-obliterated fate. But the gist was the same: it was another rent-an-antenna service that "captures over-the-air ('OTA') broadcast signals and retransmits them over the internet, enabling viewers to stream live television on their preferred internet-connected viewing device" [p. 1-2 of the ruling]. And, like Aereo, it is yet another useful innovation now on the scrapheap of human history.
Absolutely nothing about this situation makes any sense. First, and least importantly, I'm not sure that Locast shutting down wasn't an overreaction to a decision so precariously balanced on such illusory support. Then again, no one wants to be staring down the barrel of a potentially ruinous copyright lawsuit under the best of circumstances, but especially not when the judge has arbitrarily torn up all your high cards. Getting out of the game at least helps limit the damage if the tide doesn't eventually turn.
More saliently, it makes absolutely no sense that the plaintiffs, mostly some of the largest television networks, would even bring this lawsuit. Services like Locast are doing them a favor by helping ensure that their channels actually get watched. As I've pointed out before, the only reason I ever watch their affiliates is thanks to Locast. Like many others, I don't have a cable subscription or my own antenna. So I need a service like Locast to essentially rent me one so that I can watch the over-the-air programming on the public airwaves I'd otherwise be entitled to see. Suing Locast for having rented me that antenna basically says that they don't actually want viewers. And that declaration should come as a shock to their advertisers, because the bottom line is that without services like Locast I'm not watching their ads.
It also makes no sense for copyright law to discourage services like these. Not only are these public airwaves that people should be able to receive using whatever tools they choose, but cutting people off from this programming doesn't advance any of the ideals copyright law exists to advance. More practically, it deprives people of shared mass media sources and instead drives everyone toward more balkanized media we must find for ourselves online. With lawmakers increasingly concerned about people having to fend for themselves in building their media diets, it seems weird for the law to effectively force them to. Especially after decades of policymaking deliberately designed to make sure that broadcast television could be a source of common culture, it would be a fairly radical shift for policy to suddenly obstruct that goal.
As it turns out, though, Congress has not wanted to completely abandon bringing broadcast television to the public, not even through copyright law, where there's actually a provision, at 17 U.S.C.
Section 111(a)(5) ("Certain Secondary Transmissions Exempted"), that recognizes rebroadcasting services as something worth having and articulates the conditions such a service must meet to avoid running afoul of the rest of the copyright statute. The salient language: