Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-10-04 23:47
Trump Announces His Own Social Network, 'Truth Social,' Which Says It Can Kick Off Users For Any Reason (And Already Is)
Last night, Donald Trump sent out a press release announcing (effectively) the launch of his new social network, "Truth Social." The press release shows that it's a bit more complicated than that. Trump is launching "Trump Media & Technology Group" which is entering into a reverse merger agreement to become listed as a public company in order to launch this new service. Apparently, Truth Social will let in "invited guests" next month, followed by a full launch in early 2022. The press release has the expected bombastically ridiculous quote from the former President.
Facebook AI Moderation Continues To Suck Because Moderation At Scale Is Impossible
For several years now, we've been hammering home the idea that content moderation at scale is impossible to get right, otherwise known as Masnick's Impossibility Theorem. The idea there is not that platforms shouldn't do any form of moderation, or that they shouldn't continue trying to improve their moderation methods. Instead, this is all about expectation setting, partially for a public that simply wants better content to show up on their various devices, but even more so for political leaders who often see a problem happening on the internet and assume that the answer is simply "moar tech!"

Being an internet behemoth, Facebook catches a lot of heat when its moderation practices suck. Several years ago, Mark Zuckerberg announced that Facebook had developed an AI-driven moderation program, alongside the claim that this program would capture "the vast majority" of objectionable content. Anyone who has spent 10 minutes on Facebook in the years since realizes how badly Facebook failed to meet that goal. And, as it turns out, it failed in both directions.

By that I mean that, while much of our own commentary on all this has focused on how often Facebook's moderation ends up blocking non-offending content, a recent Ars Technica post on just how much hate speech makes its way onto the platform has some specific notes about how some of the most objectionable content is misclassified by the AI moderation platform.
Content Moderation Case Studies: Snapchat Disables GIPHY Integration After Racist 'Sticker' Is Discovered (2018)
Summary: Snapchat debuted to immediate success a decade ago, drawing in millions of users with its playful take on instant messaging that combined photos and short videos with a large selection of filters and "stickers." Stickers are graphics that can be applied to messages, allowing users to punch up their presentations (so to speak).

Snapchat's innovations in the messaging space proved incredibly popular, moving Snapchat from upstart to major player in a few short years. It also created more headaches for moderators as sent messages soared past millions per day to billions.

Continuing its expansion of user options, Snapchat announced its integration with GIPHY, a large online repository of GIFs, in February 2018. This gave users access to GIPHY's library of images to use as stickers in messages.

But the addition of thousands of images to billions of messages quickly resulted in an unforeseen problem. In early March of 2018, Snapchat users reported that a search of the GIPHY image database for the word "crime" surfaced a racist sticker, as reported by Josh Constine for TechCrunch:
Arlo Makes Live Customer Service A Luxury Option
The never-ending quest for improved quarterly returns means that things that technically shouldn't be luxury options inevitably wind up being precisely that. We've shown how a baseline expectation of privacy is increasingly treated as a luxury option by hardware makers and telecoms alike. The same thing also sometimes happens to customer service, at least when companies think they can get away with it.

"Smart home" and home security hardware vendor Arlo, for example, has announced a number of new, not particularly impressive subscription tiers for its internet-connected video cameras. The changes effectively involve forcing users to pay more money every month if they ever want to talk to a live customer service representative. From Stacey Higginbotham:
Delta Proudly Announces Its Participation In The DHS's Expanded Biometric Collection Program
Via Travel & Leisure comes this warning -- one the online magazine has decided to portray as exciting news.
LinkedIn (Mostly) Exits China, Citing Escalating Demands For Censorship
Less than a week after its horrendous decision to help China's censorship apparatus keep Chinese residents from accessing the accounts of American journalists, LinkedIn has announced it will no longer be offering the full-featured version of its quasi-social media platform in the country. (via the BBC)

Specifically cited in senior vice president Mohak Shroff's announcement is China's escalating censorship demands, albeit in somewhat nonspecific terms. It also acknowledges that Microsoft and LinkedIn made a calculated decision to do business with a government that had the power to shut LinkedIn down (or run it off) if it failed to satisfactorily acquiesce.
Daily Deal: The Python, Git, And YAML Bundle
The Python, Git, And YAML Bundle has 9 courses to help you learn all about Python, YAML, and Git. Five courses cover Python programming from the beginner level to more advanced concepts. Three courses cover Git and how to use it for your personal projects. The final course introduces you to YAML fundamentals. The bundle is on sale for $29.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
British Telecom Wants Netflix To Pay A Tax Simply Because Squid Game Is Popular
For years, telecom executives, jealous of internet services and ad revenue, have demanded that content and services companies pay them an extra toll for no reason. You saw this most pointedly during the net neutrality fracas, when AT&T routinely insisted Google should pay it additional money for no coherent reason. Telecom execs have also repeatedly claimed that Netflix should pay them more money just because. Basically, telecoms have tried to use their gatekeeper and political power to offload network investment costs onto somebody else, and have spent literally the last twenty years using a range of incoherent arguments to try to justify it, with varying degrees of success.

While these efforts quieted down for a few years, they've popped back up recently thanks to, of all things, Netflix's Squid Game. In South Korea, ISPs have demanded that Netflix pay them more money because of the streaming demand the popular show places on their networks. As we noted then, this makes no coherent sense, given that ISPs build their networks to handle peak capacity load; what specific type of traffic causes that load doesn't particularly matter. It's just not how network engineering or common sense works.

That's not stopping telecom executives around the world, of course. Across the pond, British Telecom Chief Executive Marc Allera has trotted out the same argument there, claiming that a surge in usage (during a pandemic, imagine that) is somehow Netflix's problem:
Report: Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments
In August, Apple declared that combating the spread of CSAM (child sexual abuse material) was more important than protecting millions of users who've never used their devices to store or share illegal material. While encryption would still protect users' data and communications (in transit and at rest), Apple had given itself permission to inspect data residing on people's devices before allowing it to be sent to others.

This is not a backdoor in a traditional sense. But it can be exploited just like an encryption backdoor if government agencies want access to devices' contents or mandate companies like Apple do more to halt the spread of other content governments have declared troublesome or illegal.

Apple may have implemented its client-side scanning carefully after weighing the pros and cons of introducing a security flaw, but there's simply no way to engage in this sort of scanning without creating a very large and slippery slope capable of accommodating plenty of unwanted (and unwarranted) government intercession.

Apple has put this program on hold for the time being, citing concerns raised by pretty much everyone who knows anything about client-side scanning and encryption. The conclusions that prompted Apple to step away from the precipice of this slope (at least momentarily) have been compiled in a report [PDF] on the negative side effects of client-side scanning, written by a large group of cybersecurity and encryption experts (Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, and Carmela Troncoso). (via The Register)

Here's how that slippery slope looks. Apple's client-side scanning may be targeted, utilizing hashes of known CSAM images, but once the process is in place, it can easily be repurposed.
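To make the repurposing concern concrete, here's a minimal, hypothetical sketch of what hash-list matching looks like. This is not Apple's actual system (which uses a perceptual "NeuralHash" and private set intersection rather than a plain cryptographic-hash lookup); it only illustrates the structural point the report makes:

```python
import hashlib

def scan_files(file_contents, target_hashes):
    """Return names of files whose hashes appear on the target list.

    Hypothetical illustration: a client-side scanner reduces to
    "compare local files against a supplied list of hashes." Nothing
    in the mechanism knows *why* a hash is on that list.
    """
    flagged = []
    for name, data in file_contents.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest in target_hashes:
            flagged.append(name)
    return flagged

# Whoever controls the hash list controls what gets flagged: CSAM today,
# politically disfavored imagery tomorrow, with no code change at all.
photos = {"a.jpg": b"known-bad-bytes", "b.jpg": b"innocent-bytes"}
hash_list = {hashlib.sha256(b"known-bad-bytes").hexdigest()}
print(scan_files(photos, hash_list))  # ['a.jpg']
```

The mechanism itself is content-neutral, which is exactly why the experts warn it can be repurposed by whoever supplies the hash list.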
MLB In Talks To Offer Streaming For All Teams' Home Games In-Market Even Without A Cable Subscription
Streaming options for professional and major college sports have long been a fascination of mine. That is in part because I'm both a fairly big fan of major sports and a fan of streaming over the wire instead of having cable television. My family cut the cord a couple of years back and hasn't looked back since, almost entirely satisfied with our decision. The one lingering area of concern is being able to stream our local sports teams, as most of the pro sports leagues still have stupid local blackout rules. MLB.TV, the league's fantastic streaming service, has these rules too. While using a DNS proxy is trivially easy, it would be easier still for the league to come to terms with modernity and end the blackout rules. Notably, MLB did this in 2015 when it came specifically to Fox Sports broadcasts for 15 teams, but as I noted at the time:
Appeals Court Says Couple's Lawsuit Over Bogus Vehicle Forfeiture Can Continue
Another attempted government theft has been thwarted by the courts. The Ninth Circuit Appeals Court has ruled in favor of a couple whose vehicle was carjacked by Arizona law enforcement officers while their son used it for an extended road trip.

Here's AZ Central's summary of the events leading to the lawsuit the Ninth has revived:
Techdirt Podcast Episode 301: Scarcity, Abundance & NFTs
We've got a cross-posted podcast for you this week! Recently, Mike appeared on the Ipse Dixit podcast with host Professor Brian L. Frye — the inspiration for our Plagiarism Collection of NFTs and, previously, our OK, Landlord gear — for a wide-ranging discussion about scarcity and abundance in the digital age. You can listen to the whole conversation on this week's episode.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Hollywood Is Betting On Filtering Mandates, But Working Copyright Algorithms Simply Don't Exist
Facebook whistleblower Frances Haugen may not have mentioned copyright in her Congressional testimony or television interviews, but her focus on artificial intelligence ("AI") and content moderation has a lot of implications for online copyright issues.

For the last decade, Hollywood, the music industry and others have been pushing for more technical solutions to online copyright infringement. One of the biggest asks is for internet companies to "nerd harder" and figure out algorithms that can identify and remove infringing content. They claim content filters are the solution, and they want the law to force that onto companies.

And they have been successful in parts of the world so far. For example, the recent European Union Copyright Directive placed a filtering mandate on internet platforms. Hollywood and the record labels are pushing the U.S. to follow suit and make platforms liable for copyright infringement by users. They want NIST to develop standards for filtering software, and they are using the power of Congress and the U.S. Copyright Office to push for legislation and/or voluntary agreements to create more filters.

There is one huge problem with all of this: the technology does not exist to do this accurately. What the Facebook whistleblower made clear is that even the most sophisticated AI-based algorithms cannot accurately moderate content. They make tons of mistakes. Haugen even suggested that a huge part of the challenge is the belief that "nerding harder" will work. She blamed Facebook's mantra of solving problems through technology as the main reason it is struggling with content moderation.

Copyright presents a uniquely contextual challenge for algorithms. It's not easy to automatically determine what is copyright infringement and what is not. Even under today's existing systems, about a third of takedown requests are potentially problematic, requiring further analysis. Most of these erroneous takedowns are done by algorithms.
This analysis can be extremely complicated even for the American judicial system – so much so that the Supreme Court recently had to clarify how to apply the four-part fair use test. In court, each fair use case gets a very individual, fact-based analysis. Current AI-based algorithms are not close to being able to do the analysis needed to determine copyright infringement in fair use cases.

So why is there a big push from Hollywood, the music industry and others on this? They are smart enough to know that algorithmic solutions are not close and may never be able to handle filtering for infringement accurately.

The reason is they do not want filtering technologies to be accurate. They want filtering technologies to over-correct and take anything that might be infringing off the internet. Congress cannot directly legislate such an overcorrection, because it is a clear violation of the First Amendment. But it might be able to introduce legislation that creates a de facto mandatory filtering requirement. Mandatory filtering legislation imposed via changing the Digital Millennium Copyright Act Section 512's platform liability regime would lead companies to "voluntarily" implement over-correcting filtering solutions – or otherwise face a constant barrage of losing lawsuits and legal bills for any and all alleged infringement by users. And this could create an end run around the First Amendment if a court decided that the company was implementing those filters "voluntarily."

At this point it is important to recognize the types of activity we are talking about here: transformative works of creativity, pop-art, criticism and parody. This includes teens sharing lip sync TikToks and videos of your little kids dancing to a song. But fair use doesn't apply to just the creative arts.
It also includes collaborative efforts on internet platforms to develop cybersecurity solutions that require reverse engineering, and it allows teachers to share materials with students on online education platforms. Documentarians depend heavily on fair use, and efforts to distribute documentaries online would face stiff challenges.

All of these important capabilities would be severely at risk if we forced filtering requirements onto internet platforms via threat of liability. If we let Hollywood and music industry elites and the Members of Congress who do their bidding get their way, the rest of America will lose out.

Josh Lamel is the Executive Director of the Re:Create Coalition. This article was originally posted to the Re:Create Coalition blog.
Introducing The Techdirt Insider Discord
Join the Insider Discord with a Watercooler or Behind The Curtain membership!

Techdirt has been around for nearly 25 years at this point, and we have an unfortunate habit of being just slightly too far ahead of the technology curve. The site was launched before the word blog even existed, and certainly before there were readily available and easy to use tools for creating a blog (more on that soon!), so we cobbled together our own solution. We've done that with unfortunate frequency. In the early 2000s, we even built our own internal RSS reader in order to find stories (I always thought it was better than Google Reader). And, a while back, we launched the Techdirt "Insider Chat" long before Discord or Slack or other such tools were popular.

The Techdirt Insider Chat was a widget on our site: if you supported us at certain levels in our own Insider Shop (or on Patreon), you got access to a chat that only those supporters could use -- but which was still displayed on the sidebar for anyone to see. Because there weren't widespread tools to make this possible, we built our own. But it was a bit clunky and limited, and honestly wasn't receiving that much use beyond a handful of dedicated users.

Over the last few months, we've moved the Insider Chat over to Discord, which has become the standard these days for community chats. However, we did want to still include the feature of displaying the chat publicly -- but only allowing actual supporters to participate.
So while we are now using Discord as the basis of the chat (which is much easier for many people to use, has many more features, and allows for things like accessing the chat on mobile devices), we built our own embeddable widget that reflects the chat in the sidebar (which you can see if you look over to the right).

If you're interested in (1) supporting Techdirt, (2) joining in on the conversations now happening in the chat, and (3) connecting with others in the Techdirt community, please consider supporting us at a level that includes the Insider Chat.

As you'll recall, earlier this year we removed all the ads (and Google tracking code) from Techdirt. We are relying more and more on our community supporting us going forward, and we're working hard to provide those supporters with more useful and fun features, including this new Discord community.
Daily Deal: The Dynamic 2021 DevOps Training Bundle
Most software companies today employ extensive DevOps staff, and engineers are in constant demand. In the Dynamic 2021 DevOps Training Bundle, Certs-School provides you with 5 courses to introduce you to the DevOps field, improve your skills, and later excel as an actual practitioner. You will be introduced to DevOps tools and methodologies, Git, CompTIA Cloud, Docker, and Ansible. Each course is self-paced so you can learn in your own time. It's on sale for $60.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Criminalizing Teens' Google Searches Is Just How The UK's Anti-Cybercrime Programs Roll
Governments sure seem to hate online advertisers and the platforms that profit from targeted advertising and tailored content algorithms. But they don't -- at least in this case -- have anything against engaging in exactly this sort of behavior if it helps them achieve their ends.

In 2015, the UK's National Crime Agency started a program called Cyber Choices, which was meant to steer young people away from becoming malicious hackers. Starting with the assumption that any form of hacking would ultimately result in malicious hacking, the NCA hoped to engage in interventions that would redirect this apparently unguided energy into something more productive and less harmful.

Insistent in its belief that children are our (grimdark) future if left unattended, the NCA started making stupid assertions, like claiming that modding videogames was the gateway drug for black hat hackers. To steer curious youngsters away from malicious hacking, the NCA got into the targeted advertising business.
Canon Sued For Disabling Printer Scanners When Devices Run Out Of Ink
For more than a decade now, computer printer manufacturers have been engaged in an endless quest: "let's be as annoying as humanly possible." That quest, driven by a desire to monopolize and boost the sale of their own printer cartridges, has resulted in all manner of obnoxious DRM and other restrictions designed to make using cheaper, third-party printing cartridges a monumental headache. Often, software or firmware updates have been designed to intentionally grind printing to a halt if you try to use these alternative options.

Beyond that, there are other things printer manufacturers do that make even less sense if a happy customer is the end goal. Take for example Canon's history of disabling the scanning and faxing functionality on some printer models if the printer itself runs out of ink. It's a policy designed to speed up the rate at which users buy expensive cartridges (god forbid you go a few months just scanning things without adequate levels of magenta), but it's exemplary of the outright hostility that plagues the sector.

And now Canon is facing a $5 million lawsuit (pdf) for its behavior. The lawsuit, filed in the District Court for the Eastern District of New York (first spotted by Bleeping Computer), claims Canon fails to adequately disclose the restrictions to consumers:
Copyright Law Discriminating Against The Blind Finally Struck Down By Court In South Africa
Most people would agree that those who are blind or visually impaired deserve all the help they can get. For example, the conversion of printed materials to accessible formats like Braille, large print, or Digitally Accessible Information System (DAISY) formats, ought to be easy. Who could possibly object? For years, many publishers did; and the reason – of course – is copyright. For example, publishers refused to allow Braille and other accessible editions to be shared between different countries:
LAPD Sees Your Reform Efforts, Raises You $20 Million In Bullets, Snacks, And Surveillance
The Los Angeles Police Department is reform-resistant. This isn't the same as reform-proof, but more separates "resistant" from "proof" in this case than the misleading labels promising varying degrees of water resistance placed on watches and cellphones.

The LAPD has endured decades of bad press with barely an uptick in performance or community orientation. The LAPD is best known for beating minorities until riots happen. With a wave of police reform efforts sweeping the nation -- many of them looking to spend less on police violence and more on things that actually help the community -- the LAPD has issued a tone-deaf demand for more money to spend on things residents are complaining about.
Study Shows How Android Phones Still Track Users, Even When 'Opted Out'
We've frequently noted that what's often presented as "improved privacy" is usually privacy theater. For example, researchers just got done showing how Apple's heavily hyped "do not track" button doesn't actually do what it claims to do, and numerous apps can still collect a parade of different data points on users who believe they've opted out of such collection. And Apple's considered among the better companies when it comes to privacy promises.

Android is notably worse. One of my favorite privacy and adtech reporters is Shoshana Wodinsky, because she'll genuinely focus on the actual reality, not the promises. This week she wrote about how researchers at Trinity College in Dublin took a closer look at Android privacy, only to find that the term "opting out" often means absolutely nothing:
Court Tells Arkansas Troopers That Muting Anti-Cop Terms On Its Facebook Page Violates The 1st Amendment
When government entities use private companies to interact with the public, it can cause some confusion. Fortunately, this isn't a new problem with no court precedent and/or legal guidelines. For years, government agencies have been utilizing Twitter, Facebook, Instagram, etc. to get their message out to the public and (a bit less frequently) listen to their comments and complaints.

Platforms can moderate content posted to accounts and pages run by public entities without troubling the First Amendment. Government account holders can do the same thing, but the rules aren't exactly the same. There are limits to what content moderation they can engage in on their own. A case involving former president Donald Trump's blocking of critics resulted in an Appeals Court decision that said this was censorship -- a form of viewpoint discrimination that violated these citizens' First Amendment rights.

A decision [PDF] from a federal court in Arkansas arrives at the same conclusion, finding that a page run by local law enforcement engaged in unlawful viewpoint discrimination when it blocked a Facebook user and created its own blocklist of words to moderate comments on its page. (h/t Volokh Conspiracy)

This case actually went in front of a jury, which made a couple of key determinations on First and Fourth Amendment issues. The federal court takes it from there to make it clear what government agencies can and can't do when running official social media accounts.

Plaintiff James Tanner commented on the Arkansas State Police's Facebook page with a generic "this guy sucks" in response to news about the promotion of a state trooper. That post was removed -- then reinstated -- by the State Police.

While that may have been a (temporary) First Amendment violation, the court says this act alone would not create a chilling effect, especially in light of the comment's reinstatement shortly after its deletion.

However, the State Police took more action after Tanner contacted the page via direct message with messages that were far more direct. In response to the State Police's threat to ban him if he used any more profanity in his comments, Tanner stated: "Go Fuck Yourself Facist Pig." For that private message -- seen by no one but Tanner and Captain Kennedy, who handled moderation of the State Police page -- Tanner was blocked. Kennedy compared blocking Tanner to "hanging up" on a rude caller.

The court disagrees. It's not quite the same thing. "Hanging up" on someone terminates a single conversation. What happened here was more analogous to subjecting Tanner to a restraining order that forbade him from speaking to state troopers or about them.
New Research Shows Social Media Doesn't Turn People Into Assholes (They Already Were), And Everyone's Wrong About Echo Chambers
We recently wrote about Joe Bernstein's excellent Harper's cover story, which argues that we're all looking at disinformation/misinformation the wrong way, and that the evidence of disinformation on social media really influencing people is greatly lacking. Instead, as Bernstein notes, this idea is one that many others are heavily invested in spreading, including Facebook (if the disinfo story is true, then you should buy ads on Facebook to influence people in other ways), the traditional media (social media is a competitor), and certain institutions with a history of having authority over "truth" (can't let the riffraff make up their own minds on things).

We've also seen other evidence pop up questioning the supposed malicious impact of social media. Yochai Benkler's work has shown that Fox News has way more of an impact on spreading false information than social media does.

And even with all this evidence regarding disinformation, there are also people who focus on attitude, and insist that social media is responsible for otherwise good people turning awful. Yet, as was covered in a fascinating On the Media interview with Professor Michael Bang Petersen, there really isn't much evidence to support that either! As Petersen explained in a useful Twitter thread, his research has shown that there's no real evidence to support the idea that social media turns people hostile. Instead, it shows that people who are assholes offline are also assholes online.

But in the interview, Petersen makes a really fascinating point regarding echo chambers. I've been skeptical about the idea of online echo chambers in the past, but Petersen says that people really have it all backwards -- we're actually much more likely to live in echo chambers offline than online, and we're much more likely to come across different viewpoints online.
Daily Deal: The 2021 Complete Video Production Super Bundle
The 2021 Complete Video Production Super Bundle has 10 courses to help you learn all about video production. Aspiring filmmakers, YouTubers, bloggers, and business owners alike can find something to love about this in-depth video production bundle. Video content is fast changing from the marketing tool of the future to that of the present, and here you'll learn how to make professional videos on any budget. From the absolute basics, to screenwriting, to the advanced shooting and lighting techniques of the pros, you'll be ready to start making high quality video content. You'll learn how to make amazing videos, whether you use a smartphone, webcam, DSLR, mirrorless, or professional camera. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Apple Gives Chinese Government What It Wants (Again); Pulls Quran App From Chinese App Store
Apple has generally been pretty good about protecting users from government overreach, its recent voluntary (and misguided) foray into client-side scanning of users' images notwithstanding. But that seemingly only applies here in the United States, which is going to continue to pose problems for Apple if it chooses to combat local overreach while giving foreign, far more censorial governments greater and greater control.

Like many other tech companies, Apple has no desire to lose access to one of the largest groups of potential customers in the world. Hence its deference to China, which has seen the company do things like pull the New York Times app in China following the government's obviously bullshit claim that the paper was a purveyor of "fake news."

Since then, Apple has developed an even closer relationship with the Chinese government, which culminated in the company opening data centers in China to comply with the government's mandate that all foreign companies store Chinese citizens' data locally, where it's much easier for the government to demand access.

On a smaller scale, Apple pulled another app -- one that encrypted text messages on platforms that don't provide their own encryption -- in response to government demands. Once again, Apple left Chinese citizens at the mercy of their government, apparently in exchange for the privilege of selling them devices that promised security and privacy while actually offering very little of either.

The latest acquiescence by Apple will help the Chinese government continue its oppression of the country's Uighur minority -- Muslim adherents who have been subjected to religious persecution for years. Whoever the government doesn't cage, disappear, or genocide into nonexistence will see nothing but the bottom of a jackboot for years to come. Apple is aiding and abetting the jackboot, according to this report by the BBC.
Many Digital Divide 'Solutions' Make Privacy And Trust A Luxury Option
We've noted a few times how privacy is slowly but surely becoming a luxury good. Take low-cost cellular phones, for example. They may now be available for dirt cheap, but the devices are among the very first to treat consumer privacy and security as effectively unworthy of consideration at that price point. So at the same time we're patting ourselves on the back for "bridging the digital divide," we're creating a new paradigm whereby privacy and security are placed out of reach for those who can't afford them.

A similar scenario is playing out on the borrowed school laptop front. Lower income students who need to borrow a school laptop to do their homework routinely find that bargain comes with some monumental trade-offs; namely, zero expectation of privacy. Many of the laptops being used by lower-income students come with Securly, student-monitoring software that lets teachers see a student's laptop screen in real time and even close tabs if they discover a student is "off-task."

But again, it creates a dichotomy between students with the money for a laptop (innately trusted) and lower income students who are inherently tracked and surveilled:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side is PaulT with a response to a complaint about vaccinations:
This Week In Techdirt History: October 10th - 16th
Five Years Ago

This week in 2016, everyone was abuzz about the infamous Trump Access Hollywood recording that had dropped the previous Friday, and we learned about how NBC had delayed a story about it for fear of getting sued — after all, Trump was tossing around legal threats to newspapers with wild abandon. At the same time, Charles Harder said he was no longer monitoring Gawker (though he was still sending takedown demands), but he was sending out a threat letter on behalf of Melania Trump. We also got some more details on the recent spate of bogus defamation lawsuits being used to block negative reviews.

Ten Years Ago

This week in 2011, German collection society GEMA was demanding fees for music it didn't hold the rights to while the Pirate Party was continuing to build support, taking 9% of the vote nationwide in Germany. A Belgian court ordered the blocking of the wrong Pirate Bay domain, the UK government was admitting it had no evidence for its plans for draconian copyright law, and we wondered why PROTECT IP supporters couldn't just admit the bill was about censorship (while Yahoo was quietly dumping the US Chamber of Commerce over its extremist position on PROTECT IP).

Fifteen Years Ago

This week in 2006, the big rumors of the previous week became official when Google acquired YouTube for $1.65 billion in Google stock, which of course led to all kinds of varied opinions on the news and a renewed interest from entertainment companies in threatening to sue... and/or negotiate. Anti-video-game crusader Jack Thompson somehow convinced a judge that he should get to see the entirety of the game Bully before it was released, only to have his hopes of declaring it a public nuisance quickly dashed. We were shocked to see a Disney executive actually admit that piracy is competition, baffled to hear a Sony Pictures UK executive claim that getting rid of release windows was "not technically possible", and amused to see the Christian music industry start making a fuss about piracy as a moral issue.
Trader Joe's Threatens Man Over Parody 'Traitor Joe' Political T-Shirt
The last time we found niche grocery chain Trader Joe's playing intellectual property bully, it was over one enterprising Canadian man who drove across the border, bought a bunch of good stuff from Trader Joe's, and then resold it at his Canadian store called "Pirate Joe's". While that whole setup is entertaining, Trader Joe's sued for trademark infringement in the United States, which made zero sense. The store was in Canada, not the States, reselling purchased items is not trademark infringement, and Trader Joe's was free to open up Canadian stores if it chose.

Fast forward to the present and Trader Joe's is trying to stretch trademark law yet again, this time to go after one man's website that is selling parody t-shirts with a picture of Joe Biden and the moniker "Traitor Joe", all mocked up to look like the store logo. Trader Joe's sent a threat letter to the man, Dan McCall, who was represented by friend of the site Paul Alan Levy.
Study Says Official Count Of Police Killings Is More Than 50% Lower Than The Actual Number
In 2019, the FBI claimed to be compiling the first-ever database of police use of force, including killings of citizens by officers. It was, of course, not the first-ever database of police killings. Multiple databases had been created (some abandoned) prior to this self-congratulatory announcement to track killings by police officers.

What this database would have, however, is information on use of force, which most private databases didn't track. Whether or not it actually does contain this info is difficult to assess, since the FBI's effort does not compile these reports in any easily accessible manner, nor does it provide readable breakdowns of the data -- something it does for other things, like crimes against police officers.

It also does not have the participation of every law enforcement agency in the nation, which prevents the FBI from collecting all relevant information. It's also voluntary, so even participating agencies are free to withhold incident reports, keeping their official use-of-force/killing numbers lower than they actually may be.

The problem with underreporting traces back decades, though. The official count of police killings has always been lower than data compiled by non-government databases, which rely almost solely on open-source information like news reports. It would seem the numbers reported by the FBI would be higher, since it theoretically has access to more info, but the FBI's count has repeatedly been lower than outside reporting.

A recent study published by The Lancet says the official numbers are wrong. And they're off by a lot. Utilizing outside databases compiled by private citizens/entities and data obtained from the USA National Vital Statistics System (NVSS), the researchers have reached the conclusion that law enforcement self-reporting has resulted in undercounting the number of killings by officers by thousands over the past four decades.
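The headline's "more than 50% lower" claim is just arithmetic on the two tallies: compare the official count against the study's estimate. A minimal sketch, using illustrative numbers rather than the study's exact figures:

```python
def undercount_pct(official: int, estimated: int) -> float:
    """Percent of the estimated total missing from the official tally."""
    return 100 * (1 - official / estimated)

# Illustrative numbers only: if researchers estimate 30,000 deaths
# over a period in which official sources logged 14,000, the
# official count is more than 50% lower than the estimate.
print(round(undercount_pct(14_000, 30_000), 1))  # -> 53.3
```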
The Surveillance And Privacy Concerns Of The Infrastructure Bill's Impaired Driving Sensors
There is no doubt that many folks trying to come up with ways to reduce impaired driving and make the roads safer have the best of intentions. And yet, hidden within those intentions can linger some pretty dangerous consequences. For reasons that are not entirely clear to me, the giant infrastructure bill (which will apparently be negotiated forever) includes a mandate that automakers would eventually need to build in technology that monitors whether or not drivers are impaired. It's buried deep in the bill (see page 1066), but the key bit is:
GOP Very Excited To Be Handed An FCC Voting Majority By Joe Biden
Consumer groups have grown all-too-politely annoyed at the Biden administration's failure to pick a third Democratic Commissioner and permanent FCC boss nearly eight months into his term. After the rushed Trump appointment of unqualified Trump ally Nathan Simington to the agency (as part of that dumb and now deceased plan to have the FCC regulate social media), the agency now sits gridlocked at 2-2 commissioners under interim FCC head Jessica Rosenworcel.

While the FCC can still putter along tackling its usual work on spectrum and device management, the gridlock means it can't do much of anything controversial, like reversing Trump-era attacks on basic telecom consumer protections, media consolidation rules, or the FCC's authority to hold telecom giants accountable for much of, well, anything. If you're a telecom giant like AT&T or Comcast, that's the gift that just keeps on giving.

More interesting perhaps is the fact that interim FCC boss Jessica Rosenworcel, whose term expires at the end of the year, hasn't had her term renewed either. That means there's an increasingly real chance the GOP enjoys a 2-1 voting majority at Biden's FCC in the new year:
Plagiarism By Techdirt: Our Plagiarized NFT Collection Can Now Actually Be Bid On
Place your bids on the Plagiarism NFT Collection by Techdirt »

A few weeks ago, we wrote about our latest experiment with NFTs (which is part of the research we're doing into NFTs for a deep dive paper I'm working on). There's a very long explanation of the NFTs in question and why we're plagiarizing Prof. Brian Frye (but making them much, much cooler). But, after we posted that, we discovered one little problem. The platform that we were using, OpenSea (the most popular and user friendly NFT marketplace)... didn't work. At least not for us. We've spent three weeks asking OpenSea to fix things, and last night they finally figured out the problem, so that you can now (finally) actually bid in the open auction for our plagiarized set of NFTs about plagiarism.

There are tons of reasons to bid on them -- some good, some less good -- but at the very least, it will help support Techdirt, it will show that culture works by building on those who came before, not by locking up content, and it will let you experiment with NFTs if you haven't already. Also, it'll let you show how maybe people shouldn't freak out over plagiarism all the time -- and when else do you have a chance to do that?

The entire collection can be seen here, and they do look amazing, if I do say so myself.
Daily Deal: The Complete 2022 Microsoft Office Master Class Bundle
The Complete 2022 Microsoft Office Master Class Bundle has 14 courses to teach you all you need to know about MS Office products and boost your productivity. Courses cover SharePoint, Word, Excel, Access, Outlook, Teams, and more. The bundle is on sale for $75.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Clearview Celebrates 10 Billion Scraped Images Collected, Claims It Can Now Recognize Blurred, Masked Faces
Clearview's not going to let several months of bad press derail its plans to generate even more negative press. The facial recognition tech company that relies on billions of scraped images from the web to create its product is currently being sued in multiple states, has had its claims about investigative effectiveness repeatedly debunked and, most recently, served (then rescinded) a subpoena to transparency advocacy group Open the Government demanding information on all its Clearview-related FOIA requests as well as its communications with journalists.

I don't know what Clearview is doing now. Maybe it thinks it can still win hearts and minds by not only continuing to exist but also by getting progressively worse in terms of integrity and corporate responsibility. Whatever it is that Clearview's doing to salvage its reputation looks to be, at best, counterproductive. I mean, the only way Clearview could get worse is by getting bigger, which is exactly what it's done, according to this report by Will Knight for Wired.
Journalists In St. Louis Discover State Agency Is Revealing Teacher Social Security Numbers; Governor Vows To Prosecute Journalists As Hackers
Last Friday, Missouri's Chief Information Security Officer Stephen Meyer stepped down after 21 years working for the state to go into the private sector. His timing is noteworthy because it seems like Missouri could really use someone in its government who understands basic cybersecurity right now.

We've seen plenty of stupid stories over the years about people who alert authorities to security vulnerabilities and are then threatened for hacking, but this story may be the most ridiculous one we've seen. Journalists for the St. Louis Post-Dispatch discovered a pretty embarrassing leak of private information for teachers and school administrators. The state's Department of Elementary and Secondary Education (DESE) website included a flaw that allowed the journalists to find Social Security numbers of the teachers and administrators:
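The class of flaw at issue, sensitive data embedded in the pages a site serves, requires no "hacking" to find: anything in served HTML is visible to anyone who views the page source. A minimal sketch of what such a check looks like (the page content and attribute names below are hypothetical, not DESE's actual site):

```python
import re

# Matches SSN-shaped strings like 123-45-6789 in raw page source.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_ssn_shaped_strings(html: str) -> list[str]:
    """Return any SSN-shaped substrings present in served HTML."""
    return SSN_PATTERN.findall(html)

# Hypothetical page source illustrating the flaw: the sensitive value
# is simply sitting in the markup delivered to every visitor.
page = '<div data-educator="Jane Doe" data-ssn="123-45-6789"></div>'
print(find_ssn_shaped_strings(page))  # -> ['123-45-6789']
```

Calling "view source" a hack, as the governor did, mistakes reading what the server already sent for breaking into it.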
Billy Mitchell Survives Anti-SLAPP Motion From Twin Galaxies A Second Time
The Billy Mitchell and Twin Galaxies saga rolls on, it seems. Mitchell has made it onto our pages several times in the past, most recently over a lawsuit filed against gaming record keeper Twin Galaxies over its decision to un-award his high score record for Donkey Kong on allegations he achieved it on an emulator instead of an official cabinet. The suit is for defamation, and Twin Galaxies initially tried to get the case tossed on anti-SLAPP grounds, but the court denied that request under the notion that Mitchell only has to show "minimal merit" in the overall case to defeat the anti-SLAPP motion.

And now, on appeal, California's Second Appellate District has affirmed that ruling, again on "minimal merit" grounds. You can read the entire ruling embedded below, though I warn you that there are many pages dedicated to the back and forth between Mitchell and Twin Galaxies over a video game record, so you may come away with sore eyebrows from rolling your eyes so hard at all of this. There is also a metric ton of context as to how the court is supposed to apply the anti-SLAPP statute. Go nerd out if you like, but the whole ruling boils down to this:
Content Moderation Case Study: Tumblr's Approach To Adult Content (2013)
Summary: There are unique challenges in handling adult content on a website, whether it's an outright ban, selectively allowed, cordoned off under content warnings, or (in some cases) actively encouraged. Tumblr's early approaches to dealing with adult content on its site are an interesting illustration of the interaction between user tagging and how a site's own tools interact with such tags.

Tumblr was launched in 2007 as a simple "blogging" platform that was quick and easy to set up, but would allow users to customize it however they wanted, and use their own domain names. One key feature of Tumblr that was different from other blogs was an early version of social networking features — such as the ability to "follow" other users and to then see a feed of those users you followed. While some of this was possible via early RSS readers, it was both technologically clunky and didn't really have the social aspect of knowing who was following you or being able to see both followers and followees of accounts you liked. Tumblr was also an early pioneer in reblogging — allowing another user to repost your content with additional commentary.

Because of this more social nature, Tumblr grew quickly among certain communities. This included communities focused on adult content. In 2013, it was reported that 11.4% of Tumblr's top domains were for adult content. In May of 2013, Yahoo bought Tumblr for $1.1 billion, with an explicit promise not to "screw it up." Many people raised concerns about how Yahoo would handle the amount of adult content on the site, but Tumblr's founder, David Karp, insisted that they had no intention of limiting such content.
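The tag-and-tool interaction the case study describes can be sketched in a few lines: posts carry user-applied tags, and a platform's "safe mode" simply excludes posts whose tags appear on a blocklist. This is a minimal illustration of the mechanism, not Tumblr's actual implementation (the tag names and field layout are hypothetical):

```python
# Hypothetical blocklist of user-applied tags that gate content.
NSFW_TAGS = {"nsfw", "adult"}

def safe_mode_feed(posts: list[dict]) -> list[dict]:
    """Return only posts carrying no blocklisted user-applied tag."""
    return [p for p in posts if not NSFW_TAGS & set(p["tags"])]

feed = [
    {"id": 1, "tags": ["art"]},
    {"id": 2, "tags": ["art", "nsfw"]},
]
print([p["id"] for p in safe_mode_feed(feed)])  # -> [1]
```

The obvious weakness, and a recurring moderation theme, is that filtering like this only works when users tag their own content honestly.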
Court Says Google Translate Isn't Reliable Enough To Determine Consent For A Search
The quickest way to a warrantless search is obtaining consent. But consent obtained by officers isn't always consent, no matter how it's portrayed in police reports and court testimony. Courts have sometimes pointed this out, stripping away ill-gotten search gains when consent turned out to be [extremely air quotation marks] "consent."

Such is the case in this court decision, brought to our attention by FourthAmendment.com. Language barriers are a thing, and it falls on officers of the law to ensure that those they're speaking with understand clearly what they're saying, especially when it comes to actions directly involving their rights.

It all starts with a stop. A pretextual one at that, as you can see by the narrative recounted by the court.
University Of Hong Kong Wants To Remove A Sculpture Commemorating Tiananmen; To Preserve It, People Have Crowdsourced A Digital 3D Replica
As Techdirt has chronicled, the political situation in Hong Kong becomes worse by the day, as the big panda in Beijing embraces a region whose particular freedoms were supposed to be guaranteed for another 25 years at least. One manifestation of the increasing authoritarianism in Hong Kong is growing censorship. The latest battle is over a sculpture commemorating pro-democracy protesters killed during China's 1989 crackdown in Tiananmen Square, and on display at the University of Hong Kong. South China Morning Post reports:
Prosecutors Drop Criminal Charges Against Fake Terrorist Who Duped Canadian Gov't, NYT Podcasters
For a couple of years, a prominent terrorist remained untouched by Canadian law enforcement. Abu Huzayfah claimed to have traveled to Syria in 2014 to join the Islamic State. A series of Instagram posts detailed his violent acts, as did a prominent, Peabody Award-winning New York Times podcast, "Caliphate."

But Abu Huzayfah, ISIS killer, never existed, something the Royal Canadian Mounted Police verified a year before the podcast began. Despite that, Ontario resident Shehroze Chaudhry -- who fabricated tales of ISIS terrorist acts -- remained a concern for law enforcement and Canadian government officials, who believed his alter ego was not only real, but roaming the streets of Toronto.

All of this coalesced into Chaudhry's arrest for the crime of pretending to be a terrorist. Chaudhry was charged with violating the "terrorism hoax" law, which is a real thing, even though it's rarely used. Government prosecutors indicated they intended to argue Chaudhry's online fakery caused real world damage, including the waste of law enforcement resources and the unquantifiable public fear that Ontario housed a dangerous terrorist.

Chaudhry was facing a possible sentence of five years in prison, which seems harsh for online bullshit, but is far, far less than charges of actual terrorism would bring. But it appears everything has settled down a bit and the hoaxer won't be going to jail for abusing the credulity of others, a list that includes Canadian government officials and New York Times podcasters.
Broadband Data Caps Mysteriously Disappear When Competition Comes Knocking
We've noted for years how broadband data caps (and monthly overage fees) are complete bullshit. They serve absolutely no technical function, and despite years of ISPs trying to claim they "help manage network congestion," that's never been remotely true. Instead they exist exclusively as a byproduct of limited competition. They're a glorified price hike by regional monopolies who know they'll see little (or no!) competitive or regulatory pressure to stop nickel-and-diming captive customers.

The latest case in point: Cox Communications employs a 1,280 GB data cap which, if you go over, requires that you either pay $30 per month more for an additional 500 GB, or upgrade your plan to an unlimited data offering for $50 more per month. While Cox's terabyte-plus cap is more generous than some U.S. offerings (which can be as low as a few gigabytes), getting caught up in whether the cap is "fair" is beside the point. Because, again, it serves absolutely no function other than to impose arbitrary penalties and additional monthly costs for crossing the technically unnecessary boundaries.

And, mysteriously, when wireless broadband providers begin offering fixed wireless service over 5G in limited areas, Cox lifts the restrictions completely to compete:
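Taking the Cox pricing as described above, the "penalty" a capped customer faces is simple arithmetic: $30 per extra 500 GB block, or $50 flat for unlimited, whichever is cheaper. A quick sketch of that math (the function and constant names are mine, not Cox's billing logic):

```python
import math

# Pricing as described in the post above.
CAP_GB = 1280
OVERAGE_BLOCK_GB = 500
OVERAGE_BLOCK_PRICE = 30  # dollars per extra 500 GB block
UNLIMITED_PRICE = 50      # dollars for the unlimited upgrade

def cheapest_overage_fee(usage_gb: int) -> int:
    """Extra monthly cost under the cap structure described above."""
    if usage_gb <= CAP_GB:
        return 0
    blocks = math.ceil((usage_gb - CAP_GB) / OVERAGE_BLOCK_GB)
    return min(blocks * OVERAGE_BLOCK_PRICE, UNLIMITED_PRICE)

print(cheapest_overage_fee(1500))  # -> 30 (one extra 500 GB block)
print(cheapest_overage_fee(2400))  # -> 50 (unlimited beats three blocks)
```

Which underlines the point: the fee tracks arbitrary block boundaries, not any actual network cost.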
Public Backlash Leads Tulsa Park To Stop Bullying Coffee Shop Over Trademark
A good public outcry and backlash can lead to many, many good things. We see it here at Techdirt all the time, particularly when it comes to aggressive bullying episodes over intellectual property. Some person or company will try to play IP bully against some victim, the public gets wind of it and throws a fit, and suddenly the supposed necessity of the IP action goes away. Retailers, manufacturers, breweries: public outcry is a great way to end ridiculous legal actions.

A recent example of this comes out of Tulsa, OK, where a riverside park of all places decided it had to sue a coffee shop over a similar, if fairly generic, name. Gathering Place is a park in Tulsa, a... place... where people... you know... gather. The Gathering Place is a coffee shop in Shawnee, 90 miles from Tulsa, where people get coffee and, I imagine, occasionally gather. But despite any gathering similarities, coffee shops are not parks and 90 miles is a fairly long way away. Which makes a lawsuit over trademark infringement brought by the park very, very strange.
LinkedIn Caves Again, Blocks US Journalists' Accounts In China
LinkedIn -- the business-oriented social media platform owned by Microsoft -- has spent the last few years increasing its compliance with the Chinese government's demands for censorship. A couple of years back, the network drew heat for not only blocking accounts of Chinese pro-democracy activists but also those of critics of the government located elsewhere in the world.

The blocking only occurred in China, but that was enough to cause PR trouble for LinkedIn, which restored some of the accounts following some deserved backlash. The Chinese government didn't care much for LinkedIn's temporary capitulations, so it turned up the heat. After LinkedIn failed to block enough content, the Chinese government ordered its local office to perform a self-audit and report on its findings to the country's internet regulator. LinkedIn was also blocked from signing up any new Chinese citizens for 30 days.

The pressure appears to have worked. China is again asking for censorship of voices it doesn't like. And, again, LinkedIn is complying. Here's the report from Bethany Allen-Ebrahimian of Axios, who was one of those targeted by the latest round of account blocking.
Prudish Mastercard About To Make Life Difficult For Tons Of Websites
For all the attention that OnlyFans got for its short-lived plan to ban sexually explicit content in response to "pressures" from financial partners, as we've discussed, it was hardly the only website to face such moderation pressures from financial intermediaries. You can easily find articles from years back highlighting how payment processors were getting deeply involved in forcing websites to moderate content.

And the OnlyFans situation wasn't entirely out of nowhere either. Back in April we noted that Mastercard had announced its new rules for streaming sites, and other sites, such as Patreon, have already adjusted their policies to comply with Mastercard's somewhat prudish values.

However, as those rules announced months ago are set to become official in a few days, the practical realities of what Mastercard requires are becoming clear, and it's a total mess. Websites have received "compliance packages" in which they have to set up a page to allow reports of potential abuse. In theory, this sounds reasonable -- if there really is dangerous or illegal activity happening on a site, making it easier for people to report it makes sense. But some of it is highly questionable:
In Latest Black Eye For NSO Group, Dubai's King Found To Have Used NSO Spyware To Hack His Ex-Wife's Phone
NSO Group has endured some particularly bad press lately, what with leaked data pointing to its customers' targeting of journalists, political figures, religious leaders, and dissidents. That its powerful spyware would be abused by its customers was not surprising. Neither were the findings from the leaked data, which only confirmed what was already known.

Despite this, NSO continues to make contradictory claims. First, it says it has no control over (or visibility into) how its customers use its products -- customers that include some notorious abusers of human rights. Second, it says that it cuts off customers who abuse its products to target people who merely annoy their governments, rather than directly threaten them with criminal or terrorist acts.

Well, it's either one or the other. And if NSO is waiting for secondhand reports about abusive deployments to act, it really shouldn't be in the intel business. If NSO wants to stay above the fray, it could start by being a lot more selective about who it sells to.

If you're not selective, your customers will not only pettily target people (critics, activists, journalists, dissidents) the government doesn't like but will move on to the extreme pettiness of targeting people certain government officials don't like.

This latest nadir for NSO Group comes courtesy of court proceedings, which illustrate the danger of putting powerful cellphone exploits in the hands of the wrong people.
Facebook's Nick Clegg Makes It Clear: If You're Looking To Undermine Section 230, That's EXACTLY What Facebook Wants
Facebook policy dude/failed UK politician Nick Clegg has written an op-ed for USA Today confirming what has been obvious to everyone who understands Section 230, but (for reasons I don't quite understand) seems obscured from basically every politician out there: Facebook wants to destroy Section 230. And it's practically giddy that politicians are so eager to grant it its wish, while pretending that doing so will somehow hurt Facebook.

It remains absolutely bizarre to me that many people still believe that getting rid of Section 230 (or even reforming it) is a way to "stop" or "hurt" Facebook. Section 230 is a protection for the users of the internet more than it is for the companies. By making it clear that companies are not liable for user speech, it makes more websites willing to host user speech, especially smaller ones which could easily be sued out of existence. Indeed, over the last couple of years, it's become clear that Facebook desperately wants to kill Section 230 because it knows that it alone has enough money to handle the liability, and removing Section 230 will really only burden the startups that threaten to take users away from Facebook.

A year and a half ago, Mark Zuckerberg made it clear that he was cool with getting rid of Section 230. Earlier this year, he suggested a "reform proposal" that effectively gutted 230 in extremely anti-competitive ways. And for months now, Facebook has blanketed DC (and elsewhere, but mostly DC) with commercials and ads that don't say Section 230, but refer obliquely to "comprehensive internet regulations" passed in "1996." That's Section 230 they're talking about.

This is why it's so ridiculous that the takeaway of some people from the Facebook whistleblower last week was that Section 230 needs to change. That's exactly what Facebook wants, because it will cement Facebook's dominant position and make it that much more difficult for competitors to emerge or succeed.

Here's what Clegg had to say:
Daily Deal: The 2022 Adobe Creative Cloud Training Bundle
The 2022 Adobe Creative Cloud Training Bundle has 7 courses to help you learn all about Adobe's creative software suite. You'll learn how to design apps in Adobe XD, how to master Photoshop, how to use animation in After Effects, how to design projects in Illustrator, and more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Google, Amazon, And Microsoft Are Using Third Party Companies To Sell Surveillance Tech To ICE, CBP
A few years ago, tech companies stood up to the US government, issuing statements objecting to immigration policies instituted by the Trump Administration and, in some cases, threatening to pull contracts with ICE (Immigration and Customs Enforcement) and CBP (Customs and Border Protection).

It wasn't much of a stand, however. And whatever statements were issued by companies like Google, Microsoft, and Amazon were mainly prompted by hundreds of employees who wished to work for companies that didn't aid and abet civil liberties violations and the ongoing mistreatment of immigrants and their families.

Whatever statements came out of the front end of these companies haven't been matched by the back end. According to a new report by Caroline Haskins for Business Insider, Google, Microsoft, and Amazon are still selling plenty of tech and software to ICE and CBP. They're just getting better at hiding it. (Alt link)
Charter Spectrum Threatens To Ruin Potential Customers Over Debt They Don't Owe
There's a reason U.S. cable and broadband companies have some of the worst customer satisfaction ratings of any companies, in any industry, in America. The one-two punch of lagging broadband competition and captured regulators generally means there's little to no meaningful penalty for overcharging users, providing lackluster services and support, and generally just being an obnoxious ass.

Case in point: a new Charter (which operates under the Spectrum brand) marketing effort apparently involves threatening to ruin the credit scores of ex-customers unless they re-subscribe to the company's services. It basically begins with a letter threatening ex-users that they'll be reported to debt collectors unless they sign up for service. It proceeds to inform them the letter is a "one-time courtesy" allowing them to sign up for cable or broadband service before the debt collector comes calling: