by Mike Masnick on (#5VRMA)
I really wasn't going to write anything about the latest Spotify/Joe Rogan/Neil Young thing. We've posted older case studies about content moderation questions regarding Rogan and Spotify, and we have an upcoming guest post in the works exploring one angle of the Rogan/Young debate. However, because it's now come up a few times, I did want to address one point and do a little explainer post: Spotify's decisions about Rogan (and Young and others) have absolutely nothing to do with Section 230. At all.

Now, we can blame Spotify a bit for people thinking it does, because (for reasons I do not understand, and for which both its lawyers and its PR people should be replaced) Spotify has tried to make this about "content moderation." Hours after Spotify's internal "content policy" leaked, the company put out a blog post officially releasing the policy... that had already leaked. And when you're talking about "content policy," it feels like the same old debates we've had about content moderation and trust and safety and "user generated content" websites and whatnot. But the decision to keep Rogan on the platform has nothing, whatsoever, to do with Section 230.

The only way Section 230 comes into play here is if Rogan did something that created an underlying cause of action -- such as defamation. Then, if the defamed individual chose to sue Spotify, Spotify could use Section 230 to get dismissed from the lawsuit, though the plaintiff could still sue Rogan. (If you want an analogous case: years back, AOL was sued over something Matt Drudge wrote -- after AOL had licensed the Drudge Report in order to distribute it to AOL users -- and the court said that Section 230 protected AOL from the lawsuit, though not Drudge himself.)

The thing is, no one (that I can find, at least) is alleging any actual underlying cause of action against Rogan here.
They're just arguing that somehow Section 230 is to blame for Spotify's decision to keep Rogan on its platform. But whether Spotify keeps Rogan or not has nothing to do with Section 230 at all. Spotify has every right to decide whether or not to keep Rogan, in the same manner that a book publisher gets to decide whether or not to publish a book by someone. And that right is protected by the 1st Amendment. If someone sued Spotify for "hosting Joe Rogan," Spotify would win easily -- not using Section 230, but for failure to state any actual claim, backed up by Spotify's 1st Amendment right to work with whatever content providers it wants (and not work with ones it doesn't).

Unfortunately, Spotify's founder Daniel Ek made matters even dumber yesterday by pulling out the mythical and entirely non-existent "platform/publisher" divide:
Techdirt
Link: https://www.techdirt.com/
Feed: https://www.techdirt.com/techdirt_rss.xml
Updated: 2025-10-04 18:32
by Daily Deal on (#5VRHM)
Looking for a great dash cam that records well in low light? Check out the GoSafe S780. With its revolutionary Sony Starvis sensor, the S780 delivers remarkable performance in those tricky dusk driving situations. Plus, thanks to its dual-channel system, you can record both the front and rear of your vehicle at the same time. It's on sale for $200.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5VRFK)
Over the last few months, I've been asking a general question which I don't know the answer to, but which I think needs a lot more research. It gets back to the issue of how much of the "bad" that many people insist is caused by social media (and Facebook in particular) actually is caused by social media, and how much of it is just social media shining a light on what was always there. I've suggested that it would be useful to have a more nuanced account of this, because it's become all too common for people to insist that anything bad they see talked about on social media was magically caused by social media (oddly, traditional media, including cable news, rarely gets this kind of treatment). The reality, of course, is likely that there is a mix of things happening, and they're not easily teased apart, unfortunately. So what I'd like to see is a more nuanced accounting of how much of the "bad stuff" we see online is (1) just social media reflecting back bad things that have always been there, but which we were less aware of, as opposed to (2) enabled by social media connecting and amplifying the people spreading the bad stuff. On top of that, I think we should similarly be comparing how social media has also connected tons of people for good purposes -- and see how much of that happens as compared to the bad.

I'm not holding my breath for anyone to actually produce this research, but I did find a recent Charlie Warzel piece very interesting, and worth reading, in which he suggests (with some interesting citations) that social media disproportionately encourages the miserable to connect with each other and egg each other on. It's a very nuanced piece that does a good job highlighting the competing incentives at work, and notes that part of the reason there's so much garbage online is that there's tremendous demand for it:
by Karl Bode on (#5VR3T)
For years we've noted how broadband providers impose all manner of bullshit fees on your bill to drive up the cost of service post-sale. They've also historically had a hard time being transparent about what kind of broadband connection you're buying. That was evident back when Comcast thought it would be a good idea to throttle all upstream BitTorrent traffic (without telling anybody), or AT&T decided to cap and throttle the usage of its "unlimited" wireless users (without telling anybody), or Verizon decided to modify user packets to track its customers around the internet (without telling anybody).

Maybe you see where I'm going with this.

Back in 2016 the FCC eyed a requirement that broadband providers provide a sort of "nutrition label" for broadband. The idea was that this label would clearly disclose speeds, throttling, limitations, sneaky fees, and all the stuff big predatory ISPs like to bury in their fine print (if they disclose it at all). This was the example image the FCC circulated at the time:

While the idea was scuttled by the Trump administration, Congress demanded the FCC revisit it as part of the recent infrastructure bill. So the Rosenworcel FCC last week, as instructed by Congress, voted 4-0 to begin exploring new rules:
by Timothy Geigner on (#5VQP6)
A couple of weeks back we asked the question: is the video game industry experiencing an age of hyper-consolidation? The answer increasingly looks to be "yes." That post was built off a pair of Microsoft acquisitions: ZeniMax for $7.5 billion and then a bonkers acquisition of Activision Blizzard King for roughly $69 billion. Whereas consolidation in industries is a somewhat regular thing, what caused my eyes to narrow was all of the confused communications coming out of Microsoft as to how the company would handle these properties when it came to exclusivity on Microsoft platforms. It all went from vague suggestions that the status quo would be the path forward to, eventually, the announcement that some (many?) titles would in fact be Microsoft exclusives.

So, back to my saying that consolidation does seem to be the order of the day: Sony recently announced it had acquired game studio Bungie for $3.6 billion.
by Tim Cushing on (#5VQJ8)
Israeli malware purveyor NSO Group may want to consider changing its company motto to "No News Is Good News." The problem is there's always more news.

The latest report from Calcalist shows NSO is aiding and abetting domestic abuse. No, we're not talking about the king of Dubai deploying NSO's Pegasus spyware to keep tabs on his ex-wife and her lawyer. This is all about how the government of Israel uses NSO's phone hacking tools. And that use appears to be, in two words, extremely irresponsible.
by Karl Bode on (#5VQEJ)
Media and telecom giants have been desperately trying to stall the nomination of Gigi Sohn to the FCC. Both want to keep the Biden FCC gridlocked at 2-2 commissioners, thanks to the rushed late-2020 Trump appointment of Nathan Simington to the Commission. Both industries most assuredly don't want the Biden FCC to do popular things like restore the FCC's consumer protection authority, net neutrality, or media consolidation rules. But because Sohn is so popular, they've had a hell of a time coming up with any criticisms that make coherent sense.

One desperate claim being spoon-fed to GOP lawmakers is that Sohn wants to "censor conservatives," despite the opposite being true: Sohn has considerable support from conservatives for protecting speech and fostering competition and diversity in media (even if she disagrees with them). Another lobbying talking point being circulated is that because Sohn briefly served on the board of the now-defunct Locast, she's somehow incapable of objectively regulating things like retransmission disputes. Despite the claim being a stretch, Sohn has agreed to recuse herself from such issues for the first three years of her term.

Hoping to seize on the opportunity, former FCC boss turned top cable lobbyist Mike Powell is now trying to claim that because Sohn has experience working on consumer protection issues at both Public Knowledge and the FCC (she helped craft net neutrality rules under Tom Wheeler), she should also be recused from anything having to do with telecom companies. It's a dumb Hail Mary from a revolving-door lobbyist whose only interest is in preventing competent oversight of clients like Comcast:
by Mike Masnick on (#5VQCF)
For decades here on Techdirt I've argued that competition is the biggest driver of innovation, so I'm very interested in policies designed to drive more competition. Historically this has meant antitrust policy, but over the past decade or so it feels like antitrust policy has become less and less about competition, and more and more about punishing companies that politicians dislike. We can debate whether or not consumer welfare is the right standard for antitrust -- I think there are people on both sides of that debate who make valid points -- but I have significant concerns about any antitrust policy that seems deliberately designed to make consumers worse off.

That's why I'm really perplexed by the recent push to pass the “American Innovation and Choice Online Act” from Amy Klobuchar, which, for the most part, doesn't seem to be about increasing competition, innovation, or choice. It seems almost entirely punitive, punishing not just the very small number of companies it targets, but everyone who uses those platforms.

There's not much I agree with Michael Bloomberg about, but I think his recent opinion piece on the AICOA bill is exactly correct.
by Tim Cushing on (#5VQ7Y)
Way back in 2014, Oklahoma state senator (and former police officer) Al McAffrey had an idea: what if cops could issue traffic tickets electronically, without ever having to leave the safety and comfort of their patrol cars?

The idea behind it was officer safety. This would keep officers from standing exposed on open roads and/or interacting face-to-face with a possibly dangerous driver. The public's safety was apparently low on the priority list, since this lack of interaction could permit impaired drivers to continue driving, or allow actually dangerous people to drive away from a moving violation to do more dangerous things elsewhere.

It also would allow law enforcement agencies to convert drivers to cash more efficiently by speeding up the process and limiting things that might slow down the revenue stream, like having actual conversations with drivers. On the more positive side, it would also lower the chance of a traffic stop turning deadly (either for the officer or the driver) by limiting personal interactions that might result in the deployment of excessive or deadly force. And it would also limit the number of pretextual stops by preventing officers from claiming to have smelled something illegal while conducting the stop.

Up to now, this has only been speculative legislation. But it's becoming a reality, thanks to government contractor Trusted Driver. Run by former police officer Val Garcia, the program operates much like the TSA's Trusted Traveler program: users create accounts, enter personal info, and then receive traffic citations via text message.

The program is debuting in Texas, where drivers who opt in will start being texted by cops when they've violated the law.
by Daily Deal on (#5VQ7Z)
The 2022 FullStack Web Developer Bundle has 11 courses to help you step up your game as a developer. You'll learn frontend and backend web technologies like HTML, CSS, JavaScript, MySQL, and PHP. You'll also learn how to use Git and GitHub, Vuex, Docker, Ramda, and more. The bundle is on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5VQ5R)
Whatever the (I'd argue unfortunate) politics behind Stephen Breyer's decision to retire as a Supreme Court Justice at the conclusion of this term, it is notable around here for his views on copyright. Breyer has generally been seen as the one Justice on the court most open to the idea that overly aggressive copyright policy is dangerous and potentially unconstitutional. Perhaps ironically, given that they are often lumped together on the overly simplistic "left/right" spectrum, Justices Breyer and Ginsburg represented somewhat opposite ends of the copyright spectrum. Ginsburg was consistently a voice in favor of expanding copyright law to extreme degrees, while Breyer seemed much more willing to recognize that the rights of users -- including fair use -- were extremely important.

If you want to see that clearly, read Ginsburg's majority opinion in the Eldred case (on whether or not copyright term extension is constitutional) as compared to Breyer's dissent. To this day I believe that 21st century copyright law would have been so much more reasonable, and so much more for the benefit of the public, if Breyer had been able to convince others on the court of his views. As Breyer notes in his dissent, a copyright law that does not benefit the public should not be able to survive constitutional scrutiny:
by Karl Bode on (#5VPTH)
Back in 2015, frustration at John Deere's draconian tractor DRM helped birth a grassroots tech movement dubbed "right to repair." The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM (and the company's EULA) prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair (which for many owners involved hauling tractors hundreds of miles and shelling out thousands of additional dollars), or toying around with pirated firmware just to ensure the products they owned actually worked.

Seven years later, this movement is only growing. This week Senator Jon Tester said he was introducing new legislation (full text here, pdf) that would require tractor and other agricultural hardware manufacturers to make manuals, spare parts, and software access codes publicly available:
by Timothy Geigner on (#5VPB7)
Hopefully, you will recall our discussion about one YouTuber, Totally Not Mark, suddenly getting flooded with 150 copyright claims on his YouTube channel all at once from Toei Animation. Mark's channel is essentially a series of videos that discuss, critique, and review anime. Toei Animation produces anime, including the popular Dragon Ball series. While notable YouTuber PewDiePie weighed in with some heavy criticism over how YouTube protects its community in general from copyright claims, the real problem here was one of location. Mark is in Ireland, while Toei Animation is based out of Japan. Japan has terrible copyright laws when it comes to anything resembling fair use, whereas Ireland is governed by fair dealing laws. In other words, Mark's use was just fine in Ireland, where he lives, but would not be permitted in Japan. Since YouTube is a global site, takedowns have traditionally been global.

Well, Mark has updated the world to note that he was victorious in getting his videos restored and cleared, with a YouTube rep working directly with him on this.
by Leigh Beadon on (#5VP8C)
Last night at midnight, we reached the end of Gaming Like It's 1926, our fourth annual public domain game jam celebrating the new works that entered the public domain this year. At final count, we got 31 entries representing a huge variety of different kinds of digital and analog games!

For the next couple of weeks, we'll be digging into all the games and selecting the winners in our six categories — but there's no need to wait before playing! You can check out all the entries on itch.io:

At first glance (and having poked around in a couple of the early entries) I can already tell it's going to be tough to narrow these down to just six winners — there are lots of games here that do fun and interesting things with public domain works. As in past years, once we've selected and announced the winners we'll discuss each one in detail in a podcast and a series of posts.

Until then, a huge thanks to everyone who participated this year, and also to everyone who takes some time to play the games and give these designers the attention they deserve!
by Tim Cushing on (#5VP5D)
Somewhere between the calls to end encryption and the calls to do literally anything about crime rate spikes at this time of year, at this time of day, in [insert part of the country], localized entirely within [add geofence], lies the reality of law enforcement. While many continue to loudly decry the advent of by-default encryption, the reality of the situation is that people are generating more data and content than ever. And most of it is less than a warrant away.

While certain suspect individuals continue to proclaim encryption will result in an apocalypse of criminal activity, others are reaping the benefits of always-on internet interactivity. Clearview, for example, has compiled a database of 10 billion images by doing nothing more than scraping the web, grabbing everything that's been made public by an extremely online world population.

You want facial images free of charge and with no Fourth Amendment strings attached? You need look no further than the open web, which has all the faces you want and almost none of the attendant restrictions. "Going dark" is for chumps who don't know how to leverage the public's willingness to share almost anything with the rest of the internet.

The Chicago PD knows who's keeping the internet bread buttered and which side they're on. A report from Business Insider (written by Caroline Haskins) highlights an internal CPD presentation that makes it explicit that cops have gained plenty from the rise of social media platforms, easily outweighing whatever subjective losses end-to-end encryption may have recently created.
by Leigh Beadon on (#5VP8D)
As many of you know, last week we hosted an online event for the latest Techdirt Greenhouse edition, all about looking back on the lessons learned from the 2012 protests against SOPA and PIPA. Our special guest was Rep. Zoe Lofgren, one of the strongest voices in Congress speaking out against the disastrous bills, who provided all kinds of excellent insight into what happened then and what's happening now. In case you missed it, for this week's episode of the podcast (yes, we're finally back with new episodes!) we've got the full conversation and Q&A from the event.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
by Mike Masnick on (#5VNY7)
I already wrote a long post earlier about the very, very real problems with the EARN IT Act -- namely that it would make the problem of child sexual abuse material significantly worse by repeating the failed FOSTA playbook, and that it would attack encryption by making it potential "evidence" in a case against a tech company for any CSAM on its site. But along with the bill, its sponsors, Senators Richard Blumenthal and Lindsey Graham, released a "Myth v. Fact" document to try to counter the criticisms of EARN IT. Unfortunately, the document presents an awful lot of "myths" as "facts." And that's a real problem.

The document starts out noting, correctly:
by Mike Masnick on (#5VNVM)
You may recall the terrible and dangerous EARN IT Act from two years ago, a push by Senators Richard Blumenthal and Lindsey Graham to chip away more at Section 230 and to blame tech companies for child sexual abuse material (CSAM). When it was initially introduced, many people noticed that it would undermine both encryption and Section 230 in a single bill. While the supporters of the bill insisted that it wouldn't undermine encryption, the nature of the bill clearly set things up so that you either needed to abandon encryption or to spy on everything. Eventually, the Senators were persuaded to adopt an amendment from Senator Patrick Leahy to more explicitly exempt encryption from the bill, but it was done in a pretty weak manner. That said, the bill still died.

But 2022, like 2020, is an election year, and in an election year some politicians just really want to get their names in headlines about how they're "protecting the children" -- and Senator Richard Blumenthal loves the fake "protecting the children" limelight more than most other Senators. And thus he has reintroduced the EARN IT Act, claiming (falsely) that it will somehow "hold tech companies responsible for their complicity in sexual abuse and exploitation of children." This is false. It will actually make it more difficult to stop child sexual abuse, but we'll get there. You can read the bill text here, and note that it is nearly identical to the version that came out of the 2020 markup process with the Leahy Amendment, with a few very minor tweaks. The bill has a lot of big-name Senators from both parties as co-sponsors, suggesting that this bill has a very real chance of becoming law.
And that would be dangerous.

If you want to know just how bad the bill is: I found out about the reintroduction of the bill -- before it was announced anywhere else -- via a press release sent to me by NCOSE, formerly "Morality in Media," the busybody organization of prudes who believe that all pornography should be banned. NCOSE was also a driving force behind FOSTA -- the dangerous law with many similarities to EARN IT that (as we predicted) did nothing to stop sex trafficking, and plenty to increase the problem of sex trafficking, while putting women in danger and making it more difficult for the police to actually stop trafficking.

Amusingly (?!?), NCOSE's press release tells me both that without EARN IT tech platforms "have no incentive to prevent" CSAM, and that in 2019 tech platforms reported 70 million CSAM images to NCMEC. They use the former to insist that the law is needed, and the latter to suggest that the problem is obviously out of control -- apparently missing the fact that the latter actually shows how the platforms are doing everything they can to stop CSAM on their platforms (and others!) by following existing laws and reporting it to NCMEC, where it can be put into a hash database and shared and blocked elsewhere.

But facts are not what's important here. Emotions, headlines, and votes in November are.

Speaking of the lack of facts: along with the bill, they also released a "myth v. fact" sheet which is just chock full of misleading and simply incorrect nonsense. I'll break that down in a separate post, but as one key example, the document leans heavily on the fact that Amazon sends a lot fewer reports of CSAM to NCMEC than Facebook does. But if you think for more than 3 seconds about it (and aren't just grandstanding for headlines) you might notice that Facebook is a social media site and Amazon is not. It's comparing two totally different types of services.

However, for this post I want to focus on the key problems of EARN IT.
In the very original version of EARN IT, the bill created a committee to study whether exempting CSAM from Section 230 would help stop CSAM. Then it shifted to the form it's in now: the committee still exists, but the bill skips the part where the committee has to determine whether chipping away at 230 will help, and simply includes that carve-out as a key part of the bill. The 230 part mimics FOSTA (which, again, has completely failed to do what it claimed and has made the actual problems worse), in that it adds a new exemption to Section 230 for CSAM claims.

EARN IT will make the CSAM problem much, much worse.

At least in the FOSTA case, supporters could (incorrectly and misleadingly, as it turned out) point to Backpage as an example of a site that had been sued for trafficking and used Section 230 to block the lawsuit. But here... there's nothing. There really aren't examples of websites using Section 230 to try to block claims over child sexual abuse material. So it's not even clear what problem these Senators think they're solving (unless the problem is "not enough headlines during an election year about how I'm protecting the children").

The best they can say is that companies need the threat of law to report and take down CSAM. Except, again, pretty much every major website that hosts user content already does this. This is why groups like NCOSE can trumpet "70 million CSAM images" being reported to NCMEC: because all of the major internet companies actually do what they're supposed to do.

And here's where we get into one of the many reasons this bill is so dangerous. It totally misunderstands how Section 230 works, and in doing so (as with FOSTA) it is likely to make the very real problem of CSAM worse, not better. Section 230 gives companies the flexibility to try different approaches to dealing with various content moderation challenges.
It allows for greater experimentation and adjustment as they learn what works -- without fear of liability for any "failure." Removing Section 230 protections does the opposite. It says that if you do anything, you may face crippling legal liability. This actually makes companies less willing to do anything that involves trying to seek out, take down, and report CSAM, because of the greatly increased liability that comes with admitting that there is CSAM on your platform to search for and deal with.

EARN IT gets the problem exactly backwards. It disincentivizes action by companies, because the vast majority of actions will actually increase rather than decrease liability. As Eric Goldman wrote two years ago, this version of EARN IT doesn't penalize companies for CSAM; it penalizes them for (1) not magically making all CSAM disappear, (2) knowing too much about CSAM (i.e., incentivizing them to stop looking for it and taking it down), or (3) not exiting the industry altogether (as we saw a bunch of dating sites do post-FOSTA).

EARN IT is based on the extremely faulty assumption that internet companies don't care about CSAM and need more incentive to care. The real problem is that CSAM has always been a huge problem, and stopping it requires actual law enforcement work focused on the producers of that content. By threatening internet websites with massive liability if they make a mistake, the bill actually makes law enforcement's job harder, because companies will be less able to work with law enforcement. This is not theoretical. We already saw exactly this problem with FOSTA, where multiple law enforcement agencies have said that FOSTA made their job harder because they can no longer find the information they need to stop sex traffickers.
EARN IT creates the exact same problem for CSAM.

So the end result is that, by misunderstanding Section 230 and by misunderstanding internet companies' existing willingness to fight CSAM, EARN IT will undoubtedly make the CSAM problem worse: it will make it more difficult for companies to track CSAM down and report it, and more difficult for law enforcement to track down and arrest those actually responsible for it. It's a very, very bad and dangerous bill -- and that's before we even get to the issue of encryption!

EARN IT is still very dangerous for encryption

EARN IT supporters claim they "fixed" the threat to encryption in the original bill by using text similar to Senator Leahy's amendment, saying that the use of encryption cannot "serve as an independent basis for liability." But the language still puts encryption very much at risk. As we've seen, the law enforcement/political class is very quick to (falsely) blame encryption for CSAM. And by saying that encryption cannot serve as "an independent basis" for liability, the bill still leaves the door open to using it as one piece of evidence in a case under EARN IT.

Indeed, one of the changes from the 2020 bill is that immediately after saying encryption can't be an independent basis for liability, it adds a new section that wasn't there before, which effectively walks back the encryption-protecting language. The new section says: "Nothing in [the part that says encryption isn't a basis for liability] shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph if the evidence is otherwise admissible." In other words, as long as anyone bringing a case under EARN IT can point to something that is not related to encryption, they can point to the use of encryption as additional evidence of liability for CSAM on the platform.

Again, the end result is drastically increased liability for the use of encryption.
While no one will be able to use encryption alone as evidence, as long as they point to one other thing -- such as a failure to find a single piece of CSAM -- they can bring the encryption evidence back in and suggest (incorrectly) some sort of pattern or willful blindness.

And this doesn't even touch on what will come out of the "committee" and its best practices recommendations, which very well might include an attack on end-to-end encryption.

The end result is that (1) EARN IT attacks a problem that doesn't exist (the use of Section 230 to avoid responsibility for CSAM), (2) EARN IT will make the actual problem of CSAM worse by making it much more risky for internet companies to fight CSAM, and (3) EARN IT puts encryption at risk by increasing the liability risk of any company that offers encryption.

It's a bad and dangerous bill, and the many, many Senators supporting it for kicks and headlines should be ashamed of themselves.
by Daily Deal on (#5VNVN)
The Stellar Utility Software Bundle has what you need to recover data, reinforce security, erase sensitive documents, and organize photos. It features Stellar Data Recovery Standard Windows, Ashampoo Backup Pro 15, Ashampoo WinOptimizer 19, InPixio Photo Editor v9, Nero AI Photo Tagger Manager, and BitRaser File Eraser. It is on sale for $39.95.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Tim Cushing on (#5VNSC)
Tech company ID.me has made amazing inroads with government customers over the past several months. Some of this is due to unvetted claims by the company's CEO, Blake Hall, who has asserted (without evidence) that the federal government lost $400 billion to fraudulent COVID-related claims in 2020. He also claimed (without providing evidence) that ID.me's facial recognition tech was sturdy, sound, accurate, and backstopped by human review.

These claims were made after it became apparent the AI was somewhat faulty, resulting in people being locked out of their unemployment benefits in several states. This was a problem, considering ID.me was now being used by 27 states to handle disbursement of various benefits. And it was bound to get worse, if for no other reason than that ID.me would be expected to handle an entire nation of beneficiaries, thanks to its contract with the IRS.

The other problem is the CEO's attitude towards reported failures. He has yet to produce anything that backs up his $400 billion fraud claim, and when confronted with mass failures at the state level, he has chosen to blame these on the actions of fraudsters, rather than on people simply being denied access to benefits due to imperfect selfies.

Another of Hall's claims has now resulted in a walk-back, prompted by increased scrutiny of his company's activities. First, the company's AI has never been tested by an outside party, which means any accuracy claims should be given some serious side-eye until they've been independently verified.

But Hall also claimed the company wasn't using any existing databases to match faces, insinuating the company relied on 1:1 matching to verify someone's identity.
But this couldn't possibly be true for all benefit seekers: many who had never previously uploaded a photo to the company's servers were rejected when ID.me claimed it couldn't find a match.

It's obvious the company was using 1:many matching, which carries with it a bigger potential for failure, as well as the inherent flaws of almost all facial recognition tech: the tendency to be less reliable when dealing with women and minorities.

This increased outside scrutiny of ID.me has forced CEO Blake Hall to come clean. And it started with his own employees pointing out how continuing to maintain this line of "1-to-1" bullshit would come back to haunt the company. Internal chats obtained by CyberScoop show employees imploring Hall to be honest about the company's practices before his dishonesty caused it any more damage.
by Karl Bode on (#5VNFP)
Another day, another privacy scandal that likely ends with nothing changing.

Crisis Text Line, one of the nation's largest nonprofit support options for the suicidal, is in some hot water. A Politico report last week highlighted how the organization has been caught collecting and monetizing the data of users... to create and market customer service software. More specifically, Crisis Text Line says it "anonymizes" some user and interaction data (ranging from the frequency certain words are used, to the type of distress users are experiencing) and sells it to a for-profit partner named Loris.ai. Crisis Text Line has a minority stake in Loris.ai, and gets a cut of its revenues in exchange.

As we've seen in countless privacy scandals before this one, the idea that this data is "anonymized" is once again held up as some kind of get-out-of-jail-free card:
by Tim Cushing on (#5VMX3)
Breathalyzers are like drug dogs and field tests: they're considered infallible right up until they're challenged in court. Once challenged, the evidence seems to indicate all of the above are basically coin tosses the government always claims to win. They're good enough for a search or an arrest when examined only by an interested outsider who's been subjected to warrantless searches and possibly bogus criminal charges. But when the evidentiary standard is a little more rigorous than roadside stops, probable cause assertions seem to start falling apart.

Drug dogs are only as good as their handlers. They perform probable cause tricks in exchange for praise and treats. Field drug tests turn bird poop and donut crumbs into probable cause with a little roadside swirling of $2-worth of chemicals. And breathalyzers turn regular driving into impaired driving with devices that see little in the way of calibration or routine maintenance.

Courts have seldom felt compelled to argue against law enforcement expertise and training, even when said expertise/training relies on devices never calibrated or maintained -- and even when said devices are capable of depriving people of their freedom.

Once in a while, though, courts take notice of weak assertions of probable cause -- ones almost entirely supported by cop tools that remain untested and unproven. Late last year, a state judge issued an order forbidding the use of breathalyzer results as evidence in impaired driving prosecutions. District court judge Robert Brennan said he had numerous concerns about the accuracy of the tests, the oversight of testing, and the testing of test equipment by the Massachusetts Office of Alcohol Testing.
by Mike Masnick on (#5VMM4)
Back in 2020, we had a post explaining that Section 230 isn't why Omegle has awful content, and that getting rid of Section 230 wouldn't change that. Omegle, if you don't know, is a service that matches people, randomly, into video chats. It's basically the same thing as Chatroulette, which got super famous for a very brief period of time years ago. Both services are somewhat infamous for the unfortunately high likelihood of randomly ending up in a "chat" with some awful dude masturbating on the other side of the screen. But, still, there are a lot of people who like using it just for random chats. I have friends who are entertainers who like to use it to test out material on random people. It has a purpose. But, sure, there are some awful people on the site, as on many sites. And content moderation of live video chat is quite a challenge.

For reasons I don't quite understand, some people blame Section 230 for the bad people on Omegle, and there have been a few recent lawsuits that try to get around Section 230 and still hold Omegle liable for the fact that bad people use the site. As others have explained in great detail, if these lawsuits succeed, they would do tremendous harm to online speech. We've discussed all the reasons why in the past -- pinning liability on an intermediary for the speech of its users is the surest way to stifle all sorts of important speech online.

So, it's good news to see that one of the first such cases against Omegle was recently dismissed on Section 230 grounds -- and rather easily at that (story first noted by Eric Goldman). The case involved a situation which is, quite clearly, terrible. It involved what's apparently known as "a capper." As explained in the ruling:
by Tim Cushing on (#5VMJQ)
The latest disturbing revelation about Israeli malware merchant NSO Group is a bit delayed. NSO has claimed its malware can't be used to target American phone numbers -- which, even if true, hasn't stopped the malware from targeting Americans.

But two years before NSO's malware malfeasance made headlines around the world, the company was inside the United States, demonstrating its products for federal law enforcement. The latest revelations come via Ronen Bergman and Mark Mazzetti, writing for the New York Times.
by Mike Masnick on (#5VMDQ)
I know that people who identify tribally as Democrats or Republicans often like to accuse the other team of being especially censorial, but the unfortunate fact is that elected officials in both parties seem equally interested in using the power of the state to take away 1st Amendment rights. For every misguided effort by Florida, Texas, or Georgia to attack the 1st Amendment rights of websites, we see a Colorado or New York going in the other direction.

For every Republican bill in Congress demanding censorship of some types of content, you have Democrats seeking to censor other kinds of content. You can argue that the reasons behind one side's wish to censor are more pure than the other side's, but that's not how the 1st Amendment works -- and anyone who doesn't realize how easily (and widely) any of these laws would be abused by the other side has not paid any attention to the history of how speech-suppressive laws work.

Entering into this fray, we have Washington state Governor Jay Inslee, who, at the beginning of the year, announced plans for a law to criminalize "false speech" about elections.
by Daily Deal on (#5VMDR)
This all-in-one PDF Converter Pro software enables you to convert PDF documents into a variety of formats, or process and create PDF files from other formats, in just a few clicks. High quality output is ensured, as all the original layouts, images, texts, hyperlinks, etc. will be preserved without any quality loss. It's on sale for $30.
by Tim Cushing on (#5VMBJ)
A report published by Google's transparency team last August made it clear reverse warrants weren't a law enforcement fad, but rather a trend. Google is the recipient of pretty much every so-called reverse (or geofence) warrant issued, thanks to its vast stores of location info. When cops have a crime but no likely suspect, they have the option of turning everyone with a cell phone in the area into a suspect and working their way backwards from this list of data to find the most likely suspects.

Google's report showed an exponential escalation in geofence warrant deployments.
by Karl Bode on (#5VM3D)
For several years the wireless industry has been hyping fifth-generation wireless (5G) as something utterly transformative. For this whole stretch we've been subjected to claims about how the wireless standard would revolutionize smart cities, transform the way we live, result in unbridled innovation, and even help us cure cancer (doctors have told me it won't actually do that, if you're interested).

But in reality, when 5G arrived, it was a bit underwhelming -- at least in the United States, where speeds were dramatically lower than overseas deployments due to our failure to make middle-band spectrum widely available, and where prices remain some of the highest in the developed world thanks in large part to consistent consolidation and regulatory capture.

Yeah, 5G is important. But not in any sexy way. It provides significantly faster speeds and lower latency over more reliable networks. Which is a good thing. But it's more evolution than revolution. Consumers are generally happy with 4G speeds, and most consumer surveys suggest the things they want most are better coverage (which U.S. 5G has struggled to provide because middle-band spectrum was scarce) and price cuts.

Hoping to excite consumers and regulators, wireless carriers have been desperate to come up with marketing that frames 5G as utterly transformative. Usually this involves taking something you can already do over 4G or Wi-Fi, attaching 5G to it, and calling it a miracle. Like watching concerts (which you can already do) over 5G. Or getting a tattoo remotely (which you could technically already do over wired, Wi-Fi, or 4G broadband).

While 5G hype had slowed a bit in the last six months, the wireless industry jumped back into the fray with a sponsored report claiming that 5G will soon dramatically aid the fight against climate change. The industry study (which was quickly picked up and parroted by loyal telecom trade magazines) insists that 5G will quickly help the U.S. meet its climate goals (which most climate experts say were already woefully undercooked):
by Leigh Beadon on (#5VKB7)
This week, our first place winner on the insightful side is That One Guy with a comment about the police and their hysterical messaging about the supposed on-the-job dangers of coming anywhere near fentanyl:
by Leigh Beadon on (#5VJJX)
Five Years Ago

This week in 2017, outgoing FCC boss Tom Wheeler had a message for Trump supporters about the benefits of net neutrality, while cable's congressional allies were lining up to urge Ajit Pai to kill the cable box competition plan. Trump was muzzling federal employees and seeking to trademark "Make America Great Again". Virginia was pushing a protectionist law preventing better broadband, garnering opposition from internet companies. Meanwhile, a judge allowed the lawsuit over PACER fees to continue as a class action, Perfect 10 suffered another loss in court that set more good copyright precedent, and in unfortunate news, a state appeals court said unlocking a phone with a fingerprint doesn't violate the Fifth Amendment.

Ten Years Ago

This week in 2012, many on the internet were celebrating the victory over SOPA, getting bolder in calling out the MPAA for lying and opposing Chris Dodd — while SOPA supporters were busy whining, offering fake olive branches, and making up threats. But much attention was also already turning to new issues: the Megaupload shutdown, which was causing other companies to turn off useful services (and leading one astroturf group to embarrass itself with a late press release claiming SOPA was necessary to shut Megaupload down), and the ACTA agreement, which was getting the SOPA treatment in Poland with huge crowds on the street and politicians donning Guy Fawkes masks.

Fifteen Years Ago

In 2007, the most controversial thing about the MPAA in the eyes of the average person was their movie rating system — and this week they finally agreed to make some small changes to it. The RIAA was telling the CEA to do the impossible and stop making them look evil, while record labels were talking up the idea of getting paid for giving consumers rights they already have. Blu-ray's DRM was cracked (even the creators admitted it) while Apple's DRM was facing legal issues in Norway. And Fox's "piracy czar" was subpoenaing YouTube to find out who was uploading episodes of 24 and The Simpsons. Also, we got an extremely important Section 230 ruling in a case against Yahoo.
by Timothy Geigner on (#5VHZZ)
As you might expect, Nike often finds itself involved in intellectual property stories. To be fair, the company has been on both sides of the IP coin. There are plenty of stories of Nike playing IP bully: the whole Satan Shoes dustup with MSCHF, its lawsuit-happy practice when it comes to counterfeits, and so on. But the company has also found itself on the receiving end of IP action, sometimes very much deserved, sometimes not so much.

Among the company's most guarded IP is the trademark it holds on its famous motto: JUST DO IT. Nike has gone after companies, typically during the trademark application process, whenever there is an attempt to trademark a "Just [word] it" phrase. Most of that action has centered around apparel or athletic companies. But now, a business that produces succulent plant arrangements, largely advertised on TikTok, has found its trademark application for "JustSuccIt" opposed by Nike.
by Leigh Beadon on (#5VHXS)
Gaming Like It's 1926: The Public Domain Game Jam

By now, you've probably heard about Gaming Like It's 1926, our fourth annual public domain game jam celebrating the new works that entered the public domain this year. The clock is ticking on the jam, but there's still time — entries are due by January 31st, which means you've got the weekend to put something together if you sign up now and get started!

(If you need some ideas on how to make a game quickly, check out Story Synth, created by our partner in running these game jams, Randy Lubin.)

The jam is open to both digital and analog games (be sure to read over the full requirements on the jam page). There are lots of interesting works entering the public domain this year, including:
by Nicholas Anthony on (#5VHWP)
It took nearly a year, but the Federal Reserve has finally released its report on central bank digital currencies (CBDCs). The report fails to live up to the Fed's hype. If anything, it shows a CBDC is a solution in search of a problem.

The 40-page report contains so little information, it makes you wonder what the Fed has been working on for all this time. To be fair, the report does offer an idea of how the Fed envisions a CBDC taking shape, but its vision is a bad one. The Fed may have finally made good on its promise to deliver a report, but it has a long road ahead if it intends to deliver a CBDC.

The Fed's desired approach is for a CBDC that would be "privacy-protected, intermediated, widely transferable, and identity-verified." That might sound good at first glance, but a closer look reveals that this approach is really quite unfortunate.

Protecting the privacy of the American people has been one of the greatest concerns around the design of a CBDC. So it makes sense that privacy was listed first. However, at the opposite end of the list, the Fed hedges on that promise with just two words: "identity-verified." Essentially, this means the Fed has abandoned the idea of crafting a CBDC that would act as a digital form of cash. It means people will need to have their identities verified before using the CBDC so that the Fed can keep a record of their transactions. Where cash offers Americans the freedom to make financial decisions in private -- a freedom that should be protected by the Fourth Amendment -- the Fed's CBDC would likely be just another avenue for information collection.

What's more, it's unclear what real benefits the Fed's CBDC would have for consumers. In the report, the Fed stated that a CBDC could improve the speed of payments, financial inclusion, and the dollar's international status.
But those are all areas that are being addressed through other endeavors -- endeavors that will likely be completed before a CBDC reaches the market.

For example, both the private and public sectors have been developing networks to speed up payments. For financial inclusion, survey data from the FDIC has found that the number of unbanked households decreases every year as technology makes banking more accessible. That rate of improvement will likely only increase as private sector initiatives to help the unbanked (e.g., BankOn) continue to get off the ground. Finally, every positive step for the dollar will improve its international status. A CBDC might help the United States keep up with the Joneses, but it's not unique in its ability to improve the dollar's status. Moreover, it is highly unlikely that a CBDC is a necessary requirement to compete on the world stage. People are not going to flock to the Chinese yuan or the Nigerian naira simply because they've "gone digital."

A CBDC may be an exciting prospect for central banks, but the Fed is going to need a much more robust set of benefits if it is going to justify experimenting with the money in people's wallets.

Just before the report's release, Fed Chair Jerome Powell wrote to Senator Toomey (R-PA) saying, "One critical question is whether a CBDC would yield benefits more effectively than alternative methods." By all accounts, it seems the answer to that question is no. Both the Fed and Congress will have a long road ahead if either one intends to justify the supposed need for a CBDC to the American people.

Nicholas Anthony is the Manager of the Cato Institute's Center for Monetary and Financial Alternatives and a contributor with Young Voices.
by Mike Masnick on (#5VHR1)
WeChat is the massively dominant Chinese social media app (plus commerce, plus a lot more), but unlike other apps from China, like TikTok, it has mostly focused on the Chinese market rather than markets overseas. Nonetheless, it is apparently hugely popular in Australia (which has a large Chinese expat community). As it grew more popular, it's not surprising that Australian politicians began using the service -- even though, in order to sign up for an account, you're supposed to be a Chinese citizen. Still, politicians such as Prime Minister Scott Morrison signed up, raising some concerns domestically -- though those concerns were mostly dismissed by Morrison and his allies. This was true even after WeChat took down a post by Morrison that criticized a Chinese official.

Of course, things got a lot more interesting when Morrison's WeChat account recently... was somehow taken over and renamed "Australian-Chinese New Life," with Morrison and his staff locked out of the account. The new account posted:
by Karl Bode on (#5VHP3)
The 9th Circuit Court of Appeals has put a final bullet in the telecom industry's attempt to kill state-level net neutrality laws. The ruling (pdf) again makes it clear that the Trump FCC didn't follow the law when, as part of its 2017 repeal of net neutrality, it also attempted to ban states from protecting broadband consumers in the wake of federal apathy. Basically, the courts keep making it clear the FCC can't abdicate its net neutrality and consumer protection authority under the Communications Act, then turn around and tell states what they can or can't do on consumer protection:
by Mike Masnick on (#5VHHM)
Having seen both Florida and Texas have their "you can't moderate!" social media laws tossed out as unconstitutional (wasting a ton of taxpayer money in the process), you might think that other state legislatures would maybe pump the brakes on trying the same thing. No such luck. There are efforts underway in a bunch of states to pass similarly unconstitutional laws, including Utah, Indiana, Wisconsin, and Ohio (not to mention states like New York pushing to the opposite extreme of requiring moderation). The latest to enter the fray is Georgia, with its Common Carrier Non-Discrimination Act, which has an astounding 24 ignorant co-sponsors who apparently hate the 1st Amendment.

The law is dead on arrival for a wide variety of reasons, but, as you might have guessed from the name, it seeks to just randomly declare social media (and only social media) "common carriers" and say they can't "discriminate" (and by "discriminate" they mean "take down content from Nazis"). The "declarations" in this bill are nonsense disconnected from reality.
by Daily Deal on (#5VHHN)
Aspiring filmmakers, YouTubers, bloggers, and business owners alike can find something to love about the Complete Video Production Super Bundle. Video content is fast changing from the marketing tool of the future into the marketing tool of the present, and in these 10 courses you'll learn how to make professional videos on any budget. From the absolute basics to the advanced shooting and lighting techniques of the pros, you'll be ready to start making high-quality video content and driving viewers to it in no time. This bundle will teach you how to make amazing videos, whether you use a smartphone, webcam, DSLR, mirrorless, or professional camera. It's on sale for $35.
by Tim Cushing on (#5VHF8)
The military has an obvious need for secure communications. It offered its support for encryption even as the NSA tried to find ways to undercut it to make its surveillance ends easier to achieve.

The problem is the military doesn't have a great plan for securing communications between personnel. Due to tech limitations the Defense Department has yet to overcome (despite billions in annual funding), soldiers are turning to third-party messaging services to communicate orders and disseminate information.
by Karl Bode on (#5VH75)
While there's been no shortage of dumb and frustrating tech policy debates in recent years, one of the more positive shifts has been watching the "right to repair" movement move from the fringe to the massively mainstream. Once just the concern of pissed-off farmers and nerdy tinkerers, the last two years have seen a groundswell of broader cultural awareness about the perils of letting companies like Apple, John Deere, Microsoft, or Sony monopolize repair -- and the dumb lengths most of these companies have gone to in order to make repairing things you own both more difficult and way more expensive.

Things shifted greatly last July, when President Biden formally included some right to repair measures in a broad executive order demanding the FTC craft stricter rules targeting efforts to hamstring independent and consumer repair options. This week the president gave the subject another mainstream boost with statements before the White House Competition Council lauding right to repair (as well as a tweet):
by Mike Masnick on (#5VH01)
For a while now, the EU has been working on its latest big update to internet regulations, mostly under the umbrella of the Digital Services Act (DSA). Multiple people who have been following the process there have noted how much more thoughtful it has been compared to internet regulatory attempts in the US, which seem to mostly be driven by which senator thinks they can get the biggest headlines for misrepresenting this week's particular outrage. A more careful, thoughtful approach is definitely appreciated, but that doesn't mean the results will be any good. Last week, the EU Parliament approved the latest version of the DSA in what has been seen as something of a mixed bag.

Pirate Party MEP Patrick Breyer described the final vote as having both "huge success and major setbacks." I'm actually a bit surprised that the EFF seems mostly happy with the result (with a few caveats), though that seems to mainly be because a few really bad ideas didn't make the cut. But it still seems like an awful lot of bad ideas did make it through.

The good parts are that the new DSA mostly retains the E-Commerce Directive's "conditional liability regime" and rejected a proposal that would require "general monitoring" (i.e., faulty filters to try to screen "bad stuff"). There was an attempt to go even further and ban upload filters entirely, but that was rejected. Similarly, a proposal to say that courts could not require ISPs to engage in full site blocking was rejected.

Also on the good side, this version of the DSA includes a right to pay for digital services anonymously, though it rejected a proposal to require a court order before governments can snoop through your data. It also rejected a proposal that would have required a court order to remove content -- one that would have banned the practice of government agencies directly ordering content removals.
That rejection is extremely unfortunate, and an attack on due process.

There's a lot more in there that's a mix of good and bad, and the whole thing isn't truly final yet either. But I still think that overall the DSA will have a hugely negative impact on internet freedoms and free speech, even if it got some small things at the margin right.

In the end, I do think that any big "sweeping" set of internet regulations -- whether prepared thoughtfully or not -- is always going to be a disaster. It can't take into account how complex the world is, can't take into account context, and can't take into account the general dynamism of the internet -- and how quickly things change. Not only that, but the very process of opening up such sweeping regulations, covering so much of how the internet works for users, is going to get hijacked by special interests who want this or that thing included in the final rules.

Is the process more reality-based than the US's grandstand-o-rama? Sure. Will the end results be any better? Doesn't seem like it.
by Tim Cushing on (#5VGNN)
For three decades, the DOJ and FBI have barely tried (and always failed) to collect information about use of force by the nation's 18,000 law enforcement agencies. Despite occasional promises to be more thorough and do better, the FBI has, for the most part, done nothing with this opportunity -- one thrust upon it by a crime bill passed in 1994.

The biggest problem is that submission of use of force data has always been voluntary. The Department of Justice only directly oversees the FBI. Neither entity can force local agencies to provide this data. These multiple levels of failure have led to the Government Accountability Office suggesting the national use of force database be put out of its useless misery as early as this year, rather than remain just another thing tax dollars are wasted on.

Local lawmakers could at least compel uniform collection and reporting of this data. They may not be able to mandate the release of this data to federal agencies, but they could at least ensure proper reporting occurs at the local level.

Mandates like this are needed. But few localities have them. This sort of accountability must be forced on local agencies. Collecting information on use of force incidents and any attendant complaints or allegations of excessive force does nothing for law enforcement agencies. So, the data collection must be compelled, because there's nothing innately compelling about gathering data that may show officers and agencies have unaddressed problems.

The lack of accountability means any collections are hit and miss. And that data set is mostly misses. Unsurprisingly, when journalists go looking for this data in hopes of quantifying local law enforcement's generation of (and response to) citizen complaints, they come away with incomplete depictions of patterns and practices. That's the best case scenario.
The worst case is journalists discovering agencies aren't compiling this data at all.

What's been uncovered in Maine could likely be said about almost any other state in the Union.
by Timothy Geigner on (#5VGGP)
At this point, posts about Nintendo getting fan-made games or content removed from the internet over IP concerns are evergreen. Nobody should be surprised by this shit any more, though you should still be either very angry about it or at least disappointed. The company is almost a caricature of an IP maximalist: anything and everything that even comes close to touching its IP gets thrown at the company lawyers to deal with. It's gotten bad enough that the behavior has been parodied by the general public.

This is where I remind you that companies like Nintendo have a wide spectrum of avenues for responding to fanworks. Depending on the IP in question, the company could do any of the following besides going legal: let fans have their fun, issue zero-dollar or cheap licenses to fans to legitimize their work, or incorporate fanworks into official releases by either licensing or employing these fans. Plenty of other companies have taken these routes, or others, and have survived just fine. Nintendo never does this.

And so, here we are again, with Nintendo getting footage of an unreleased fan-game disappeared from the internet, citing copyright. In this instance, a fan made a first-person shooter in the Unreal Engine so you can go hunting Pokémon as violently as possible.
by Karl Bode on (#5VGCW)
We've noted a few times that Elon Musk's Starlink satellite broadband service is going to have a hard time meeting expectations. While the service is often sold as a near-magical cure for the estimated 20-42 million Americans without broadband access, it only has the capacity to serve somewhere between 500,000 and 800,000 users. Due to additional supply chain issues, only about 150,000 users have received access so far. And those who've paid the company $100 to wait in line say the company is incapable of giving them any kind of timeline for when they can expect service.

Last fall, reports emerged showing how many of these users had been waiting months for any update whatsoever on the progress of their orders, and that Starlink customer service was utterly nonexistent. Nearly four months later, another report indicates that things seemingly haven't improved much. Customers who've been waiting a year for service say they've seen complete and total radio silence from the company:
by Tim Cushing on (#5VGAP)
Small towns strapped for cash sometimes decide to use their law enforcement agencies to generate a steadily increasing revenue stream. Towns that otherwise would never have been noticed by non-residents have achieved national notoriety by unofficially rebranding as Speed Trap, USA.

Sometimes this notoriety leads to punishment by other government agencies. A small town in Oklahoma was banned from enforcing traffic laws by the state's Department of Public Safety after it came to light that the town of 410 people was employing six police officers to haul in nearly $500,000 in fees in a single year -- 76% of the town's revenue.

Another small town is generating national press about its abusive traffic enforcement operations. Brookside, Alabama has only 1,253 residents. But it has nine police officers, two drug dogs (including one named "Cash"), a mine-resistant SWAT vehicle obtained through the Defense Department's 1033 program, and an unquenchable thirst for traffic enforcement revenue.

In the last couple of years -- under Chief Mike Jones (who was hired in 2018, when he was the town's only sworn officer) -- Brookside's revenue has increased exponentially. Update: Following this controversy, Jones announced his resignation.
by Mike Masnick on (#5VG8H)
It's been just over 17 years since I coined the phrase "The Streisand Effect," which has totally taken on a life of its own. A key reason for naming it was to hopefully wake up overly aggressive lawyers to the fact that sending nasty, threatening cease-and-desist letters to try to suppress information or stop someone from doing something wasn't a good idea. A few years later, a lawyer friend of mine mentioned that he thought the concept of The Streisand Effect had done its job -- and that many, many corporate lawyers were much more averse to sending out such aggressive letters, recognizing that there might be a better approach. However, I still find it's pretty typical for many lawyers to immediately go for the nasty threat letter, so it seemed like perhaps the lawyers hadn't quite gotten the message.

So... it's kind of a pleasant surprise to see how at least one large company -- and possibly a bunch of large companies -- handled the recent "drop" from the merry pranksters at MSCHF (who are no strangers to legal controversies). The new drop is the C&D Grand Prix, in which MSCHF was selling racecar-style shirts emblazoned with corporate logos from some of the biggest (and most legally aggressive in protecting their trademarks) brands out there. People could buy each shirt, and then there was a special prize: anyone who bought the shirt of the "winner" of the Grand Prix would also get a MSCHF Grand Prix champion's hat. How could a company "win"? By being the first company whose logoed shirt was for sale to send MSCHF a cease and desist.

What's funny is that we had considered a remarkably similar idea at Techdirt many years ago (though a bit more focused on highlighting some absurdities of trademark law), but decided not to do it because we don't have a giant bank account like MSCHF does.

Of course, there were some big questions about how the various companies would respond and, incredibly, none of the companies (as far as we can tell) actually freaked out about this and went ballistic. It's possible such letters are still coming. However, at least one of the companies prodded by the Grand Prix, Subway, actually took it all in good spirit. Very soon after MSCHF launched the Grand Prix, it tweeted "Two can play this game. Who's interested in this bad boy?" and posted an image of a Subway-colored shirt, but with MSCHF's logo on the front:
by Daily Deal on (#5VG8J)
The Ultimate Programming Bundle has three courses to help you learn the basics. The courses cover Python, JavaScript, and databases. The bundle is on sale for $21.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Tim Cushing on (#5VG38)
In the wake of a tragedy, it's human nature to seek some form of justice or closure. The feeling is that someone should be held accountable for a senseless death, even when there's no one to blame directly. This tends to result in misguided lawsuits, like the multiple suits filed by (far too opportunistic) law firms seeking to hold social media platforms accountable for the actions of mass shooters and terrorists.

The desire to respond with litigation remains even when there's a single victim -- one who has taken their own life. That's the case in this lawsuit, coming to us via Courthouse News Service. Plaintiff Tammy Rodriguez's eleven-year-old daughter committed suicide. Her daughter was allegedly a heavy user of both Snapchat and Instagram. The connection between the platforms and her daughter's suicide is alluded to and alleged, but nothing in the lawsuit [PDF] shows how either of the companies is directly responsible for the suicide.

Here's how the complaint hopes to achieve these questionable ends:
by Karl Bode on (#5VFR7)
In late 2020, Massachusetts lawmakers (with overwhelming public support) passed an expansion of the state's "right to repair" law. The original law, passed in 2013, was the first in the nation. The update dramatically improved it, requiring that, as of this year, all new telematics-equipped vehicles be accessible via a standardized, transparent platform that allows owners and third-party repair shops to access vehicle data via a mobile device. The goal: reduce repair monopolies, and make it cheaper and easier to get your vehicle repaired.

Of course, major auto manufacturers didn't like this, so they set about trying to demonize the law with false claims and a $26 million ad campaign, including one ad falsely claiming the expansion would help sexual predators. Once the law passed (again, with the overwhelming support of voters), automakers sued to stop it, which has delayed its implementation. That same coalition of automakers (GM, Ford, Honda, Hyundai) is pushing new legislation that would delay implementation even further -- to 2025:
by Tim Cushing on (#5VF9X)
The longer we live, the more we become accustomed to cop fiction.

We live and let (our rights) die when cops swear in court that they smelled jazz cigarettes while engaging in a pretextual stop. Who can challenge that? A cop says he smelled weed. The defendant says no, he didn't. Who's more believable? The cop with the nose or the person accused of multiple felonies?

If cops need an assist, they can always call in another witness that can't be cross-examined: a drug dog. The dog "alerted" -- something that means a breed domesticated to please did nothing more than please its handlers. Courts will, again, often grant deference to "testimony" that can't be challenged.

The drug warriors of the USA are always in search of the next useful fiction -- something that can be written down in reports and delivered in statements to the press, but never objectively examined by a court of law. That new fiction involves the latest public enemy number one: fentanyl.

What cops can't understand is immediately converted into a threat. Brushing aside medical and scientific expertise, cops are now claiming that simply breathing the same air as fentanyl is the equivalent of a death sentence -- especially for cops serving bog-standard drug warrants. Suddenly, sweeping a house during warrant service is a potential death sentence for officers, no matter how much they've outmanned and outgunned the opposition.

The irrational fear of a drug that drug warriors apparently don't understand has resulted in all sorts of amazing claims by officers: