by Tim Cushing on (#5RH0C)
An internal FBI document shared with Joseph Cox of Motherboard by Ryan Shapiro of Property of the People gives a little more insight into law enforcement's data grabs. The Third Party Doctrine -- ushered into law by the Supreme Court decision that said anything voluntarily shared with third parties could be obtained without a warrant -- still governs a lot of these collections.

For everything else, there are warrant exceptions, plain view, inevitable discovery, a variety of "exigent circumstances," and reverse warrants that convert probable cause into "round up everyone and we'll decide who the 'usual suspects' are." Constitutional concerns still reside in this gray area, which means law enforcement will grab everything it can until precedent says it can't.

The document [PDF] gives some insight into the FBI's CAST (Cellular Analysis Survey Team). It shows how much the FBI has access to, how much it has the potential to grab, and how much unsettled law aids in the bulk collection of data the FBI can parse through to find suspects or -- if enough fishing rods are in the water -- to decide what merits its investigative time.

It's all in there, starting with "Basic Cellular Theory" and moving on to everything cell-related the FBI can get its data mitts on.
Techdirt
Link: https://www.techdirt.com/
Feed: https://www.techdirt.com/techdirt_rss.xml
Updated: 2025-08-19 05:16
by Karl Bode on (#5RGQ1)
So after the longest (and completely unexplained) delay in FCC and NTIA history, last week the Biden administration finally got around to fully staffing the nation's top telecom regulator. While the selection of fairly centrist Jessica Rosenworcel is expected to make it through the confirmation process, the same can't be said of Gigi Sohn, a popular consumer advocate:
by Timothy Geigner on (#5RG3Y)
Well, this is moving fast. We had just been discussing Nintendo's announcement of a new tier of Nintendo Switch Online services. While there are several extras added in for the $50 per year tier, a 150% increase in cost from the base subscription, the real star of the show was supposed to be the Nintendo 64 games that are now included in it. As we discussed, however, the list of N64 games on offer is very limited and there are all kinds of problems with the games that are offered. Those problems include graphical issues, scaling issues, controller lag issues, controller mapping issues, and multiplayer lag. You know... everything. When you put all of that side by side with Nintendo's concerted efforts to obliterate emulation sites from the internet, the end result is that Nintendo decided to deprive the public of pirated classic games in order to sell them a vastly inferior product.

But it's one thing for me, known Nintendo-detractor Timothy Geigner, to say all of that. What really matters is how the paying public will react to all of this. Well, if you're looking for a canary in the Nintendo coal mine, we can look to the video Nintendo put on YouTube announcing the new tier of NSO.
by Mike Masnick on (#5RG0X)
Many, many years ago on Techdirt, I wrote a lot about the idea of advertising being content (and content being advertising). The general idea was that, without captive audiences any more, you had to make your advertising into really good content that people would actually like, rather than find annoying and intrusive.

I still think this is an important insight, but with the rise of a limited number of internet giants and (more importantly) Google and Facebook focusing on better and better ad targeting, most of the focus on ads these days hasn't been so much on "advertising is content" as "advertising is creepily and slightly inaccurately targeted, but you're going to live with it, because that's all you've got." Still, every once in a while, we're reminded of this idea about how advertising could actually be good content in its own right. Ironically, the example I'm about to share here... comes from Google. But we'll get to that in a moment.

In the midst of the pandemic, I discovered the amazing UK TV show Taskmaster, which is too good to describe. It's sort of a cross between a typical UK panel show, a game show with incredibly ridiculous tasks, and... I dunno. Perhaps it's the anti-Squid Game. It does involve people playing games, but it's hilarious, not deadly. You kind of have to watch it to understand how good it is, and then you kind of can't stop watching it. Thankfully, the first eight seasons are fully and officially available on YouTube outside the UK. The show is now on Season 12, but it appears that they've stopped posting full copies of the new shows to YouTube -- perhaps because the show has become so popular they're looking for a licensing deal with some streaming service or something (their content is advertising!). For what it's worth, an attempt at a US spinoff version completely flopped because it was terrible, though other spinoffs, such as in New Zealand, have gone well. If you want to get a sense of the show, Season 1, Episode 1 is hard to beat, though it's missing some things that became standard in later seasons. If you want to watch the show once it really hit its stride, seasons 4, 5 and 7 are probably the best.

Anyway, while they're not posting full episodes any more, the Taskmaster YouTube page continues to post new content -- usually clips or outtakes from the show. But last week they also posted two ads. They're clearly labeled as ads -- but they're brand new Taskmaster content, advertising Google's Lens feature. They involve a couple of Taskmaster contestants competing in tasks that require the use of Google Lens to complete -- and they're just as entertaining as the show, while actually showing off this Google product I didn't even know existed. Since I've seen basically every available episode of Taskmaster, I thought this was a fantastic example of content as advertising, so I'm posting them here -- though I'll admit I'm not quite as sure how well they work for people who don't watch the show:

I still think the advertising world would be better -- and less hated -- if there were a focus on making sure your advertising was actually good content that was entertaining or interesting. It may not be as exciting as trying to tweak the AI to squeeze an extra 0.000003 cents per user with more targeted ads, but it might make for a nicer world.
by Glyn Moody on (#5RFXM)
Back in 2014, Spain brought in a Google tax. It was even worse than Germany's, which was so unworkable that it was never applied fully. Spain's law was worse because it created a right for publishers to be paid by "news aggregators" that was "inalienable". That is, publishers could not waive that right -- they had to charge. That negated the point of Creative Commons licenses, which are designed to allow people to use material without paying. Subsequent research showed that Spain's snippet tax was a disaster for publishers, especially the smaller ones.

Unsurprisingly, in response Google went for the nuclear option, and shut down Google News in Spain at the end of 2014. Seven years later -- a lifetime on the Internet -- Google News is returning to Spain:
by Mike Masnick on (#5RFTE)
A month ago, I highlighted how Facebook seemed uniquely bad at taking a long term view and publicly committing to doing things that are good for the world, but bad for Facebook in the short run. So it was a bit surprising earlier this week to see Facebook (no I'm not calling it Meta, stop it) announce that it was shutting down its Face Recognition system and (importantly) deleting over a billion "face prints" that it had stored.

The company's announcement on this was (surprisingly!) open about the various trade-offs here, both societally and for Facebook, though (somewhat amusingly) throughout the announcement Facebook repeatedly highlights the supposed societal benefits of its facial recognition.
by Tim Cushing on (#5RFQY)
Three years ago, cops in New Hampshire arrested Robert Frese for the crime of… insulting some cops. Frese, facing a suspended sentence for smashing the window of a neighbor's car, left a comment on a local news site, claiming Exeter Police Chief William Shupe was a "coward" who was "covering for dirty cops."

Instead of taking his online lumps like a true public servant, Shupe had Frese arrested, apparently hoping to use an outdated criminal defamation law to trigger Frese's suspended sentence to get him locked up for the next couple of years.

That effort failed. The ACLU got involved, as did the state's Justice Department, which said Frese's comment was not unlawful. Shortly thereafter, the criminal defamation charge was dropped. A wrongful arrest lawsuit followed, netting Frese a $17,500 settlement. The police, of course, admitted no wrongdoing. Instead, they continued to claim the arrest was lawful and supported by a law that managed to make its way from the 15th century to the 21st century almost untouched.

It isn't over yet. Frese, along with the ACLU, is still trying to get that law stricken from the books, hopefully in the form of a ruling finding it unconstitutional. Given what's happening during oral arguments in front of the First Circuit Court of Appeals, it looks like Frese may be on his way to victory. Here's Thomas Harrison of Courthouse News Service with more details.
by Daily Deal on (#5RFQZ)
The Ultimate Learn to Code Bundle has 80+ hours of immersive, multifaceted programming education. When it comes to web programming, there are a lot of tools you can learn and use to make your workflow more efficient and your products more exciting. This bundle will give you a crash course in a variety of languages and tools, plus how to integrate them, giving you an excellent foundation for further learning. Courses cover Ruby on Rails 5, HTML5, CSS3, JavaScript, Python, iOS 10, and more. The bundle is on sale for $39. Use the coupon code SAVE15NOV and get an additional 15% off.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5RFMX)
There seem to be a lot of "myths" about big internet companies that don't stand up to that much scrutiny, even as they're often accepted as common knowledge. There's the idea that Facebook's algorithm remains in place only because it makes Facebook more money (Facebook's own internal research suggests otherwise), or that disinformation goes viral on social media first (a detailed study showed cable news is a much bigger vector of virality).

Another big one is that YouTube "radicalizes" people via its algorithm. There are lots of stories about how someone went on to YouTube to watch, like, video game clips, and within a week had become an alt-right edge lord troll shouting Trump slogans or whatever. Hell, this was a key plot point in the Social Dilemma, in which the young boy in the fictionalized sitcom family starts watching some videos on his phone, and a week later is participating in an extremist political rally that turns into a riot.

However, a very thorough recent study (first highlighted by Ars Technica) found that there's really not much evidence to support any of this narrative. From the abstract:
by Tim Cushing on (#5RF90)
At long last, Clearview has finally had its AI tested by an independent party. It has avoided doing this since its arrival on the facial recognition scene, apparently content to bolster its reputation by violating state privacy laws, making statements about law enforcement efficacy that are immediately rebutted by law enforcement agencies, and seeing nothing wrong with scraping the open web for personal information to sell to government agencies, retailers, and bored rich people.

Kashmir Hill reports for the New York Times that Clearview joined the hundreds of other tech companies that have had their algorithms tested by the National Institute of Standards and Technology.
by Timothy Geigner on (#5REQN)
So, here's the thing: I get accused of picking on Nintendo a whole lot. But please know, it's not that I want to pick on them, it's just that they make it so damned easy to do. I'm a golfer, okay? If I have a club in my hand and suddenly a ball on a tee appears before me, I'm going to hit that ball every time without hesitation. You will recall that a couple of years back, Nintendo opened up a new front in its constant IP wars by going after ROM and emulation sites. That caused plenty of sites to simply shut themselves down, but Nintendo also made a point of getting some scalps to hang on its belt, most famously in the form of RomUniverse. That site, which very clearly had infringing material not only on the site but promoted by the site's ownership, got slapped around in the courts to the tune of a huge judgment against it, which the site owners simply cannot pay.

But all of those are details and don't answer the real question: why did Nintendo do this? Well, as many expected from the beginning, it did this because the company was planning to release a series of classic consoles, namely the NES mini and SNES mini. But, of course, what about later consoles? Such as the Nintendo 64?

Well, the answer to that is that Nintendo has offered a Nintendo Switch Online service uplift that includes some N64 games that you can play there instead.
by Tim Cushing on (#5REJP)
Subjecting students to surveillance tech is nothing new. Most schools have had cameras installed for years. Moving students from desks to laptops allows schools to monitor internet use, even when students aren't on campus. Bringing police officers into schools to participate in disciplinary problems allows law enforcement agencies to utilize the same tech and analytics they deploy against the public at large. And if cameras are already in place, it's often trivial to add facial recognition features.

The same tech that can keep kids from patronizing certain retailers is also being used to keep deadbeat kids from scoring free lunches. While some local governments in the United States are trying to limit the expansion of surveillance tech in their own jurisdictions, governments in the United Kingdom seem less concerned about the mission creep of surveillance technology.
by Leigh Beadon on (#5REE3)
The documents revealed by Facebook whistleblower Frances Haugen are full of important information — but the media hasn't been doing the best job of covering that information and all its nuances. There are plenty of examples of reporters taking one aspect out of context and presenting it in the worst possible light, while ignoring the full picture. This week, we're joined by law professor Kate Klonick to discuss the media's failings in covering the Facebook Papers, and the unwanted outcomes this could produce.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
by Mike Masnick on (#5REB7)
Sometimes it's difficult to get across to people "the scale" part when we talk about the impossibility of content moderation at scale. It's massive. And this is why whenever there's a content moderation decision that you dislike or that you disagree with, you have to realize that it's not personal. It wasn't done because someone doesn't like your politics. It wasn't done because of some crazy agenda. It was done because a combination of thousands of people around the globe and still sketchy artificial intelligence are making an insane number of decisions every day. And they just keep piling up and piling up and piling up.

Evelyn Douek recently gave a (virtual) talk at Stanford on The Administrative State of Content Moderation, which is worth watching in its entirety. However, right at the beginning of her talk, she presented some stats that highlight the scale of the decision making here. Based on publicly revealed transparency reports from these companies, in just the 30 minutes allotted for her talk, Facebook would take down 615,417 pieces of content, YouTube would take down 271,440 videos, channels, and comments, and TikTok would take down 18,870 videos. And, also, the Oversight Board would receive 48 petitions to review a Facebook takedown decision.

And, as she notes, that's only the take down decisions. It does not count the "leave up" decisions, which are also made quite frequently. Facebook is not targeting you personally. It is not Mark Zuckerberg sitting there saying "take this down." The company is taking down over a million pieces of content every freaking hour. It's going to make mistakes. And some of the decisions are ones that you're going to disagree with.

And, to put that in perspective, she notes that in its entire history, the US Supreme Court has decided a grand total of approximately 246 1st Amendment cases, or somewhere around one per year. And, of course, in those cases, it often involves years of debates, and arguments, and briefings, and multiple levels of appeals. And sometimes the Supreme Court still gets it totally wrong. Yet we expect Facebook -- making over a million decisions to take content down every hour -- to somehow magically get it all right?

Anyway, there's a lot more good stuff in the talk and I suggest you watch the whole thing to get a better understanding of the way content moderation actually works. It would be helpful for anyone who wants to opine on content moderation to not just understand what Douek is saying, but to really internalize it.
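To make that scale concrete, here's a minimal back-of-the-envelope sketch in Python. It just extrapolates the 30-minute figures Douek cited to hourly and daily rates; the multipliers are simple arithmetic assuming a constant rate, not additional numbers from the transparency reports.

```python
# Takedown counts during the 30 minutes of Douek's talk, as cited above.
# Extrapolations below are rough estimates assuming a constant rate.
per_half_hour = {
    "Facebook (pieces of content)": 615_417,
    "YouTube (videos, channels, comments)": 271_440,
    "TikTok (videos)": 18_870,
    "Oversight Board (petitions received)": 48,
}

for name, count in per_half_hour.items():
    hourly = count * 2    # 30 minutes -> 1 hour
    daily = hourly * 24   # 1 hour -> 1 day
    print(f"{name}: ~{hourly:,}/hour, ~{daily:,}/day")

# Facebook alone works out to roughly 1.23 million takedowns per hour --
# the "over a million pieces of content every freaking hour" figure above.
```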
by Karl Bode on (#5RE9H)
If you recall, the U.S. spent much of 2020 freaking out about TikTok's threat to privacy, while oddly ignoring that the company's privacy practices are pretty much the international norm (and ignoring a whole lot of significantly worse online security and privacy problems we routinely do nothing about). More recently there was another moral panic over the idea that TikTok was turning children into immoral thieving hellspawn as part of the Devious Licks meme challenge.

Now, one initial report by the Wall Street Journal has alleged that teen girls are watching so many TikToks by other girls with Tourette Syndrome that they're developing tics. The idea that you can develop an entirely new neurological condition by watching short form videos sounds like quite a stretch, but the claim has already bounced around the news ecosystem for much of the year:
by Daily Deal on (#5RE9J)
The Z2 headphones earned their name because they feature twice the sound, twice the battery life, and twice the convenience of competing headphones. This updated version of the original Z2s comes with a new all-black design and Bluetooth 5.0. Packed with TREBLAB's most advanced Sound2.0 technology with aptX and T-Quiet active noise-cancellation, these headphones deliver goose bump-inducing audio while drowning out unwanted background noise. These headphones are on sale for $79. We're having an early holiday sale this week, so use the code SAVE15NOV to get an additional 15% off of your purchase storewide.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Tim Cushing on (#5RE7J)
Because Netflix is big, it draws lawsuits. It has been sued for defamation, copyright infringement, and, oddly, defamation via use of a private prison's logo in a fictional TV show. It has also been sued for supposedly contributing to a teen's suicide with its series "13 Reasons Why," which contained a lot of disturbing subject matter that teens deal with daily, like bullying, sexual assault, and -- most relevant here -- suicide. The final episode of the first season contained a suicide scene, one that was removed by Netflix two years after the show debuted.

While undeniably a tragedy, the attempt to blame Netflix for this teen's suicide is severely misguided. The lawsuit filed by the teen's survivors alleges Netflix had a duty to warn viewers of the content (content warnings were added to the show a year after its release) and it failed to do so, making it indirectly liable for this death.

Netflix is now trying to get this lawsuit dismissed using California's anti-SLAPP law because, as it argues persuasively, this is all about protected speech, no matter how the plaintiffs try to portray it as a consumer protection issue. (h/t Reason)

Netflix's anti-SLAPP motion [PDF] points out this isn't the first time teen suicide has been depicted in pop culture, nor is it the first time people have tried to sue creators over the content of their creations. None of those lawsuits have been successful.
by Mike Masnick on (#5RDYK)
I've mocked the NY Times for its repeated failures to understand basic facts about internet regulations such as Section 230 -- but the organization also deserves credit when it gets things (mostly) right. Last week, Farhad Manjoo wrote up a great opinion piece noting that, even if you agree that Facebook is bad, most regulatory proposals would make things much, much worse.

He focuses on the blatantly unconstitutional "Health Misinformation Act" from Senators Klobuchar and Lujan, which would appoint a government official to declare what counts as health misinformation, and then remove Section 230 protections from any website that has such content. As Manjoo rightly notes, it's as if everyone has forgotten who was President from 2017 to early 2021 and hasn't considered what he or someone like him would do with such powers:
by Glyn Moody on (#5RDAV)
Techdirt has noted in the past that if public libraries didn't exist, the copyright industry would never allow them to be created. Publishers can't go back in time to change history (fortunately). But the COVID pandemic, which largely stopped people borrowing physical books, presented publishers with a huge opportunity to make the lending of newly-popular ebooks by libraries as hard as possible.

A UK campaign to fight that development in the world of academic publishing, called #ebookSOS, spells out the problems. Ebooks are frequently unavailable for institutions to license as ebooks. When they are on offer, they can be ten or more times the cost of the same paper book. The #ebookSOS campaign has put together a spreadsheet listing dozens of named examples. One title cost £29.99 as a physical book, and £1,306.32 for a single-user ebook license. As if those prices weren't high enough, it's common for publishers to raise the cost with no warning, and to withdraw ebook licenses already purchased. One of the worst aspects is the following:
by Mike Masnick on (#5RD7K)
On Tuesday and Wednesday of this week I'm excited to be participating in an event that the Knight Foundation is putting on, curated by law professors Eric Goldman and Mary Anne Franks, entitled Lessons From the First Internet Ages. The event kicks off with the release of reflections on "the first internet age" from various internet luminaries who were there -- but also, most importantly, talking about what they might have done differently. I'm going to have a writeup at some future date on my response to the pieces, but I highly recommend checking them all out. In particular, I'll recommend the pieces by Senator Ron Wyden, Nicole Wong, Brewster Kahle, Vint Cerf, Reid Hoffman, and Tim Berners-Lee. I also think that the interviews Eric Goldman conducted with Matthew Prince and Nirav Tolia were both fascinating.

Just to give you a snippet, Wyden's article really is excellent:
by Tim Cushing on (#5RD4R)
Putting cops in schools is never a good idea. It only encourages school administrators to hand over discipline problems to the "proper authorities," which is what administrators used to be until the addition of law enforcement on campus.

Having cops on tap also appears to encourage parents to demand a law enforcement response to disciplinary problems. That's what happened at a school in Hawaii, where a 10-year-old student was arrested over a drawing another student's parent didn't like. The school -- and the police department that performed the arrest -- are on the verge of being sued by the student and the ACLU.

Here's a brief summary of the incident from the ACLU:
by Mike Masnick on (#5RD1Z)
If you thought that Trump's new Truth Social website's potential legal problems with its apparent failure to abide by the license on the open source code it seems to be using would be the worst legal problems facing the site, well, you underestimated The Donald. There's been plenty of talk about the SPAC deal that valued the company at billions of dollars through one of those reverse merger IPOs. But, now the NY Times is reporting that the way the deal was done may have violated securities laws. So on brand.
by Tim Cushing on (#5RD20)
Encrypted messaging app Signal is slowly educating federal prosecutors on the meaning of the idiom "blood from a stone." Usually this refers to someone who is judgment-proof (or extortion-proof or whatever), since you can't take money a person doesn't have.

This would be the digital equivalent. Prosecutors in California have tried three times this year to obtain data on Signal users that Signal never collects or retains. Issue all the subpoenas you want, Signal says, but don't expect anything to change. We can't give you what we don't have. (h/t Slashdot)
by Daily Deal on (#5RCZH)
The 2021 Complete Video Production Super Bundle has 10 courses to help you learn all about video production. Aspiring filmmakers, YouTubers, bloggers, and business owners alike can find something to love about this in-depth video production bundle. Video content is fast changing from the marketing tool of the future to the marketing tool of the present, and here you'll learn how to make professional videos on any budget. From the absolute basics, to screenwriting, to the advanced shooting and lighting techniques of the pros, you'll be ready to start making high quality video content. You'll learn how to make amazing videos, whether you use a smartphone, webcam, DSLR, mirrorless, or professional camera. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5RCXP)
A recent episode of the Reply All podcast, Absolutely Devious Lick, touched on a bunch of interesting points regarding the never-ending debates about social media, content moderation, and how it's supposedly damaging the kids these days. It's worth listening to the entire episode, but it begins by talking about a very slightly viral TikTok "challenge" which became known as Devious Licks -- lick being slang for something you stole. It started with a kid putting up a TikTok video of him holding a box of disposable masks, suggesting that he had stolen it from the school. Because school kids sometimes do stupid things to copy their stupid friends, a few others posted similar videos, including one early one of a kid taking a soap dispenser. And then there were some stories of it spreading and people going more extreme, because, you know, kids. But it didn't seem to spread that far initially.

But, of course, the thing became a lot more viral after mainstream media jumped on it with their typical "OMG, the kids these days" kind of coverage, starting with the New York Times, CNN, USA Today and then like every random local news outlet jumping on the trend to tsk tsk about the kids these days.

Prominent grandstanding Senator Richard Blumenthal called on TikTok execs to testify over all of this, which turned into another ridiculous Senate hearing in which old men yell at social media execs about how they're harming kids.

But, scratch the surface a little, and beyond a few dumb kids, this seems a lot more like adults over-reacting and freaking out, and making the story go much, much, much more viral than it did in reality. Indeed, the only news organization I've seen that recognized that most of this was a moral panic by adults was Curbed, which noted that, yes, there was some actual vandalism done by kids, but a lot of it seemed to be kids mocking the trend as well:
by Karl Bode on (#5RCTX)
By now it's fairly clear the Facebook leaks showcase a company that prioritized near-mindless international growth over the warnings of its own experts. They also show a company that continues to painfully struggle to be even marginally competent at scale, whether we're talking about content moderation or rudimentary customer service. While this has become an all-encompassing media spectacle, the real underlying story isn't particularly unique. It's just a "growth for growth's sake" mindset, where profit and expansion trumped all reason. It just happens to be online, and at unprecedented international scale.

One thing I haven't seen talked about a lot is the fact that if you look back a few years, an awful lot of folks in developing nations saw these problems coming a mile away, long before their Western counterparts. For a decade, international activists warned repeatedly about the perils of Facebook's total failure to understand the culture/regulations/language/norms of the countries it rapidly flooded into. Yet bizarrely, Frances Haugen's PR team somehow excluded most of these countries when it came time to recently release access to the Facebook files:
by Leigh Beadon on (#5RCTY)
This week, our first place winner on the insightful side is That Anonymous Coward with a comment about the baffling valuation of Trump's broken social media venture:
by Leigh Beadon on (#5RB9Z)
Five Years Ago

This week in 2016, the American Bar Association prepared a report on Trump's libel bully behavior, but was scared out of publishing it... for fear of being sued by Trump. Meanwhile, the Clinton campaign was trying to deny the authenticity of the emails released by Wikileaks. The ACLU was taking the government to court over unreleased FISA opinions, new documents revealed how AT&T hoped to profit by helping law enforcement spy on the public, and Yahoo was trying to get permission to talk about the email scanning it did for the government. Via a FOIA request from the EFF, we learned more about why the copyright office misrepresented copyright law to the FCC — unsurprisingly, it was at the behest of the MPAA, which began trying to mock EFF over the story. Also, this was the week that Oracle officially appealed Google's fair use win over API copyright.

Ten Years Ago

This week in 2011, Apple was continuing its trademark war against anyone using an apple in a logo, Universal was using copyright to go after parodies, and we learned that ICE seized 20 domain names for the NFL over the weekend. The House of Representatives was rushing out its version of PROTECT IP, which emerged with the even more ridiculous name of the E-PARASITES Act — and it was really, really bad and required some interesting flip-flopping on behalf of its sponsors. Amidst this, we published a three-part series on the many historical "killers" of the movie industry (part one, part two, part three).

Fifteen Years Ago

This week in 2006, we looked at how the DMCA takedown process was working for the now-Google-owned YouTube, and at the way many of the weak lawsuits against Google (and not just for YouTube) were strengthening the company's position by giving it easy wins. We also looked at how copy protection and walled gardens were making music annoying for consumers while, amidst lots of fearmongering about the internet hurting music sales, Weird Al was crediting it with his album's success. We saw another way to rebuff the RIAA's lawsuits — by hiring a lawyer that has beaten them before — while the association also failed in its attempts to legally scour the hard drives of its lawsuit targets. Meanwhile, a newspaper was called out by a lawyer for thinking it could unilaterally tell people that "fair use is not applicable" to uses of its content, leading the publication to change the language on its site — but when the lawyer who called them out sent a thank you note, they threatened to sue him for defamation.
by Tim Cushing on (#5RANY)
A recent sanctions case against a Maryland prosecutor -- one involving a murder case and the use of crime scene forensic "science" -- highlights the real world effects of the FBI's tendency to overstate the certainty of forensic findings in court. It also highlights another long-running problem in the justice system: the withholding of exculpatory evidence by prosecutors who seem willing to take any "win," whether it's earned or not. (h/t Steve Klepper)

The sanctions order [PDF] recounts the case, which dates back to 1981. Joseph Cassilly was the prosecutor who handled the case of Diane Becker, who was found murdered in her trailer. Her boyfriend, Joseph Hudson, was found dead on a nearby road. He had been shot several times.

There were two suspects: Deno Kanaras and John Huffington. Both were indicted for the murder. Kanaras admitted to being present when the murder occurred, but claimed Huffington killed the two people. Kanaras testified against Huffington and Huffington was convicted on two counts of felony murder in 1982. He appealed and his conviction was reversed.

Huffington was tried again in 1983. Kanaras was, again, the only eyewitness and testified against Huffington. By the time this trial occurred, Kanaras had already been convicted of Becker's murder. This time around, the prosecution brought in an FBI agent to testify, Michael Malone. Attempting to prove Huffington was at the scene of Becker's murder, Agent Malone offered this testimony:
by Karl Bode on (#5RAH7)
Coming from telecom, I'm painfully aware of the perils of the "deregulation is a panacea" mindset. For literally thirty straight years, the idea that deregulation results in some kind of miraculous Utopia informed U.S. telecom policy, resulting in a sector that was increasingly consolidated and uncompetitive. In short, the entirety of U.S. telecom policy (with the occasional short-lived exception) has been to kowtow to regional telecom monopolies. Efforts to do absolutely anything other than that (see: net neutrality, privacy, etc.) are met with immeasurable hyperventilation and predictions of imminent doom.

So I think the U.S. telecom sector holds some valuable lessons in terms of regulatory competency and accountability. No, you don't want regulators that are heavy-handed incompetents. And yes, sometimes deregulation can help improve already competitive markets (which telecom most certainly isn't). At the same time, you don't want regulators who are mindless pushovers, where companies are keenly aware they face zero repercussions for actively harming consumers, public safety, or the health of a specific market.

Enter Tesla, which is finally facing something vaguely resembling regulatory scrutiny for its bungled and falsehood-filled deployment of "full self-driving" technology. As crashes and criticism pile up, Tesla is arguably facing its first ever instance of regulatory accountability in the face of more competent government hires and an ongoing investigation into the company's claims by the NHTSA. This all might result in no meaningful or competent regulatory action, but the fact that people aren't sure is itself a notable sea change.

This, in turn, has automatically resulted in a new tone at Tesla that more reflects a company run by actual adults:
by Mike Masnick on (#5RACN)
Over the last few years, we've seen more and more focus on using content moderation efforts to stamp out anything even remotely upsetting to certain loud interest groups. In particular, we've seen NCOSE, formerly "Morality in Media," spending the past few years whipping up a frenzy about "pornography" online. They were one of the key campaigners for FOSTA, which they flat out admitted was step one in their plan to ban all pornography online. Recently, we've discussed how MasterCard had put in place ridiculous new rules that were making life difficult for tons of websites. Some of the websites noted that Mastercard told them it was taking direction from... NCOSE. Perhaps not surprisingly, just recently, NCOSE gave MasterCard its "Corporate Leadership Award" and praised the company for cracking down on pornography (which NCOSE considers the same as sex trafficking or child sexual abuse).

Of course, all of this has some real world impact. We've talked about how eBay, pressured to remove such content because of FOSTA and its payment processors, has been erasing LGBTQ history (something, it seems, NCOSE is happy about). And, of course, just recently, OnlyFans came close to prohibiting all sexually explicit material following threats from its financial partners -- only to eventually work out a deal to make sure it could continue hosting adult content.

But all of this online prudishness has other consequences. Scott Nover, over at Quartz, has an amazing story about how museums in Vienna are finding that images of classic paintings are being removed from all over the internet. Though, they've come up with a somewhat creative (and surprising) solution: the museums are setting up OnlyFans accounts, since the company is one of the remaining few which is able to post nude images without running afoul of content moderation rules. Incredibly, the effort is being run by Vienna's Tourist Board.
by Tim Cushing on (#5RA9P)
ProtonMail offers encrypted email, something that suggests it's more privacy conscious than others operating in the same arena. But, being located in Switzerland, it's subject to that country's laws. That has caused some friction between its privacy protection claims and its obligations to the Swiss government, which, earlier this year, rubbed French activists the wrong way when their IP addresses were handed over to French authorities.

The problem here wasn't necessarily the compliance with local laws. It was Proton's claim that it did not retain this information. If it truly didn't, it would not have been able to comply with this request. But it is required by local law to retain a certain amount of information. This incident coming to light resulted in ProtonMail altering the wording on its site to reflect this fact. It no longer claimed it did not retain this info. The new statement merely says this info "belongs" to users and Proton's encryption ensures it won't end up in the hands of advertisers.

Proton's retention of this data was the result of a Swiss data retention law and, more recently, a revocation of its ability to operate largely outside the confines of this law. Terry Ang of Jurist explains the how and why behind Proton's relinquishment of IP addresses to French authorities, which resulted in its challenge of the applicability of the local data retention law.
by James Boyle on (#5RA7Y)
There are a few useful phrases that allow one instantly to classify a statement. For example, if any piece of popular health advice contains the word "toxins," you can probably disregard it. Other than, "avoid ingesting them." Another such heuristic is that if someone tells you "I just read something about §230..." the smart bet is to respond, "you were probably misinformed." That heuristic can be wrong, of course. Yet in the case of §230 of the Communications Decency Act, which has been much in the news recently, the proportion of error to truth is so remarkable that it begs us to ask, "Why?" Why do reputable newspapers, columnists, smart op-ed writers, legally trained politicians, even law professors, spout such drivel about this short, simple law?

§230 governs important aspects of the liability of online platforms for the speech made by those who post on them. We have had multiple reasons recently to think hard about online platforms, about their role in our politics, our speech, and our privacy. §230 has figured prominently in this debate. It has been denounced, blamed for the internet's dysfunction, and credited with its vibrancy. Proposals to repeal it or drastically reform it have been darlings of both left and right. Indeed, both former President Trump and President Biden have called for its repeal. But do we know what it actually does? Here's your quick quiz: Can you tell truth from falsity in the statements below? I am interested in two things. Which of these claims do you believe to be true, or at least plausible? How many of them have you heard or seen?

The §230 Quiz: Which of These Statements is True? Pick all that apply.

A.) §230 is the reason there is still hate speech on the internet. The New York Times told its readers the reason "why hate speech on the internet is a never-ending problem" is "because this law protects it," quoting the salient text of §230.

B.) §230 forbids, or at least disincentivizes, companies from moderating content online, because any such moderation would make them potentially liable. For example, a Wired cover story claimed that Facebook had failed to police harmful content on its platform, partly because it faced "the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand."

C.) The protections of §230 are only available to companies that engage in "neutral" content moderation. Senator Cruz, for example, in cross-examining Mark Zuckerberg, said, "The predicate for Section 230 immunity under the CDA is that you're a neutral public forum. Do you consider yourself a neutral public forum?"

D.) §230 is responsible for cyberbullying, online criminal threats and internet trolls. It also protects against liability when platforms are used to spread obscenity, child pornography or for other criminal purposes. A lengthy 60 Minutes program in January of this year argued that the reason that hurtful, harmful and outright illegal content stays online is the existence of §230 and the immunity it grants to platforms. Other commentators have blamed §230 for the spread of everything from child porn to sexual trafficking.

E.) The repeal of §230 would lead online platforms to police themselves to remove hate speech and libel from their platforms because of the threat of liability. For example, as Joe Nocera argues in Bloomberg, if §230 were repealed companies would "quickly change their algorithms to block anything remotely problematic. People would still be able to discuss politics, but they wouldn't be able to hurl anti-Semitic slurs."

F.) §230 is unconstitutional, or at least constitutionally problematic, as a speech regulation in possible violation of the First Amendment. Professor Philip Hamburger made this claim in the pages of the Wall Street Journal, arguing that the statute is a speech regulation that was passed pursuant to the Commerce Clause and that "[this] expansion of the commerce power endangers Americans' liberty to speak and publish." Professor Jed Rubenfeld, also in the Wall Street Journal, argues that the statute is an unconstitutional attempt by the state to allow private parties to do what it could not do itself — because §230 "not only permits tech companies to censor constitutionally protected speech but immunizes them from liability if they do so."

What were your responses to the quiz? My guess is that you've seen some of these claims and find plausible at least one or two. Which is a shame, because they are all false, or at least wildly implausible. Some of them are actually the opposite of the truth. For example, take B.) §230 was created to encourage online content moderation. The law before §230 made companies liable when they acted more like publishers than mere distributors, encouraging a strictly hands-off approach. Others are simply incorrect. §230 does not require neutral content moderation — whatever that would mean. In fact, it gives platforms the leeway to impose their own standards; only allowing scholarly commentary, or opening the doors to a free-for-all. Forbidding or allowing bawdy content. Requiring identification of posters or allowing anonymity. Filtering by preferred ideology, or religious position. Removing posts by liberals or conservatives or both.

What about hate speech? You may be happy or sad about this but, in most cases, saying bad things about groups of people, whether identified by gender, race, religion, sexual orientation or political affiliation, is legally protected in the United States. Not by §230, but by the First Amendment to the US Constitution. Criminal behavior? §230 has an explicit exception saying it does not apply to liability for obscenity, the sexual exploitation of children or violation of other Federal criminal statutes. As for the claim that "repeal would encourage more moderation by platforms," in many cases it has things backwards, as we will see.

Finally, unconstitutional censorship? Private parties have always been able to "censor" speech by not printing it in their newspapers, removing it from their community bulletin boards, choosing which canvassers or political mobilizers to talk to, or just shutting their doors. They are private actors to whom the First Amendment does not apply. (Looking at you, Senator Hawley.) All §230 does is say that the moderator of a community bulletin board isn't liable when the crazy person puts up a libelous note about a neighbor, but also isn't liable for being "non-neutral" when she takes down that note, and leaves up the one advertising free eggs. If the law says explicitly that she is neither responsible for what's posted on the board by others, nor for her actions in moderating the board, is the government enlisting her in pernicious, pro-egg state censorship in violation of the First Amendment?! "Big Ovum is Watching You!"? To ask the question is to answer it.
Now admittedly, these are really huge bulletin boards! Does that make a difference? Perhaps we should decide that it does and change the law. But we will probably do so better and with a clearer purpose if we know what the law actually says now.

It is time to go back to basics. §230 does two simple things. Platforms are not responsible for what their posters put up, but they are also not liable when they moderate those postings, removing the ones that break their guidelines or that they find objectionable for any reason whatsoever. Let us take them in turn.

1.) It says platforms, big and small, are not liable for what their posters put up. That means that social media, as you know it — in all its glory (Whistleblowers! Dissent! Speaking truth to power!) and vileness (See the internet generally) — gets to exist as a conduit for speech. (§230 does not protect platforms or users if they are spreading child porn, obscenity or breaking other Federal criminal statutes.) It also protects you as a user when you repost something from somewhere else. This is worth repeating. §230 protects individuals. Think of the person who innocently retweets, or reposts, a video or message containing false claims; for example, a #MeToo, #BLM or #Stopthesteal accusation that turns out to be false or even defamatory. Under traditional defamation law, a person republishing defamatory content is liable to the same extent as the original speaker. §230 changes that rule. Perhaps that is good or perhaps that is bad — but think about what the world of online protest would be like without it. #MeToo would become… #Me? #MeMaybe? #MeAllegedly? Even assuming that the original poster could find a platform to post that first explosive accusation on. Without §230, would they? As a society we might end up thinking that the price of ending that safe harbor was worth it, though I don't think so. At the very least, we should know how big the bill is before choosing to pay it.

2.) It says platforms are not liable for attempting to moderate postings, including moderating in non-neutral ways. The law was created because, before its passage, platforms faced a Catch-22. They could leave their spaces unmoderated and face a flood of rude, defamatory, libelous, hateful or merely poorly reasoned postings. Alternatively, they could moderate them and see the law (sometimes) treat them as "publishers" rather than mere conduits or distributors. The New York Times is responsible for libelous comments made in its pages, even if penned by others. The truck firm that hauled the actual papers around the country (how quaint) is not.

So what happens if we merely repeal §230? A lot of platforms that now moderate content extensively for violence, nudity, hate speech, intolerance, and apparently libelous statements would simply stop doing so. You think the internet is a cesspit now? What about Mr. Nocera's claim that they would immediately have to tweak their algorithms or face liability for anti-Semitic postings? First, platforms might well be protected if they were totally hands-off. What incentive would they have to moderate? Second, saying hateful things, including anti-Semitic ones, does not automatically subject one to liability; indeed, such statements are often protected from legal regulation by the First Amendment. Mr. Nocera is flatly wrong. Neither the platform nor the original poster would face liability for slurs, and in the absence of §230, many platforms would stop moderating them.
Marjorie Taylor Greene's "Jewish space-laser" comments manage to be both horrifyingly anti-Semitic and stupidly absurd at the same time. But they are not illegal. As for libel, the hands-off platform could claim to be a mere conduit. Perhaps the courts would buy that claim and perhaps not. One thing is certain: the removal of §230 would give platforms plausible reasons not to moderate content.

Sadly, this pattern of errors has been pointed out before. In fact, I am drawing heavily and gratefully on examples of misstatements analyzed by tech commentators and public intellectuals, particularly Mike Masnick, whose page on the subject has rightly achieved internet-law fame. I am also indebted to legal scholars such as Daphne Keller, Jeff Kosseff and many more, who play an apparently endless game of Whack-a-Mole with each new misrepresentation. For example, they and people like them eventually got the New York Times to retract the ludicrous claim featured above. That story got modified. But ten others take its place. I say an "endless game of Whack-a-Mole" without hyperbole. I could easily have cited five more examples of each error. But all of this begs the question: Why? Rather than fight this one falsehood at a time, ask instead, "why is 'respectable' public discourse on this vital piece of legislation so wrong?"

I am a law professor, which means I am no stranger to mystifying error. It appears to be an endlessly renewable resource. But at first, this one had me stumped. Of course, some of the reasons are obvious.
by Daily Deal on (#5RA7Z)
The 2021 All-in-One Computer Science Bundle has 11 courses to teach you the essentials of computer science. You'll learn about Java, C++, Ruby on Rails, Python, and more. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5RA2Q)
Daisuke Wakabayashi is a NY Times business reporter who seems to have a weird blind spot regarding Section 230 and online content moderation. Actually, perhaps "blind spot" isn't the right term for it. Two years ago, he was responsible for the massive full page, front page of the Business Section article falsely claiming that Section 230 was responsible for hate speech online. That's the one* where, infamously, the NY Times had to write a correction that completely undermined the headline of the article:
by Karl Bode on (#5R9SM)
We've noted for years how corruption and apathy have resulted in the U.S. broadband sector being heavily monopolized, resulting in 83 million Americans having the choice of just one ISP. Tens of millions more Americans only have the choice of their local cable company or an apathetic local phone company that hasn't meaningfully upgraded its aging DSL lines in twenty years. On top of that problem is another problem: ISPs routinely bribe or bully apartment, condo, and other real estate owners into providing them cozy exclusivity arrangements that block broadband competition on a block by block level as well.

While the FCC tried to ban such landlord/monopoly ISP shenanigans back in 2006, the rules were poorly crafted. As a result, this stuff still routinely happens, it's just called... something else (Susan Crawford wrote the definitive piece on this for Wired a few years back).

For example, ISPs will still strike deals with landlords banning any other ISP from advertising in the building. Sometimes landlords will still block competitor access to buildings entirely. Or they'll charge building access fees that unfairly penalize smaller competitors that may not be able to afford them. Or, because the rules prohibit ISPs from blocking access to an ISP's in-building wiring, they'll just lease these building lines to the landlord, who'll then block access to competitors on behalf of the monopoly ISP (because technically the landlord now owns them). It's just noxious, weedy bullshit, and it's been going on for decades.

While the FCC has recently made a little noise about revisiting the subject, any policymaking there could take years to sluggishly materialize. Like most broadband reform, feckless federal leadership has driven reform to take place at a faster cadence on the local level. In Oakland, for example, the city council just voted to effectively eliminate all landlord/ISP anticompetitive shenanigans to encourage broadband competition:
by Tim Cushing on (#5R98Y)
The Chicago Police Department isn't willing to police itself. That much is apparent from the actions of its officers, which include the department setting up an inner city "black site" where arrestees were separated from their rights and representation in order to coerce confessions.

Nonexistent oversight has led directly to horrific outcomes, like unjustified killings and -- in just one jaw-dropping stat -- 100 misconduct accusations resulting from a single SWAT team raid of a wrong address.

Will it ever get better? It seems unlikely. In Illinois, police accountability isn't even an afterthought. Some reforms were passed earlier this year but with a large concession to police departments: a partial burial of officers' misconduct records.
by Timothy Geigner on (#5R94Z)
I don't know how in the world I missed this over the past couple of years, but I'm just in time to introduce you to a trademark lawsuit brought by the Dairy Queen people against W.B. Mason, an office supply and grocery company, over the latter's "Blizzard Water" brand. This story actually starts back in 2017, when W.B. Mason applied for a trademark on the water product. The company had actually been selling Blizzard Water since 2010, but the trademark application appears to have been what made Dairy Queen aware of that.

And as a result, Dairy Queen filed suit over trademark claims. Dairy Queen argued in its initial complaint that its "Blizzard" mark is famous, which, yeah, it is. It also argued that W.B. Mason selling Blizzard Water in its stores was going to cause confusion of origin or association with the buying public. Which... no, come on now. Ice cream is not water and no moron in any kind of hurry is going to confuse the two. Why does Dairy Queen say that product difference doesn't matter? Well, from the filing...
by Tim Cushing on (#5R917)
Outside of Clearview's CEO Hoan Ton-That, it's unclear who truly likes or admires the upstart facial recognition tech company. In the short time since its existence was uncovered, Clearview has managed to turn itself into Pariah-in-Chief of a surveillance industry full of pariahs.

Clearview hasn't endeared itself to the sources for its 10-billion image database, which are (in descending order) (1) any publicly-accessible website/social media platform, and (2) their users. The company has been sued (for violating state privacy laws) in the United States and politely asked to leave by Canada, which found Clearview's nonconsensual harvesting of personal info illegal.

It has subpoenaed activists demanding access to their (protected by the First Amendment) conversations with journalists. It has made claims about law enforcement efficacy that have been directly contradicted by the namechecked police departments. It has invited private companies, billionaire tech investors, police departments in the US, and government agencies around the world to test drive the software by running searches on friends, family members, and whoever else potential customers might find interesting/useful to compile a (scraped) digital dossier on.

Clearview intends to swallow up all the web it can. Caroline Haskins' report for Business Insider (alt link here) catches Clearview's vice president of federal sales pretty much saying the only way to avoid being added to Clearview's database is to stop being online.
|
![]() |
by Mike Masnick on (#5R8WQ)
How big of an embarrassment is Robert F. Kennedy Jr.? Beyond all the anti-vax nonsense, he filed a ridiculously embarrassing lawsuit against Facebook because he was fact-checked. The case was laughed out of court earlier this year. And now he's trying to abuse the courts to out a pseudonymous blogger for writing about how RFK Jr. spoke at a German rally last year that appeared to be organized by folks with ties to right-wing extremists. Paul Levy from Public Citizen, who is trying to stop RFK from succeeding in this bullshit effort, has a blog post with all of the details.
|
![]() |
by Cathy Gellis on (#5R8SG)
It keeps coming up, the all-too-common, and all-too-erroneous, trope that "you can't shout fire in a crowded theater." And it shouldn't, because, as a statement of law, it is completely wrong. It's wrong like saying it's legal to rob a bank. Or, perhaps more aptly, it's wrong like saying it's illegal to wear white after Labor Day. Of course such a thing is not illegal. It's a completely made-up rule and not in any way a reflection of what the law on expression actually is, or ever was. And it's not without consequence that so many people nevertheless mistakenly believe it to be the law, and in so thinking use this misapprehension as a basis to ignore, or even undermine, the otherwise robust protection for speech the First Amendment is supposed to afford. This post therefore intends to do two things: explain in greater detail why it is an incorrect statement of law, and also show how incorrectly citing it as the law inherently poisons any discussion about regulating online speech by giving the idea of such regulation the appearance of more merit than the Constitution would actually permit. Because if it were true that no one could speak this way, then a lot of the proposed regulation for online speech would seem to make more sense and raise far fewer constitutional issues; if it really were constitutional to put these sorts of limits on speech, then why not have some of these other proposed limits too? But the "fire in a crowded theater" trope is an unsound foundation upon which to base any attempt to regulate online speech because it most certainly is NOT constitutional to put these sorts of limits on speech, and for good reason. To understand why, it may help to understand where the idea came from and how it ended up in the public vernacular in the first place. Its origins date back a little over a century, to when the Supreme Court was wrestling with several cases involving defendants who had said things against government policy. In particular, President Wilson wanted the United States to enter what eventually became known as World War I, and he wanted to institute the draft in order to have the military necessary to do it. He got his way, and these decisions have become part of our history, but at the time they were incredibly contentious policies, and people spoke out against them. The government found this pushback extremely inconvenient for generating the public support it needed. So it sought to silence the loudest voices speaking against it by prosecuting them for their messages. In the case of Schenck v. U.S., the defendants had been distributing flyers encouraging young men to resist being drafted. Yes, maybe sometimes you could say such things, the Court decided in upholding their convictions, but sometimes circumstances were such that such expression was no longer permissible. And the standard the Court used for deciding whether it was permissible or not was whether the speech presented a "clear and present danger." But this was a decision that has since been repudiated by the Court. Even Justice Oliver Wendell Holmes, who himself had written the decision, soon came to believe that the standard he articulated in Schenck for what speech could be punished reached too much speech, and he said as much in his dissent in the subsequent Abrams v. U.S. case, which was another one where the defendants were being prosecuted for ostensibly interfering with the government's wartime policy. Over time the rest of the Court joined him in the view that the First Amendment protected far more speech than its earlier decisions had allowed. Today the standard for what speech can be proscribed is the much narrower one articulated in Brandenburg v. Ohio, which said that speech can only be prosecuted if it is intended to incite "imminent lawless action" (read: a riot). It didn't mean provocative speech that might inflame feelings (even the speech of a KKK member was protected) but something far more precipitous. It still left room for some speech to be unprotected, but this more restrained standard is much less likely to prohibit too much speech, as the standard from the Schenck decision had. In the wake of this later jurisprudence limiting what speech can be punished, we can today more easily see, in hindsight, how the Schenck decision let the government suppress way too much speech, which is why the courts have moved away from it. For instance, war, and even the draft, remain controversial issues, but we now expect to be able to speak against them. Moving away from Schenck has made it easier to intuitively understand that the public has the right, and must have the right, to speak against the powerful, including the government. Even if well-intentioned in its actions, the government may nonetheless be wrong to do what it wants to do, and what if those intentions are not noble? The greater the impact of the action the government wants to take, the greater the need to be able to speak against it – and often the greater the government impulse to shut that speech down. But what's key for this discussion here is that, despite the obvious error of the Schenck decision, people are still quoting a part of it as if it were still good law, as if it were EVER good law, and as if the part they are quoting did not itself perpetuate the same fundamental mistake of Schenck and put too much speech beyond the reach of First Amendment protection – which creates its own danger. Because it was in the Schenck decision that Justice Holmes included the casual mention about not being able to shout fire in a crowded theater. It was a line that was only dicta – in other words, it was never actually a statement of law but rather a separate musing used to illustrate the point of law the decision was trying to articulate. It wasn't what the case was about, or a statement that was in any other way given the robust consideration it should have been due if it were to truly serve as a legal benchmark. After all, what if the theater was actually on fire? Would saying so be illegal? Ironically, the people getting the law wrong by citing this line also tend to cite it incorrectly, because what is often omitted from the trope is that Holmes suggested the problem would only arise from "falsely" shouting fire. But even if that criterion were part of the rule, might not such a rule deter people from sounding the alarm even when the theater really was burning? Justice Holmes slipped that single line into the decision as a truth, but it was one he had conjured out of whole cloth. Nowhere did he address the implications of such a rule, or what it would mean when history mistook it for one. Because it is not the rule. It never was the rule. And it never, ever should be cited today as being the rule.
From almost the moment it was judicially uttered it was already out of step with our understanding of what the First Amendment protects, and it has only gotten more and more detached as our understanding of the First Amendment's protection and purpose has grown more precise. Modern jurisprudence has made clear that it is only in the rarest of exceptions that freedom of speech can be infringed. It is therefore legally wrong to suggest otherwise, and even more legally ignorant to use this line to do it. Perhaps more importantly, though, even if it were the rule, it shouldn't be. Even back in the day of firetrap theaters stuffed with flammable celluloid, it was of dubious value as a rule proscribing speech, because sometimes speech really needs to be said, and thus it is important – maybe even of critical importance – that such speech not be chilled. The same is no less true today. Indeed, the more contentious public discourse is, and the higher the stakes, the more important it is that everyone be free, and FEEL free, to express themselves. We can't have people too scared to speak against misuses of power because they might run afoul of someone deciding that certain ideas should not be said. Yet it's that fear of recrimination that often silences people more than any specific sanction. And it's that fear that deprives the public of any benefit of whatever they had to say. Which is why our understanding of the First Amendment's protection has come to be far broader and more permissive than such a rule about crowded theaters would ever allow, because it is the only reading of the Constitution that gives the First Amendment its true protective utility. When we speak of the law regarding free speech, we speak of a law that understands it's better to have too much speech, including some that is valueless, than to risk losing the speech that has value. And it's a rule that applies just as much to speech online as off, as the Supreme Court also announced in Reno v. ACLU. All of our discussions about online speech should therefore start there, with that principle, and not with single throwaway lines from long-discredited opinions that try to pretend that speech is ever so easily unprotected.
|
![]() |
by Daily Deal on (#5R8SH)
The 2021 Complete All-in-One Adobe Creative Cloud Suite Course Bundle has 12 courses designed to teach you about video editing, animations, photography, design, and more. Courses cover popular Adobe products like Lightroom, After Effects, Photoshop, and Adobe XD. The bundle is on sale for $34. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
![]() |
by Mike Masnick on (#5R8J1)
Journalist Dan Froomkin, who is one of the most insightful commentators on the state of the media today, recently began a new effort, which he calls "let me rewrite that for you," in which he takes a piece of journalism that he believes misled readers and rewrites parts of it -- mainly the headline and the lede -- to better present the story. I think it's a brilliant and useful form of media criticism that I figured I might experiment with as well -- and I'm going to start it out with a recent Washington Post piece, one of many the Post has written about the leaked Facebook Files from whistleblower Frances Haugen. The piece is written by reporters Jeremy Merrill and Will Oremus -- and I'm assuming that, like at many mainstream news orgs, editors write the headlines and subheads, rather than the reporters. I don't know Merrill, but I will note that I find Oremus to be one of the most astute and thoughtful journalists out there today, and not one prone to fall into some of the usual traps that journalists fall for -- so this one surprised me a bit (though I'm also using this format on an Oremus piece because I'm pretty sure he'll take the criticism in the spirit intended -- to push for better overall journalism on these kinds of topics). The article's headline tells a story in and of itself: Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation, with a subhead that implies something similar: "Facebook engineers gave extra value to emoji reactions, including ‘angry,’ pushing more emotional and provocative content into users’ news feeds." There's also a graphic that reinforces this suggested point: Facebook weighted "anger" much more than happy reactions. And it's all under the "Facebook under fire" designation. Seeing this headline and image, it would be pretty normal for you to come away with the clear implication: people reacting happily (e.g. with "likes") on Facebook had those shows of emotion weighted at 1/5th the intensity of people reacting angrily (e.g. with "anger" emojis), and that is obviously why Facebook stokes tremendous anger, hatred, and divisiveness (as the story goes). But... that's not actually what the details show. The actual details show that initially, when Facebook introduced its list of five different "emoji" reactions (to be added to the long-iconic "like" button), it weighted all five of them as five times as impactful as a like. That means that "love," "haha," "wow," and "sad" were also weighted at five times a single like, identical to "angry." And while the article does mention this in the first paragraph, it immediately pivots to focus only on the "angry" weighting and what that means. When combined with the headline and the rest of the article, it's entirely possible to read the article and not even realize that "love," "sad," "haha," and "wow" were also ranked at 5x a single "like," and to believe that Facebook deliberately chose to ramp up promotion of "anger"-inducing content. It's not only possible, it's quite likely. Hell, it's how I read the article the first time through, completely missing the fact that it applied to the other emojis as well. The article also completely buries how quickly Facebook realized this was an issue and adjusted the policy.
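To make that weighting concrete, here's a minimal sketch of the kind of additive scoring being described, assuming a simple per-post tally. Only the weights themselves -- five points for every emoji reaction, one point for a "like" -- come from the reporting; the function, the data shapes, and the names are hypothetical illustrations, not Facebook's actual code.

```python
# Hypothetical sketch of the additive reaction weighting described in the reporting.
# Only the weights (5 for every emoji reaction, 1 for a "like") come from the article;
# everything else here is an invented illustration, not Facebook's code.

REACTION_WEIGHTS = {
    "like": 1,
    "love": 5,
    "haha": 5,
    "wow": 5,
    "sad": 5,
    "angry": 5,  # weighted the same as every other emoji reaction, not singled out
}

def engagement_score(reaction_counts):
    """Sum the weighted reactions for a single post under the original weights."""
    return sum(REACTION_WEIGHTS.get(name, 0) * count
               for name, count in reaction_counts.items())

# Two posts with the same mix of reactions score identically, whether the emoji
# reactions happen to be "angry" or "love."
angry_post = {"like": 10, "angry": 4}
loved_post = {"like": 10, "love": 4}
assert engagement_score(angry_post) == engagement_score(loved_post) == 30
```

The point of the sketch is the one buried in the article: under the original weights, an "angry" reaction moved a post's score exactly as much as a "love" or a "wow" did.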
While the article does mention that quick adjustment, it's very much buried late in the story, as are some other relevant facts that paint the entire story in a very different light than the way many people are reading it. When some people highlighted this, Oremus pointed out that the bigger story here is "how arbitrary initial decisions, set by humans for business reasons, become reified as the status quo." And he's right. That is the more interesting story and one worth exploring. But that's not how this article is presented at all! And his own article suggested the "reified as the status quo" part is inaccurate as well, though, again, that's buried further down in the story. The article is very much written in a way where the takeaway for most people is going to be "Facebook highly ranks posts that made you angry, because stoking divisiveness was good for business, and that's still true today." Except none of that is accurate. So... let's rewrite that, and try to better get across the point that Oremus claims was the intended point of the story. The original title, again, is:
|
![]() |
by Karl Bode on (#5R8A9)
To gain regulatory approval for its $26 billion merger with Sprint, T-Mobile made numerous promises. One was that the deal would immediately create jobs (there've been 5,000 layoffs so far). Another was that the company would work closely with Dish Network to help it build a fourth wireless network that would replace Sprint, theoretically "fixing" the reduction in competition the deal created. As predicted, that plan isn't working out so well. T-Mobile was supposed to closely shepherd Dish's own network build over a period of seven years, but the two companies have proven largely incapable of getting along. Recently, Dish accused T-Mobile of prematurely shutting down its 3G (CDMA) network (which Dish is currently using as it builds out a 5G network). T-Mobile in turn accused Dish of being too cheap to pay for upgraded 4G and 5G phones for its fairly tiny userbase. This week T-Mobile balked, issuing a hilariously passive-aggressive press release saying it would be leaving its 3G network on for a little bit longer because Dish was, effectively, incompetent:
|
![]() |
by Tim Cushing on (#5R84P)
Interpol has become a weapon. The international law enforcement consortium does have a legitimate purpose. It's there to prevent people from escaping justice just because they've left the country where they've committed crimes. It's a worthy goal, but it's an easily abused mechanism. For instance, there's Turkey's government, which really wants to keep its top position on the "Most Journalists Jailed" list. It can't do this without the help of Interpol. In 2018, Turkey sent "red notice" requests to Interpol seeking journalists accused of whatever bullshit the government made up, in hopes of having police forces in other nations round up the two self-exiled writers the government wanted to punish. The problem is ongoing. And it may be getting worse, according to this report from Josh Jacobs for The Guardian.
|
![]() |
by Timothy Geigner on (#5R7V3)
When I became a parent nearly seven years ago, I tasked myself with reading up on what to expect and how to be a good parent. Among many more important things, one prominent topic that led to many discussions in our household was screen time for children. And, as you might expect, that conversation has been ongoing ever since. There are lots of theories out there about just how much screen time kids should get at certain ages, but the unifying thread behind those theories is typically that it should be relatively limited. Some nations have even gotten into the game of forcing screen time limits on children, or have at least gone that route for targeted types of screen time, such as video games. But what if I told you that all that worrying done by parents, all the reading on the topic, and all of the effort put into it by governments is basically for nothing? Well, that seems to be the main conclusion reached by a new study that finds that the impact of recreational screen time on children is statistically negligible.
|
![]() |
by Copia Institute on (#5R7Q7)
Summary: A major challenge for global internet companies is figuring out how to deal with different rules and regulations in different countries. This has proven especially difficult for internet companies looking to operate in China, a country in which many of the most popular global websites are blocked. In 2015, there was an article highlighting how companies like Evernote and LinkedIn had avoided getting blocked in China, mainly by complying with the Chinese government’s demands that they moderate certain content. In that article, LinkedIn’s then-CEO Jeff Weiner noted: "We're expecting there will be requests to filter content," he said. "We are strongly in support of freedom of expression and we are opposed to censorship," he said, but "that's going to be necessary for us to achieve the kind of scale that we'd like to be able to deliver to our membership." Swedish journalist Jojje Olsson tweeted the article when it came out. Six years later, LinkedIn informed Olsson that his own profile would no longer be available in China after he referenced the Tiananmen Square massacre in it.
|
![]() |
by Tim Cushing on (#5R7KA)
The increasing reliance on tech by law enforcement means an increasing reliance on private companies. It's inevitable that tech developments will be adopted by government agencies, but a lot of this adoption has occurred with minimal oversight or public input. That lack of public insight carries forward to criminal trials, where companies have successfully stepped in to prevent defendants from accessing information about evidence, citing concerns about exposed trade secrets or proprietary software. In other cases, prosecutors have dropped cases rather than risk discussing supposedly sensitive tech in open court. Elizabeth Joh's new article for Science says corporations are making existing transparency and accountability problems in law enforcement even worse.
|
![]() |
by Mike Masnick on (#5R7DZ)
For reasons I don't fully understand, over the last few months many critics of "big tech" and Facebook, in particular, have latched onto the idea that "the algorithm" is the problem. It's been almost weird how frequently people insist to me that if only social media got rid of algorithmically recommending stuff, and went back to the old-fashioned chronological news feed order, all would be good in the world again. Some of this seems based on the idea that algorithms are primed to lead people down a garden path from one type of video to ever more extreme videos (which certainly has happened, though how often is never made clear). Some of it seems to be a bit of a kneejerk reaction to simply disliking the fact that these companies (which many people don't really trust) are making decisions about what you may and may not like -- and that feels kinda creepy. In the past few weeks, there's been a bit of a fever pitch on this topic, partly in response to whistleblower Frances Haugen's leak of documents, in which she argues that Facebook's algorithm is a big part of the problem. And then there's the recent attempt by some Democrats in Congress to take away Section 230 from algorithmically recommended information. As I noted, the bill is so problematic that it's not clear what it's actually solving. But underlying all of this is a general opinion that "algorithms" and "algorithmic recommendations" are inherently bad and problematic. And, frankly, I'm confused by this. At a personal level, the tools I've used that do algorithmic recommendations (mainly Google News, Twitter, and YouTube) have been... really, really useful? And also pretty accurate over time in learning what I want, and thus providing me more useful content in a more efficient manner, which has been pretty good for me, personally. I recognize that not everyone has that experience, but at the very least, before we unilaterally declare algorithms and recommendation engines bad, it might help to understand how often they're recommending stuff that's useful and helpful, as compared to how often they're causing problems. And, for all the talk about how Haugen's leaking has shone a light on the "dangers" of algorithms, the actual documents that she's leaked might suggest something else entirely. Reporter Alex Kantrowitz has reported on one of the leaked documents, regarding a study Facebook did on what happens when it turns off the algorithmic rankings, and... it was not pretty. But, contrary to common belief, Facebook actually made more money without the News Feed algorithm.
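For anyone who wants the distinction made concrete, here's a toy sketch of the difference between a chronological feed (what people mean by "turning the algorithm off") and an engagement-ranked one. The field names and the scoring signal are assumptions invented for the example; nothing here describes Facebook's actual News Feed system.

```python
# Illustrative only: a toy contrast between a chronological feed ("the algorithm
# turned off") and an engagement-ranked feed. Field names and the scoring signal
# are assumptions for the example, not any platform's real ranking system.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int     # larger means more recent
    engagement: float  # whatever weighted-reaction signal the platform computes

def chronological_feed(posts):
    # No ranking signal at all: newest posts first.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def ranked_feed(posts):
    # The simplest possible "algorithmic" feed: highest engagement first.
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

posts = [
    Post("a friend", timestamp=3, engagement=2.0),
    Post("a page you follow", timestamp=2, engagement=9.5),
    Post("a group you joined", timestamp=1, engagement=5.0),
]
print([p.author for p in chronological_feed(posts)])  # newest first
print([p.author for p in ranked_feed(posts)])         # highest engagement first
```

The only difference between the two feeds is the sort key, which is why arguments about "the algorithm" are really arguments about which signal a platform chooses to sort by.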
|
![]() |
by Tim Cushing on (#5R7AE)
Another set of plaintiffs insisting social media platforms have it in for "conservative" users has lost in court. The hook for this lawsuit is the (specious) claim that government officials' statements saying social media services should do more to curb the spread of misinformation (COVID, elections, etc.) somehow transformed these private companies into state actors. So, when the platforms did decide to moderate the conspiracy theorists' accounts, it was ACTUAL CENSORSHIP. Here's Eric Goldman's summary of the case, the plaintiff, and the lawsuit's outcome.
|