by Timothy Geigner on (#5QN9M)
Regular readers here will by now likely be familiar with Twitch streamer "Amouranth". She has made it onto our pages as part of the year-long mess that Amazon's Twitch platform appears to be making for itself, during which it has demonstrated its willingness both to treat its creative community quite poorly and to fail to properly communicate that poor treatment to much of anyone at all. For instance, Twitch has temporarily banned Amouranth or kept her from live-streaming several times, all likely due to the content of her streams. That content seems almost perfectly designed to push right up against the line of Twitch's streaming guidelines, including so-called "hot tub streaming" and ASMR streams. Twitch has never been great about explaining the reasons for bans like these, but in the past it has at least linked to the offending content so that a streamer knows which videos were objectionable. But with some streamers, including Amouranth, Twitch oftentimes doesn't even bother doing that, such as when it demonetized Amouranth's videos without warning or explanation.

So, while Twitch, quite frankly, now has far, far bigger issues on its hands, it's worth pointing out that Twitch has yet again banned Amouranth without warning or explanation. Though it appears that this time Twitch has some friends tagging along, in the form of Instagram and TikTok.
Techdirt
Link: https://www.techdirt.com/
Feed: https://www.techdirt.com/techdirt_rss.xml
Updated: 2025-10-04 23:47
by Tim Cushing on (#5QN7A)
Here's what you need to know about Alabama and its public records laws before we head to a depressing state Supreme Court opinion that makes everything worse:
by Tim Cushing on (#5QN2P)
The Los Angeles Sheriff's Department is apparently incapable of being reformed. Over the years, the LASD has run an illegal prison informant program, one that culminated in an FBI investigation during which the LASD threatened FBI agents and federal witnesses.

But what can one really expect from an agency willing to staff itself with statutory rapists, thieves, and cops considered unhirable anywhere else? The department is so internally corroded it has become home to gangs and cliques of rogue officers who revel in deploying excessive force and violating rights.

The only things that can bring the LASD down are its critics and its oversight. The department knows this, and that's why it's taking action to clean itself up. Oh wait, it's the other thing.
by Mike Masnick on (#5QN0R)
Regular readers know that I'm a believer in trying to get the big internet companies to embrace a more protocols-over-platforms approach, in which they build something that others can then build on as well, and improve in their own ways (without fear of having the rug pulled out from under them). It's why I'm hopeful about Twitter working on just such a plan with its Bluesky project. Facebook, unfortunately, takes a very different view of the world.

I understand that some of Facebook's thinking here is a reaction to what happened when it created a more open platform for developers... and then Cambridge Analytica happened, which has been an ongoing (if somewhat confusingly understood) black eye for the company. But Facebook has always been a bit skittish about how open it wants to be. Famously, it killed Power.com with an unfortunate reading of the CFAA when that company tried to create a universal login for various social media sites, to help people avoid being locked into just one social media site.

But the latest example is really horrible. Louis Barclay has a write-up in Slate about how Facebook banned him for life and threatened him with a lawsuit because he created a tool to make everyone's Facebook experience better (though less profitable for Facebook). The tool actually sounds quite nifty:
by Cathy Gellis on (#5QMZ8)
Some lawmakers are candid about their desire to repeal Section 230 entirely. Others, however, express more of an interest in trying to split this baby, and "reform" it in some way to somehow magically fix all the problems with the Internet, without doing away with the whole thing and therefore the whole Internet as well. This post explores several of the ways they propose to change the statute, ostensibly without outright repealing it. And several of the reasons why each proposed change might as well be an outright repeal, given its practical effect.

But before getting into the specifics of why each type of change is bad, it is important to recognize the big reason why just about every proposal to change Section 230, even just a little bit, undermines it to the point of uselessness: if you have to litigate whether Section 230 applies to you, you might as well not have it on the books in the first place. Which is why there's really no such thing as a small change, because if your change in any way puts that protection in doubt, it has the same debilitating effect on online platform services as an actual repeal would have.

This is a key point we keep coming back to, including in suggesting that Section 230 operates more as a rule of civil procedure than any sort of affirmative subsidy (as it is often mistakenly accused of being). Section 230 does not do much that the First Amendment would not itself do to protect platforms. But the crippling expense of having to assert one's First Amendment rights in court, potentially at an unimaginable scale given all the user-generated content Internet platforms facilitate, means that this First Amendment protection is functionally illusory if there's not a mechanism to get platforms out of litigation early and cheaply. It is the job of Section 230 to make sure they can, and that they won't have to worry about being bled dry in legal costs defending themselves even where, legally, they have a defense.

Without Section 230, platforms' only choice would be to not engage in the activity that Section 230 explicitly encourages: intermediating third-party content, and moderating it. If they don't moderate, their services may become cesspools; but if moderating means potentially being bankrupted in litigation (or even, as in the case of FOSTA, potentially prosecuted), then they won't moderate. And as for intermediating content, if they can get into legal trouble for allowing the wrong content, then they will either host less user-generated content or not be in the business of hosting any user content at all. Because if they don't make these choices, they set themselves up to be crushed by litigation.

Which is why it is not even the issue of ultimate liability that makes lawsuits such an existential threat to an Internet platform. It's just as bad if the lawsuit that crushes them is over whether they were entitled to the statutory liability protection needed to avoid the lawsuit entirely. And we know lawsuits can have that annihilating effect when platforms are forced to litigate these questions. One conspicuous example is Veoh Networks, a video-hosting service that today should still be a competitor to YouTube. But it isn't a competitor because it is no longer a going concern. It was obliterated by the costs of defending its entitlement to assert the more conditional DMCA safe harbor defense, even though it won! The Ninth Circuit found the platform should have been protected.
But by then it was too late; the company had been run out of business, and YouTube lost a competitor that, today, the marketplace still misses.

It would therefore be foolhardy, and antithetical to lawmakers' professed interest in having a diverse ecosystem of Internet services, for them to do anything to make Section 230 similarly conditional, thereby risking even further market consolidation than we already have. But that's the terrible future that all these proposals invite.

More specifically, here's why each type of proposal is so infirm:

Liability carve-outs. One way lawmakers propose to change Section 230 is to deny its protection to specific forms of liability that may arise from user content. A variety of these liability carve-outs have been proposed, and all require further scrutiny. For instance, one carve-out popular with lawmakers would make Section 230 useless against claims that posts violate anti-discrimination laws. But while at first glance such a carve-out may seem innocuous, we know that it's not. One way it's not is that people eager to discriminate have shown themselves keen to try to force platforms to help them do it, including by claiming that anti-discrimination laws protect their own efforts to discriminate. So far they have largely been unable to conscript platforms into enabling their hate, but if Section 230 no longer protects platforms from these forms of liability, then racists will finally be able to succeed by exploiting that gap.

These carve-outs also run the risk of making it harder for people who have been discriminated against to find a place to speak out about it, since they will force platforms to be less willing to offer space to speech they might find themselves forced to defend, because even if the speech were defensible, just having to answer for it can be ruinous for the platform. We know that platforms will feel forced to turn away all sorts of worthy and lawful speech if that's what they need to do to protect themselves, because we've seen this dynamic play out as a result of the few carve-outs Section 230 has had from the start. For example, if the thing wrong with the user expression was that it implicated an intellectual property right, then Section 230 never protected the platform from liability for its users' content. It turns out that platforms have some liability protection via the DMCA, but that protection is weaker and more conditional than Section 230, which is why we see so much Swiss cheese online, with videos and other content so often removed – even in cases where they were not actually infringing – because taking it down is the only way platforms can avoid trouble and not run the risk of going the way of Veoh Networks themselves.

Such an outcome is not good for encouraging free expression online, which was a main driver behind passing Section 230 originally, and it isn't even good for the people these carve-outs were ostensibly intended to help, as we saw with FOSTA, a more recently added liability carve-out. Instead of protecting people from sexual exploitation, FOSTA led to platforms taking away their platform access, which drove them into the streets, where they got hurt or killed.
And, of course, it also led to other perfectly lawful content disappearing from the Internet, like online dating and massage therapy ads, since FOSTA had made it impossibly risky for platforms to continue to facilitate them.

It's already a big problem that these liability carve-outs exist at all. If Section 230 were to be changed in any way, it should be changed to remove them. But in any case, we certainly shouldn't be making any more of them if Section 230 is still to maintain any utility in protecting the platforms we need to facilitate online user expression.

Transactional speech carve-outs. As described above, one way lawmakers are proposing to change Section 230 is to carve out certain types of liability that might attach to user-generated content. Another way is to try to carve out certain types of user expression itself. And one specific type of user expression in lawmakers' crosshairs (and also some courts') is transactional speech.

The problem with this invented exception to Section 230 is that transactional speech is still speech. "I have a home to rent" is speech, regardless of whether it appears on a specialized platform that only hosts such offers, or on more general-purpose platforms like Craigslist or even Twitter, where such posts are just some of the kinds of user expression enabled.

Lawmakers seem to be getting befuddled by the fact that some of the more specialized platforms may earn their money through a share of any consummated transaction their users' expression might lead to, as if this form of monetization were somehow meaningfully distinct from any other monetization model, or otherwise somehow waived their First Amendment right to do what basically amounts to moderating speech to the point where it is the only type of user content they allow. And it is this apparent befuddlement that has led lawmakers to attempt to tie Section 230 protection to certain monetization models, and to go so far as to eliminate it for certain ones.

Even if these proposals were carefully drafted, they would only end up chilling e-commerce by forcing platforms to use less viable monetization models. But what's worse is that the current proposals are not being carefully drafted, and so we see bills threatening the Section 230 protection of any platform with any sort of profit model. Which, naturally, they all need to have in some way. After all, even non-profit platforms need some sort of income stream to keep the lights on, but proposals like these threaten to make it all but impossible for any platform to have the money it needs to operate.

Mandatory transparency report demands. As we've discussed before, it's good for platforms to try to be candid about their moderation decisions, and especially about what pressures forced them to make those decisions, like subpoenas and takedown demands, because it helps highlight when these instruments are being abused. Such reports are therefore a good thing to encourage.

But encouragement is one thing; requiring these reports is another, and that's what certain proposals try to do by conditioning Section 230 protection on their publication. They are all a problem. Making transparency reports mandatory is an unconstitutional form of compelled speech. Platforms have the First Amendment right to be arbitrary in their moderation practices. We may prefer them to make more reasoned and principled decisions, but it is their right not to. And they can't enjoy that right if they are forced to explain every decision they've made.
Even if they wanted to, it may be impossible: content moderation happens at scale, which inherently means it will never be perfect, and full transparency may also be ill-advised because it teaches bad actors how to game their systems.

Obviously a platform could still refuse to produce the reports these bills would prescribe. But if that decision risks the statutory protection the platform depends on to survive, then it is not really much of a decision. The platform finds itself compelled to speak in the way the government requires, which is not constitutional. And it also would end up impinging on the freedom to moderate, which both the First Amendment and Section 230 itself protect.

Mandatory moderation demands. But it isn't just transparency in moderation decisions that lawmakers want. Some legislators are running straight into the heart of the First Amendment and demanding that they get to dictate how platforms do any of their moderation, by conditioning Section 230 protection on platforms making these decisions the way the government insists.

These proposals tend to come in two political flavors. While they are generally utterly irreconcilable – it would be impossible for any platform to satisfy both of them at the same time – they each boil down to the same unconstitutional demand.

Some of these proposals reflect legislative outrage at platforms for some of the moderation decisions they've made. Usually they condemn platforms for having removed certain speech or even banned certain speakers, regardless of how poor those speakers' behavior or how harmful the things they said. This condemnation leads lawmakers who favor these speakers and their speech to want to take away the platforms' right to make these sorts of moderation decisions by, again, conditioning Section 230 on their continuing to leave these speakers and speech up on their systems. The goal of these proposals is to set up a situation where it is impossible for platforms to continue to exercise their First Amendment discretion to moderate, and possibly remove such speech, lest they lose the protection they depend on to exist. Which is not only unconstitutional compulsion, but also itself ultimately voids the part of Section 230 that expressly protects that discretion, since it's discretion that platforms can no longer exercise.

On the flip side, instead of conditioning Section 230 on not removing speakers or speech, other lawmakers would like to condition Section 230 on requiring platforms to kick off certain speakers and speech (and sometimes even the same ones that the other proposals are trying to keep up). Which is just as bad as the other set of proposals, for all the same reasons. Platforms have the constitutional right to make these moderation choices however they choose, and the government does not have the right, per the First Amendment, to force them to make those choices in any particular way. But if their critical Section 230 protection can be taken away when they don't moderate however the sitting political power demands at the moment, then that right has been impinged and Section 230 rendered a nullity.

Algorithmic display carve-outs. Algorithmic display has become a target for many lawmakers eager to take a run at Section 230. But as with every other proposed reform, changing Section 230 so that it no longer applies to platforms using algorithmic display would end up obliterating the statute for just about everyone.
And it's not clear that lawmakers proposing these sorts of changes quite realize this inevitable impact.

Part of the problem seems to be that they don't really understand what an algorithm is, or how commonly algorithms are used. They seem to regard algorithms as something nefarious, but there's nothing about an algorithm that inherently is. The reality is that nearly every platform uses software in some way to handle the display of user-provided content, and algorithms are just the programming logic coded into that software giving it the instructions for how to display the content. These instructions can be as simple as telling the software to display the content chronologically, alphabetically, or in some other way the platform has decided to render it, which the First Amendment protects (a short illustrative sketch appears at the end of this post). After all, a bookstore can decide to shelve books however it wants, including in whatever order or with whatever prominence it wants. What these algorithms do is implement these sorts of shelving decisions, just as applied to the online content a platform displays.

If algorithms were effectively banned by making the Section 230 protection platforms need to host user-generated content contingent on not using them, it would become impossible for platforms to actually render any of that content. They either couldn't do it technically, if they abided by the rule in order to keep their Section 230 protection, or couldn't do it legally, if that protection were withheld because they used such display. Such a rule would also represent a fairly significant change to Section 230 itself by gutting the protection for moderation decisions, since those decisions are often implemented by an algorithm. In any case, conditioning Section 230 on not using algorithms is not a small change but one that would radically upend the statutory protection and all the online services it enables.

Terms of Service carve-outs. One idea (which is, oddly, backed by Facebook, even though it needs Section 230 to remain robust in order to defeat litigation like this) is that Section 230 protection should be contingent on platforms upholding their terms of service. As with the other proposals, this one is also a bad idea.

First of all, it negates the utility of Section 230 protection by making its applicability the subject of litigation. In other words, instead of being protected from litigation, platforms will now have to litigate whether they are protected from litigation, which means they aren't really protected at all.

It also fails to understand what terms of service are for. Platforms have them in order to limit their liability exposure. There's no way they are going to write them in a way that has the effect of increasing their liability exposure. The way they are generally written now is to put potentially wayward users on notice that if they don't act consistently with the terms of service, the service may be denied them. They aren't written as affirmative promises to do anything, because they can't be affirmative promises – content moderation at scale is impossible to do perfectly, so it would be foolish for platforms to obligate themselves to do the impossible. But that's what changing Section 230 in this way would do: create this obligation if platforms are to retain their needed protection.

The pipe dream some seem to have – that if only platforms did more moderation in accordance with their terms of service as currently written, everything would be perfect and wonderful – is hopelessly naïve.
After all, nothing about how the Internet works is nearly that simple. Nevertheless, it is fine to want platforms to do as much as they can to meet the aspirational goals they've articulated in their terms of service. But changing Section 230 in this way won't lead them to do so. Instead it will make it legally unsafe for platforms to even articulate any such aspirations, and thus less likely to meet any of them. Which means that regulators won't get more of what they seek with this sort of proposal, but less.

Pre-emption elimination. One of the key clauses that makes Section 230 useful is its pre-emption provision. This is the provision that tells states they cannot rejigger their own state laws in ways that would interfere with the operation of Section 230. It is so important because it gives platforms the certainty they need to be able to benefit from the statute's protection. For it to be useful, they need to know that it applies to them and that states have no ability to mess with it.

Unfortunately we are already seeing increasing problems with state and local jurisdictions attempting to ignore this pre-emption provision, and courts sometimes even letting them. On top of that, there are proposals in Congress to deliberately undermine it. In fact, with FOSTA, it already has been undermined, with individual state governments now able to impose liability directly on platforms for their users' activity, no matter how arbitrarily.

The state moderation bills illustrate what is wrong with states getting to mess with Section 230 and make its protection suddenly conditional – and therefore effectively useless. Given our current political polarization, the problem should be obvious: how is any platform going to reconcile the moderation demands of a Red State with the moderation demands of a Blue State? What is an inherently interstate Internet platform to do? Whose rules should it follow? What happens to it if it doesn't?

Congress put in the pre-emption provision because it knew that platforms could not possibly comply with all the myriad rules and regulations that every state, county, city, town, and locality might develop to impose liability on them. So it told them all to butt out. It would be a mistake to now gut that provision if Section 230 is still going to have any value in making it safe for platforms to continue to do their job enabling the Internet.
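To make the earlier point about algorithmic display concrete, here is a minimal, hypothetical sketch (in Python; not taken from the article or from any platform's actual code) showing that even a plain reverse-chronological or alphabetical feed is an "algorithm" in the sense these proposals use the word – just a shelving decision expressed as sorting logic:

```python
# A minimal, hypothetical sketch (not any platform's real code) of what
# "algorithmic display" usually amounts to: ordinary sorting logic applied
# to user-submitted posts before they are shown.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime

posts = [
    Post("alice", "I have a home to rent", datetime(2021, 10, 4, 9, 30)),
    Post("bob", "Lost dog, please help", datetime(2021, 10, 5, 14, 0)),
    Post("carol", "Concert this weekend", datetime(2021, 10, 3, 18, 45)),
]

# A reverse-chronological feed is an algorithm...
newest_first = sorted(posts, key=lambda p: p.posted_at, reverse=True)

# ...and so is alphabetical ordering by author: a different "shelving" decision.
by_author = sorted(posts, key=lambda p: p.author)

for post in newest_first:
    print(f"{post.posted_at:%Y-%m-%d %H:%M}  {post.author}: {post.text}")
```

A rule conditioning Section 230 on not using "algorithms" would, read literally, reach even display logic this simple.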
by Daily Deal on (#5QMZ9)
The Complete 2020 Learn Linux Bundle has 12 courses to help you learn Linux OS concepts and processes. You'll start with an introduction to Linux and progress to more advanced topics like shell scripting, data encryption, supporting virtual machines, and more. Other courses cover Red Hat Enterprise Linux 8 (RHEL 8), virtualizing Linux OS using Docker, AWS, and Azure, how to build and manage an enterprise Linux infrastructure, and much more. It's on sale for $59.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Tim Cushing on (#5QMXG)
Somehow, "TSA" stands for "The Terrorists Won." In exchange for endless inconveniences, inconsistently deployed security measures, and a steady stream of intrusive searches and rights violations, we've obtained a theatrical form of security that's more performative than useful.Since screeners continue to miss nearly every piece of contraband traveling through security checkpoints, the TSA has opted to buy even more screening equipment. Apparently, it's hoping no one will say it's not doing anything about these failures. It is throwing money at the problem. That's something. Unfortunately, it doesn't appear to be solving it.A new report [PDF] from the DHS Inspector General says the shiny new scanners the agency bought to "address capability gaps in carry-on bag screening" aren't doing that now, and perhaps never will. The TSA obtained 300 computed tomography (CT) scanners, which were supposed to detect a broader range of explosives and make flying slightly less inconvenient by allowing passengers to keep their fluids and their laptops in their respective bags. The ultimate goal is safer flying, less hassle at checkpoints, and faster throughput. It has achieved none of these goals, despite more than $1 billion being obligated towards the rollout of CT scanners nationwide.Instead of meeting its own four-factor test for essential capabilities, the TSA's new toys fell short of every self-imposed metric.
by Karl Bode on (#5QMBJ)
Given the seemingly endless privacy scandals that now engulf the tech and telecom sectors on a near-daily basis, many consumers have flocked to virtual private networks (VPNs) to protect and encrypt their data. One study found that VPN use quadrupled between 2016 and 2018 as consumers rushed to protect data in the wake of scandals, breaches, and hacks.

Unfortunately, many consumers are flocking to VPNs under the mistaken impression that such tools are a near-mystical panacea, acting as a sort of bulletproof shield that protects them from any potential privacy violations on the internet. Not only is that not true (ISPs, for example, have a universe of ways to track you anyway), but many VPN providers are even less ethical than privacy-scandal-plagued companies or ISPs.

After a few years in which VPN providers were repeatedly found to be dodgy or to track user data they claimed they didn't, security professionals have shifted their thinking on whether to recommend using one at all. While folks requiring strict security over wireless may still benefit from using a reputable VPN provider, experts say the landscape has changed. Improvements in the overall security of ordinary browsing (bank logins, etc.), plus the risk of choosing the wrong VPN provider, mean that many people may just be better off without one:
by Leigh Beadon on (#5QHTM)
Five Years Ago

This week in 2016, the Trump campaign was reacting to the leaked pages of his 1995 tax returns by threatening to sue the New York Times, and also reacting to some ads from the Clinton campaign by threatening to sue them, too — while at the same time, the campaign was facing its own bogus threat from the Phoenix Police over imagery of cops in an ad. The big story, though, was the revelation that Yahoo had secretly built email scanning software under pressure from the feds. This led to basically every other tech company rapidly denying that they'd done the same, followed by Yahoo itself issuing a tone-deaf non-denial denial of the report. The media was very confused about the story, with the New York Times and Reuters claiming totally different explanations for the email scanning, and over the course of the week even more disagreements and confusion arose.

Ten Years Ago

This week in 2011, countries around the world were signing ACTA and finally admitting that it meant they'd have to change their copyright laws, while Brazil was drafting its own anti-ACTA framework for the internet. The Supreme Court declined to consider an appeals court ruling that properly stated music downloads are not public performances, though this didn't mean (as some claimed) that downloading had been legalized. Meanwhile, another judge dismissed a lawsuit over streaming video, but mostly avoided the larger copyright questions, and we saw a set of good rulings against copyright trolls, and one bad one. This was also the week that Steve Jobs died at age 56.

Fifteen Years Ago

This week in 2006, Facebook was getting a start on its soon-to-be tradition of threatening people who make useful third-party tools. Amazon was abandoning its attempt to make an early version of something like Street View, and Wal-Mart was abandoning its much more stupid attempt to offer a MySpace clone. The fight between Belgian news publishers and Google was continuing, while the copyright fight over My Sharona was dragging in Yahoo, Amazon, and Apple. And the big news — though it was still just a rumor with lots of conflicting information going around, making it hard to tell if it was true — was that Google was planning to buy YouTube for $1.6 billion.
by Tim Cushing on (#5QH4W)
Drug dogs are man's best friend, if that man happens to be The Man. "Probable cause on four legs" is the unofficial nickname for these clever non-pets, which give signals only their handlers can detect, granting cops permission to perform searches that would otherwise require a warrant.

They're normally seen at traffic stops and border checkpoints, but they're also used to sniff other places cops want to search without getting a warrant. This has led to a few legal issues for law enforcement, with courts occasionally reminding them that a dog sniff is a search and, if the wrong place is sniffed, it's a constitutional violation.

The top court in Connecticut has now curtailed the use of drug dogs, finding that sniffs are still searches and that these searches are unreasonable under the state constitution when performed in certain places -- namely, outside the doors of motel rooms. (via FourthAmendment.com)

In this case, police officers allowed their dog to sniff at the doors of motel rooms until it alerted on one. Using this quasi-permission, officers entered the room and found contraband. The government argued that even if this was a search, it was performed in a place (a hotel or motel) where citizens have a lowered expectation of privacy, considering that the rooms are only rented, occupied for only a short time, and accessible by hotel staff.

In a really well-written opinion [PDF], the court reminds the government that a lowered expectation of privacy is not the same as a nonexistent expectation of privacy. And, more importantly, it reminds the government that, while a motel room may not have the sanctity of a person's permanent home, it is a home away from home, and afforded more protection than, say, a car parked on the curb of a public road.

The court addresses all of the government's arguments and finds none of them persuasive.
by Karl Bode on (#5QH1C)
Having covered telecom for a long time, I've lost track of the number of times I've watched some befuddled lawmaker be shocked by the content of their own bill. Usually, that's because they outsourced the writing of it to their primary campaign contributors, which in telecom is usually AT&T, Verizon, Comcast, and Charter. Sometimes they're so clueless about what their "own" bill includes that they'll turn to lobbyists in the middle of a hearing to seek clarity. This is, of course, outright corruption. But we tend to laugh it off and normalize it, and the press generally refuses to accurately label it corruption.

There are endless parallels when it comes to the energy sector. Like this week, when Texas lawmakers were shocked to realize their recent state energy bill failed to require that Texas natural gas companies harden their infrastructure for climate change--despite the fact that it was their own bill that included the giant loopholes making that possible.

In the wake of the disastrous and deadly climate-related crisis in Texas last winter, the state passed several bills purporting to fix the problem. Many, like Senate Bill 3, largely just kicked the can down the road, calling for a mapping of Texas's existing energy infrastructure and giving the Texas Railroad Commission 180 days to finalize its weatherization rules. None of the solutions, of course, challenged entrenched energy providers or tackled the core of the problem in Texas: an almost mindless deference to wealthy local energy executives.

At a recent hearing in Texas, lawmakers blasted both the Texas Railroad Commission and local natural gas companies when they realized the latter had failed to weatherize their infrastructure with winter looming. The problem was that their own legislation provided the loopholes that made this possible:
If You Want To Know Why Section 230 Matters, Just Ask Wikimedia: Without It, There'd Be No Wikipedia
by Glyn Moody on (#5QGZW)
It sometimes seems that Techdirt spends half its time debunking bad ideas for reforming or even repealing Section 230. In fact, so many people seem to get the law wrong that Mike was moved to write a detailed post on the subject with the self-explanatory title "Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act". It may be necessary (and tiresome) work rebutting all this wrongness, but it's nice for a change to be able to demonstrate precisely why Section 230 is so important. A recent court ruling provides just such an example:
by Karl Bode on (#5QGWE)
When Mike introduced our latest Greenhouse series on content moderation at the infrastructure layer, he made it abundantly clear this was a particularly thorny and complicated issue. While there's been a relentless focus on content moderation at the so-called "edge" of the internet (Google, Facebook, and Twitter), less talked about is content moderation at the "infrastructure" layers deeper in the stack. That can include anything from hosting companies and domain registrars to ad networks, payment processors, telecom providers, and app stores.

The question of if and how many of these operations should engage in moderating content, and the peril of that participation being exploited and abused by bad actors and governments the world over, made this Greenhouse series notably more complicated than our past discussions on privacy, more traditional forms of content moderation, or broadband in the COVID era.

We'd like to extend a big thank you to our diverse array of wonderful contributors to this panel, who we think did an amazing job outlining the complexities and risks awaiting policymakers on what's sure to be a long road forward:
by Cathy Gellis on (#5QGR9)
A few weeks ago I woke up one day to find the Lake Tahoe region on fire and the New York region underwater. Meanwhile the Supreme Court had just upended decades if not centuries of Constitutional law. But I could learn about none of it from watching local news, because Locast had shut down overnight following a dreadful decision by a district court a few days before.

Locast was a service similar to the now-extinct Aereo, although with a few critical legal distinctions necessary for it to avoid Aereo's litigation-obliterated fate. But the gist was the same: it was another rent-an-antenna service that "captures over-the-air ('OTA') broadcast signals and retransmits them over the internet, enabling viewers to stream live television on their preferred internet-connected viewing device" [p. 1-2 of the ruling]. And, like Aereo, it is yet another useful innovation now on the scrapheap of human history.

Absolutely nothing about this situation makes any sense. First, and least importantly, I'm not sure that Locast shutting down wasn't an overreaction to a decision so precariously balanced on such illusory support. Then again, no one wants to be staring down the barrel of a potentially ruinous copyright lawsuit under the best of circumstances, but especially not when the judge has arbitrarily torn up all your high cards. Getting out of the game at least helps limit what the damage will be if the tide doesn't eventually turn.

More saliently, it makes absolutely no sense that the plaintiffs, who were mostly some of the largest television networks, would even bring this lawsuit. Services like Locast are doing them a favor by helping ensure that their channels actually get watched. As I've pointed out before, the only reason I ever watch their affiliates is thanks to Locast. Like many others, I don't have my own cable subscription, nor my own antenna. So I need a service like Locast to essentially rent me one so that I can watch the over-the-air programming on the public airwaves I'd otherwise be entitled to see. Suing Locast for having rented me that antenna basically says that they don't actually want viewers. And that declaration should come as a shock to their advertisers, because the bottom line is that without services like Locast I'm not watching their ads.

It also makes no sense for copyright law to want to discourage services like these. Not only are these public airwaves that people should be able to receive using whatever tools they choose, but cutting people off from this programming doesn't advance any of the ideals that copyright law exists to advance. More practically, it deprives people of shared mass media sources and drives everyone instead towards more balkanized media we must find for ourselves online. With lawmakers increasingly concerned about people having to fend for themselves in building their media diets, it seems weird for the law to effectively force them to do so. Especially after decades of policymaking deliberately designed to make sure that broadcast television could be a source of common culture, it would be a fairly radical shift for policy to suddenly obstruct that goal.

As it turns out, though, Congress has not wanted to completely abandon bringing broadcast television to the public. Not even through copyright law, where there's actually a provision, at 17 U.S.C. Section 111(a)(5) ("Certain Secondary Transmissions Exempted"), that recognizes rebroadcasting services as something worth having and articulates the requirements such a service would have to meet to not run afoul of the rest of the copyright statute. The salient language:
by Daily Deal on (#5QGRA)
The Web Development Crash Course Bundle has 6 courses to help you become a master programmer. You'll learn about C++, Bootstrap, Modern OpenGL, HTML, and more. The courses will teach you how to create websites, how to program for virtual reality, how to create your own games, and how to create your own apps. The bundle is on sale for $25.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5QGMR)
There have been a bunch of slightly wacky court rulings of late, and this recent one from magistrate judge Zia Faruqui is definitely up there on the list of rulings that make you scratch your head. The case involves the Republic of Gambia seeking information on Facebook accounts that were accused of contributing to the ethnic genocide of the Rohingya in Myanmar. This situation was -- quite obviously -- horrible, and it tends to be the go-to story for anyone who wants to show that Facebook is evil (though I'm often confused by how people seem more focused on blaming Facebook for the situation than the Myanmar government, which carried out the genocide...). Either way, the Republic of Gambia is seeking information from Facebook regarding the accounts that played a role in the genocide, as part of its case at the International Court of Justice.

Facebook, which (way too late in the process) did shut down a bunch of accounts in Myanmar, resisted demands from Gambia to hand over information on those accounts, noting, correctly, that the Stored Communications Act likely forbids it from handing over such private information. The SCA is actually pretty important in protecting the privacy of email and messages, and is one of the rare US laws on the books that is actually (for the most part) privacy-protecting. That's not to say it doesn't have its own issues, but the SCA has been useful in the past in protecting privacy.

The ruling here more or less upends interpretations of the SCA by saying that once an account is deleted, it's no longer covered by the SCA. That's... worrisome. The full ruling is worth a read; you'll know you're in for something of a journey when it starts out:
by Karl Bode on (#5QGDY)
So for a long time the FCC has made "fighting robocalls" one of its top priorities. Though with Americans still receiving 132 million robocalls every single day, you may have noticed that these efforts don't usually have the impact the agency claims. Headlines about "historic" or "record" FCC robocall fines usually overshadow the agency's pathetic failure to collect on those fines, or the fact that, thanks to recent Supreme Court rulings, the agency is boxed in as to which kinds of annoying calls and spam texts it can actually police.

Which brings us to last week, when the agency announced yet another major action, this time proposed rule updates that would make things harder on the "gateway" companies (which connect overseas callers to U.S. phone networks) and the smaller phone operators that are the origin of so much of the problem. While the FCC's plan made a lot of headlines, experts were quick to note that most of the improvements were still far from being implemented:
by Tim Cushing on (#5QG73)
Private searches that uncover contraband can be handed off to law enforcement without the Fourth Amendment getting too involved. Restrictions apply, of course. For instance, a tech repairing a computer may come across illicit images and give that information to law enforcement, which can use what was observed in the search as the basis for a search warrant.

What law enforcement can't do is ask private individuals to perform searches for it and then use the results of those searches to perform warrantless searches of its own. A Ninth Circuit Appeals Court case [PDF] points out another thing law enforcement can't do: assume (or pretend) a private search has already taken place in order to excuse its own Fourth Amendment violation. (h/t Rianna Pfefferkorn)

Automated scanning of email attachments led to a series of events that culminated in an unlawful search. Here's the court's description of how this case originated:
by Timothy Geigner on (#5QFZN)
Readers here will know that we've followed the trademark and copyright lawsuit filed by the estate of Dr. Seuss against ComicMix LLC, creators of the mashup book Oh, the Places You'll Boldly Go! The entire thing has been a serpentine, multi-year rollercoaster, with ComicMix arguing that the mashup book was transformative and covered by fair use, and winning on that front, only to have the copyright portion of the argument overturned on appeal. Go and read Cathy Gellis' writeup on the appeal; it's incredibly detailed and informative.

But if anyone was hoping to see this case progress up the federal court ranks, they will be both disappointed and sad. Disappointed because the parties have now settled the case, with ComicMix agreeing to acknowledge that the book did, in fact, infringe on Seuss' copyrights.
by Mike Masnick on (#5QFPZ)
Earlier this year we were excited to see the Filecoin Foundation give the Internet Archive its largest donation ever, to help make sure both that the Internet Archive is more sustainable as an organization and that the works it makes available will be more permanently preserved on a more distributed, decentralized system. The Internet Archive is a perfect example of the type of organization that can benefit from a more distributed internet.

Another such organization is the Freedom of the Press Foundation, which, among its many, many projects, maintains and develops SecureDrop, the incredibly important tool for journalists and whistleblowers, which was initially developed in part by Aaron Swartz (as DeadDrop). So it's great to see that the Freedom of the Press Foundation has now announced the largest donation it has ever received, coming from the Filecoin Foundation for the Distributed Web (the sister organization of the Filecoin Foundation):
by Tim Cushing on (#5QFJF)
When the First Amendment meets a law enforcement officer's ability to be offended on behalf of the general public, the First Amendment tends to lose.

The ability to be a proxy offendee affords officers the opportunity to literally police speech. They're almost never in the right when they do this. But they almost always get away with it. That's why a Texas sheriff felt comfortable charging a person sporting a "FUCK TRUMP" window decal with disorderly conduct. That's why a Tennessee cop issued a citation for a stick-figures-in-mid-coitus "Making my family" window decal.

And that's why a Florida law enforcement officer pulled over and arrested a man for the "I EAT ASS" sticker on his window. According to Deputy Travis English's arrest report, he noticed the sticker and assumed it violated the state's obscenity law. He was, of course, wrong about this. But he called his supervisor for clarification and was assured (wrongly) that the sticker violated the law.

He offered to let the driver, Dillon Webb, be on his way if he removed the word "ASS" from the decal. Webb refused, (correctly) asserting his First Amendment right to publicize his non-driving activities. English's report is full of dumb things (and, ironically, some incorrect English). Here's what he had to say about the stop and the driver's assertion of his Constitutional rights. (All errors in the original.)
by Mike Masnick on (#5QFGB)
There were a bunch of headlines this weekend claiming that Donald Trump had just "sued" Twitter to get his account reinstated. This is untrue. There were also some articles suggesting that he was using Florida's new social media law as the basis of this lawsuit. This is also false (what the hell is wrong with reporters these days?).

Trump actually sued back in July, and it was widely covered then. And the basis of that lawsuit was not Florida's law, but rather a bizarrely twisted interpretation of the 1st Amendment. What happened on Friday was that, in that ongoing case, Trump filed for a preliminary injunction that would (if granted) force Twitter to reinstate his account. This is not at all likely to be granted. The motion for the injunction is laughably bad. It's even worse than the initial complaint (which was hilariously bad). It does make reference to Florida's law -- which has already been held to be unconstitutional -- but it's certainly not using that as a key part of its argument.

As for this motion, it's just a lawyerly Hail Mary attempt by lawyers who are in way too deep, hoping that maybe they'll get lucky with a judge who doesn't care about how the 1st Amendment actually works. It's a mishmash of confused and debunked legal theories about the 1st Amendment and Section 230, but the crux of it is that Twitter violated then-President Donald Trump's rights when it shut down his account, because Twitter was acting as the government. Yes. The argument is so stupid it needs repeating: the underlying claim is that the government actor, Twitter, illegally censored the private citizen, President Donald Trump, taking away his 1st Amendment rights through prior restraint.
by Tim Cushing on (#5QFDN)
The CBP continues to increase the number of electronic devices (at least temporarily) seized and searched at border crossings and international airports. Basic searches -- ones that don't involve any additional tech or software -- can be performed for almost any reason. For deeper searches, the CBP needs only a little bit more: articulable suspicion.

Even though these searches affect only a very small percentage of travelers, their number continues to increase, both in absolute terms and as a percentage of the whole.
by Daily Deal on (#5QFDP)
Amazon Web Services (AWS) has forever changed the way businesses operate. Enterprises, big or small, look to the AWS platform for their cloud and data storage needs. And as the demand for AWS rises, so does the demand for competent AWS professionals. This Premier All AWS Certification Training Bundle gives you lifetime access to 7 prep courses to prepare you for the essential AWS certifications: Certified Cloud Practitioner, Solutions Architect, Developer Associate, SysOps Administrator, and more. These courses cover the required skillset and simulate the actual exams to help you prepare, pass, and get certified in no time. The bundle is on sale for $19.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Leigh Beadon on (#5QFB3)
You may remember that, a couple years ago, our line of Copying Is Not Theft t-shirts and other gear was suddenly taken down by Teespring (now just called Spring) — first based on the completely false assertion that it contained third-party content that we didn't have the rights to use, then (after a very unhelpful discussion with their IP Escalations department) because it apparently violated some other policy that they refused to specify. That prompted us to open a new Techdirt Gear store on Threadless, where we've launched many of our old designs and all our new ones since the takedown. But we also kept the Spring store active for people who preferred it and for some old designs that we hadn't yet moved — and a few weeks ago the site's takedown regime struck again, wiping out our line of Copymouse gear that had lived there for nearly five years. So, once again, we've relaunched the design over on Threadless:

Of course, this takedown is a little different from the previous one. The Copying Is Not Theft gear contains no third-party material whatsoever, and there was simply no legitimate reason for Spring to have removed it — and they refused to even offer any explanation of what they thought that reason might be. In the case of Copymouse, it's obvious that it makes use of a particular logo, though in an obviously transformative manner for the purpose of commentary. So, yes, there is an argument for taking it down. It's just not a strong argument, since the design clearly falls within the bounds of fair use for the purposes of criticism and commentary, and it's hard to argue that there's any likelihood of confusion for consumers: nobody is going to think it's a piece of official Disney merchandise. Nevertheless, it's at least somewhat understandable that it caught the attention of either an automatic filter or a manual reviewer, and given the increased scrutiny and attempts to create third-party liability falling upon services that create products with user-uploaded artwork, it's no real surprise that a massive site like Spring errs on the side of caution (indeed, we won't be too surprised if the design ends up being removed from Threadless as well). It's still disappointing though, and even more importantly, it's yet another example of why third-party liability protections are so very, very important, and how when those protections are not strong, sites tend towards overblocking clearly legitimate works.

But for now, at least, you can still get your Copymouse gear on Threadless while we all wait to see if history repeats itself and the design needs an update in 2023.
by Mike Masnick on (#5QF7Y)
It remains perplexing to me that so many people -- especially among the Trumpist world -- seem to believe that removing Section 230 will somehow make websites more likely to host their incendiary speech. We've explained before why the opposite is true -- adding more liability for user speech means a lot fewer sites will allow user speech. But now we have a real-world example to show this.

Last month, in a truly bizarre ruling, the Australian High Court said that news publishers can be held liable for comments made on their own posts to social media platforms. In other words, if a news organization published a story about, say, a politician, and then linked to that story on Facebook, and a random user defamed the politician in the Facebook comments, then the original publisher could face liability for those comments.

It didn't take long for Rupert Murdoch (who has been pushing to end Section 230 in the US) to start screaming about how he and other media publishers now need special intermediary protections in Australia. And he's not wrong (even if he is hypocritical). But, even more interesting is that CNN has announced that it will no longer publish news to Facebook in Australia in response to this ruling:
by Karl Bode on (#5QEYR)
Another day, another massive privacy breach nobody will do much about. This time it's Neiman Marcus, which issued a statement indicating that the personal data of roughly 4.6 million U.S. consumers was exposed thanks to a previously undisclosed data breach that occurred last year. According to the company, the data exposed included login information, credit card payment information, virtual gift card numbers, names, addresses, and the security questions attached to Neiman Marcus accounts. The company is, as companies always are in the wake of such breaches, very, very sorry:
by Tim Cushing on (#5QER0)
Not only is the government using "reverse warrants" to rummage around in your Google stuff, it's also using "keyword warrants" to cast about blindly for potential suspects.

Reverse warrants (a.k.a. geofence warrants) allow the government (when allowed by courts) to work its way backwards from a bulk collection of data to potential suspects by gathering info on all phone users in the area of a suspected crime. The only probable cause supporting these searches is the pretty damn good probability that Google (and others, but mostly Google) has gathered location data that can be tied to phones. Once a plausible needle is pulled from the haystack, the cops go back to Google, demanding identifying data linked to the phone.

This search method mirrors another that's probably used far more often than has been exposed. As Thomas Brewster reports for Forbes, an accidentally unsealed warrant shows investigators are seeking bulk info on Google users using nothing more than search terms they think might be related to criminal acts.
by Timothy Geigner on (#5QEDK)
It's no secret that Amazon-owned Twitch has had a rough go of it for the past year or so. We've talked about most, if not all, of the issues the platform has created for itself: a DMCA apocalypse, a creative community angry about not being kept informed on copyright issues, unclear content guidelines that result in punishment from Twitch even as some creators happily test the fences on those guidelines, and further, ongoing communication breakdowns with creators. All of that, mind you, has taken place over the last 12 months. It's been bad. Really bad!

But great news: now it's even worse! Someone managed to get into the Twitch platform and leak it. As in, pretty much all of it. And even some information on a Steam rival Amazon is planning to release. Seriously.
by Copia Institute on (#5QE86)
Summary: In its 15 years as a micro-blogging service, Twitter has given users more characters per tweet, reaction GIFs, multiple UI options, and the occasional random resorting of their timelines. The most recent offering was to give users the option to create posts designed to be swept away by the digital sands of time. Early in 2020, Twitter announced it would be rolling out "Fleets" — self-deleting tweets with a lifespan of only 24 hours. This put Twitter on equal footing with Instagram's "Stories" feature, which allows users to post content with a built-in expiration date. In the initial, limited rollout of Fleets, Twitter reported that the feature showed advantages over the platform's standard offering. Twitter Comms tweeted that initial testing looked promising, stating that it was seeing "less abuse with Fleets," with only a "small percentage" of Fleets being reported each day. Whether that early indicator was a symptom of the limited rollout or of users viewing self-deleting abuse as a problem that solves itself, the wider rollout wasn't nearly as smooth as the early signs suggested, nor was it relatively abuse-free. Fleets' full debut arrived in the wake of an incredibly contentious U.S. presidential election — one marred by election interference accusations and a constant barrage of misinformation. The full rollout also came after nearly a year of a worldwide pandemic, which resulted in a constant flow of misinformation across multiple social media platforms globally. While amplification of misinformation contained in Fleets was somewhat tempered by their innate ephemerality, as well as very limited interaction options, it was unclear how — or how well — Twitter was moderating misinformation spread via the new communication option. Extremism researcher Marc-Andre Argentino was able to send out a series of "fleets" containing misinformation and banned URLs, noting that Twitter only flagged one that asserted a link between the virus and cell phone towers. Samantha Cole reported other Fleet moderation issues. Writing for Motherboard, Cole noted that apparent glitches were allowing users to see Fleets from people they had blocked, as well as Fleets from people who had blocked them. Failing to maintain the settings users set up to block or mute others created more avenues for abuse. Cole also pointed out that users weren't being notified when their tweets were added to Fleets, providing abusive users with another option to harass while the targets of abuse remained unaware. Company Considerations:
|
![]() |
by Karl Bode on (#5QE68)
We've noted for a while that there's a weird myopia occurring in internet policy. As in, "big tech" (namely Facebook, Google, and Amazon) get a relentless amount of Congressional and policy wonk attention for their various, and sometimes painfully idiotic, behaviors. At the same time, just an adorable smattering of serious policy attention is being given to a wide array of equally problematic but clearly monopolized industries (banking, airlines, insurance, energy), or internet-connected sectors that engage in many of the same (or sometimes worse) behaviors, be they adtech or U.S. telecom. Case in point: while the entirety of U.S. policy experts, lawmakers, journalists, and academics (justifiably) fixated on the Facebook whistleblower train wreck, a story popped up about AT&T. Basically, it showcased how AT&T not only provided the lion's share of funding for the propaganda-laden OAN cable TV "news" network, but that the entire thing was AT&T's idea in the first place and simply wouldn't exist without AT&T's consistent support:
|
![]() |
by Mike Masnick on (#5QE13)
We've been running our Greenhouse discussion on content moderation at the infrastructure level for a bit now, and normally all of the posts for these discussions come from expert guest commentators. However, I'm going to add my voice to the collection here because there's one topic that I haven't seen covered, and which is important, because it comes up whenever I'm talking to people about content moderation at the infrastructure level: do we need a new taxonomy for internet infrastructure to better have this discussion? The thinking here is that the traditional OSI model of internet layers is somewhat outdated and not particularly relevant to discussions such as this one. Also, it's hellishly confusing, as is easily demonstrated by this fun Google box of "people also ask" on a search on "internet layers." Clearly, lots of people are confused. Even just thinking about what counts as infrastructure can be confusing. One of my regular examples is Zoom, the video conferencing app that has become standard and required during the COVID pandemic: is that infrastructure? Is that edge? It has elements of both. But the underlying concern in this entire discussion is that most of the debate around content moderation is about clear edge providers, the services that definitely touch the end users: Facebook, Twitter, YouTube, etc. And, as I noted in my opening piece, there is a real concern that, because the debate focuses on those companies and there appears to be tremendous appetite for policymaking and regulating those edge providers, any new regulations may fail to account for how they will also impact infrastructure providers, where the impact could be much more seismic. Given all that, many people have suggested that a "new taxonomy" might be useful, to help "carve out" infrastructure services from any new regulations regarding moderation. It's not hard to understand a concept like "maybe this rule should apply to social media sites, but not to domain registrars," for example. However, the dangers in building up such a taxonomy greatly outweigh any such benefits. First, as noted earlier, any new taxonomy is going to be fraught with difficult questions. It's not always clear what really is infrastructure these days. We've already discussed how financial intermediaries are, in effect, infrastructure for the internet these days -- and that's a very different kind of participant than the traditional OSI model of internet layers contemplates. Same with advertising firms. And I've already mentioned Zoom as a company that clearly has an edge component, but feels more like it should be considered infrastructure. Part of that is just the nature of how the internet works, in which some of the layers are merged. Marc Andreessen famously noted that software eats the world, but the internet itself is subsuming more and more traditional infrastructure as well -- and that creates complications. On top of that, this is an extremely dynamic world. Part of the reason the OSI model feels obsolete is because it is. Things change, and they can change fairly rapidly on the internet. So any taxonomy might be obsolete by the time it's created, and that's extremely dangerous if the plan is to use it for classifying services for the purpose of regulation. The final concern with such a taxonomy is simply that it seems likely to encourage regulatory approaches in places where it's not clear they're actually needed.
If the intent of such a taxonomy is to help lawmakers write a law that only puts its focus on the edge players, it's unlikely to remain that way. Once such a mapping is in place, the temptation (instead) will simply be to create new rules for each layer of the new stack. A new taxonomy may sound good as a first pass, but it will inevitably create more problems than it solves. (The toy sketch below illustrates how quickly such a mapping runs out of answers.) Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here).
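To make the classification problem concrete, here is a deliberately naive sketch of what such a taxonomy looks like once you try to write it down as a lookup table. It's purely illustrative: the service categories and "layer" labels are hypothetical, not drawn from any actual proposal, and the point is how quickly a static mapping runs out of answers.

```python
# Illustrative only: a toy, hypothetical mapping of service categories to "layers."
# None of these labels come from any real regulation or proposal.

LAYER_BY_SERVICE = {
    "social network": "edge",
    "video host": "edge",
    "domain registrar": "infrastructure",
    "CDN": "infrastructure",
    "payment processor": "infrastructure",  # not contemplated by the OSI model at all
    "video conferencing": None,             # edge-facing UI, infrastructure-like role
}

def classify(service_type: str) -> str:
    """Return the layer a rule would supposedly apply to, or admit we can't say."""
    layer = LAYER_BY_SERVICE.get(service_type)
    return layer if layer else "unclassifiable -- the taxonomy breaks down here"

if __name__ == "__main__":
    for svc in ("social network", "payment processor", "video conferencing", "VR platform"):
        print(f"{svc:20s} -> {classify(svc)}")
```

Anything that doesn't fit the table either gets forced into the wrong bucket or falls through entirely, and the table starts aging the moment it's written -- which is exactly the problem with regulating against a snapshot of the stack.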
|
![]() |
by Tim Cushing on (#5QDYN)
The new hotness for law enforcement isn't all that new. But it is still very hot: a better way to amass a list of suspects when you don't have any particular suspect in mind. Aiding and abetting in the new bulk collection is Google, which has a collection of location info plenty of law enforcement agencies find useful. There's very little governing this collection or its access by government agencies. Most seem to be relying on the Third Party Doctrine to save their searches, which may use warrants but do not use probable cause beyond the probability that Google houses the location data they're seeking. Law enforcement agencies at both the local and federal levels have availed themselves of this data, using "geofences" to contain the location data sought by so-called "reverse warrants." Once they have the data points, investigators try to determine which of them belong to the most likely suspects. That becomes a bigger problem when the area covered by the geofence contains hundreds or thousands of people who did not commit the crime being investigated. These warrants have been used to seek suspects in incidents ranging from arson to... um... protesting police violence. They've also been used to track down suspects alleged to have raided the US Capitol building on January 6, 2021 -- the day some Trump supporters decided (with the support of several prominent Republicans, including the recently de-elected president) that they could change the outcome of a national election if they committed a bunch of federal crimes. Plenty of those suspects outed themselves on social media. For everyone else, there's reverse warrants, as reported by Wired. (h/t Michael Vario)
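For a sense of the mechanics, here's a minimal sketch of the geometry behind a geofence query: given a pile of location records, return every device that pinged inside a circle during a time window. The record format and field names are hypothetical (this is not any provider's actual schema); the point is that nothing in the query distinguishes a suspect from a bystander.

```python
# Minimal sketch of a geofence query (illustrative only; hypothetical record format).
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationPing:
    device_id: str
    lat: float
    lon: float
    timestamp: int  # Unix seconds

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def devices_in_geofence(pings, center_lat, center_lon, radius_m, t_start, t_end):
    """Every device with at least one ping inside the circle during the window --
    suspects and bystanders alike."""
    return {
        p.device_id
        for p in pings
        if t_start <= p.timestamp <= t_end
        and haversine_m(p.lat, p.lon, center_lat, center_lon) <= radius_m
    }

if __name__ == "__main__":
    # Two hypothetical pings; only device-a falls inside the 150-meter circle.
    pings = [
        LocationPing("device-a", 40.7411, -73.9897, 1_600_000_000),
        LocationPing("device-b", 40.7590, -73.9845, 1_600_000_300),
    ]
    print(devices_in_geofence(pings, 40.7410, -73.9895, 150, 1_599_999_900, 1_600_000_600))
```

All of the sorting that follows -- deciding which of those device IDs to unmask -- happens after the fact, at the investigators' discretion.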
|
![]() |
by Daily Deal on (#5QDYP)
The All-in-One Microsoft, Cybersecurity, and Python Exam Prep Training Bundle has 6 courses to help you learn the skills you need to succeed as a tech professional. The courses cover Python 3, software development, ITIL, cybersecurity, and GDPR compliance. Exams covered include: MTA 98-381, MTA 98-361, the ITIL Foundation v4 exam, PCEP Certified Entry-Level Python Programmer Certification Exam, CompTIA CySA+ Certification Exam, and GDPR CIPP/E Certification Exam. It's on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
![]() |
by Mike Masnick on (#5QDRZ)
Here we go again. Yesterday, the Facebook whistleblower, Frances Haugen, testified before the Senate Commerce Committee. Frankly, she came across as pretty credible and thoughtful, even if I completely disagree with some of her suggestions. I think she's correct about some of the problems she witnessed, and the misalignment of incentives facing Facebook's senior management. However, her understanding of the possible approaches to deal with it is, unfortunately, a mixed bag. Of course, for the Senators in the hearing, it became the expected exercise in confirmation bias, in which they each insisted that their plan to fix the internet would solve the problems Haugen detailed. And, not surprisingly, many of them insisted that Section 230 was the issue, and that if you magically changed 230 and made companies more liable, they'd somehow be better. Leaving aside that there is zero evidence to support this (and plenty of evidence to suggest the opposite is true), the most telling bit in all of this is that if you think changing Section 230 is the answer, Facebook agrees with you. It's exactly what Facebook wants. See the smarmy, tone-deaf, self-serving statement the company put out in response to the hearing:
|
![]() |
by Karl Bode on (#5QDFR)
So for years we've talked about the growing threat of SIM hijacking, which involves an attacker covertly porting out your phone number from right underneath your nose (sometimes with the help of bribed or conned wireless carrier employees). Once they have your phone identity, they have access to most of your personal accounts secured by two-factor SMS authentication, opening the door to the theft of social media accounts or the draining of your cryptocurrency account. If you're really unlucky, the hackers will harass the hell out of you in a bid to extort you even further. It's a huge mess, and both the criminal complaints -- and the lawsuits against wireless carriers for not doing more to protect their users -- have been piling up for several years. All the while, Senators like Ron Wyden have been sending letters to the FCC asking the nation's top telecom regulator to, you know, do something. After years of inaction, the agency appears to have gotten the message, announcing a new plan to at least consider some new rules to make SIM hijacking more difficult. Most of the proposal involves nudging wireless carriers to do things they should have done long ago. Such as updating FCC Customer Proprietary Network Information (CPNI) and Local Number Portability rules to require that wireless carriers adopt secure methods of confirming the customer's identity before porting out a customer's phone number to a new device or carrier (duh). As well as requiring that wireless carriers immediately notify you when somebody tries to port out your phone number without your permission (double duh):
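Part of why this matters so much is that an SMS code is only as secure as control of the phone number itself. App-based one-time codes (TOTP, per RFC 6238) are derived from a shared secret and the clock rather than anything the carrier controls, which is why they survive a port-out attack. Here's a minimal sketch of how such a code is generated -- illustrative only; real authenticator apps and services use vetted libraries, and the secret below is just a placeholder.

```python
# Minimal sketch of RFC 6238 TOTP generation (illustrative; use a vetted library
# in practice). The code depends only on a shared secret and the current time,
# so hijacking the victim's phone number gets an attacker nothing here.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Placeholder base32 secret, the kind an authenticator app stores at enrollment.
    print(totp("JBSWY3DPEHPK3PXP"))
```

None of that excuses carriers from securing the port-out process, of course; plenty of services still offer nothing but SMS codes, which is exactly why the FCC rules matter.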
|
![]() |
by Tim Cushing on (#5QD7S)
Earlier this year, a data retention law passed by the Belgian government was overturned by the country's Constitutional Court. The law mandated retention of metadata on all calls and texts by residents for one year, just in case the government ever decided it wanted access to it. Acting on guidance from the EU Court of Justice (CJEU) on laws mandating indiscriminate data retention elsewhere in the Union, the Constitutional Court struck the law down, finding it neither justified nor legal under CJEU precedent or under Belgium's own Constitution.
|
![]() |
by Timothy Geigner on (#5QCY3)
For over a year now, we have discussed Facebook's decision to require users of Oculus VR headsets to have active Facebook accounts linked to the devices in order for them to work properly. This decision came despite all the noise Oculus made in 2014, when Facebook acquired the VR company, insisting that this very specific thing would not occur. Karl Bode, at the time, pointed out a number of potential issues this plan could cause, noting specifically that users could find their Oculus hardware broken for reasons not of their own making.
|
![]() |
by Tim Cushing on (#5QCTZ)
More cities are adopting an approach to mental health emergency calls that steers those calls away from police officers and towards professionals who are trained to respond to mental health crises with something other than force deployment. Early results have shown promise in cities like Denver, Colorado and New York City, New York. These response teams are not only better suited to handling mental health calls, but they're less expensive than sending cops and/or needlessly involving the carceral system. Law enforcement agencies command outsized portions of city budgets. Shifting small portions of these budgets to alternatives like these makes better use of those funds, providing residents with options that are far more effective -- and cost-effective -- than the usual method of sending more expensive government employees to respond to problems they're ill-equipped to handle. A couple of cities in California are experimenting with mental health response teams. The teams in use in Sacramento and Oakland were formed by residents in response to the tragic killing by police officers of a young man suffering from schizoaffective disorder.
|
![]() |
by Mike Masnick on (#5QCSE)
As you know by now, much of the tech news cycle yesterday was dominated by the fact that Facebook appeared to erase itself from the internet via a botched BGP configuration. Hilarity ensued -- including my favorite bit about how Facebook's office badges weren't working because they relied on connecting to a Facebook server that could no longer be found (also, how in borking their own BGP, Facebook basically knocked out their own ability to fix it until they could get the right people, who knew what to do, physical access to the routers). But in talking to people who were upset about being cut off from Facebook, Instagram, WhatsApp, or Facebook Messenger, it was a good moment to remind people that another benefit of a "protocols, not platforms" approach to these things is that it's way more resilient. If you're using Messenger and it's down, but can easily swap in a different tool and continue to communicate, that's a much better, more resilient solution than relying on Facebook not to mess up. And that's on top of all the other benefits I laid out in my paper. In fact, a protocols approach also creates more incentives for better uptime from services, since continually screwing up for extended periods of time doesn't just mean losing ad revenue for a few hours; it is much more likely to lead people to permanently switch to an alternative provider. Indeed, a key part of the value of the internet, originally, was its resiliency as a highly distributed, rather than centralized, system, and how it could continue to work well if one part fell off the network. The increasing centralization/silo-ization of the internet has taken away much of that benefit. So, if anything, yesterday's mess should be seen as another reason to look more closely at a protocols-based approach to building new internet services.
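To make the resilience point concrete, here's a toy sketch of what "swap in a different tool" looks like when services speak a common protocol. The hostnames and the notion of interchangeable providers are hypothetical; the sketch just checks which endpoints are still reachable and falls back to the first one that is -- the kind of fallback that's impossible when there's only one silo on the list.

```python
# Toy sketch of protocol-level fallback (illustrative; hostnames are hypothetical).
# If one provider vanishes -- say, its BGP routes are withdrawn and its domains
# stop resolving -- a client speaking an open protocol can simply try the next one.
import socket
from typing import List, Optional

def reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Can we resolve the name and open a TCP connection to it?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_provider(providers: List[str]) -> Optional[str]:
    """First provider that's still up; None only if they're all down."""
    return next((p for p in providers if reachable(p)), None)

if __name__ == "__main__":
    # Hypothetical interchangeable endpoints for the same open messaging protocol.
    print(pick_provider(["chat.example-a.com", "chat.example-b.net", "chat.example-c.org"]))
```

With a single siloed platform, that list has exactly one entry, and when it goes away there's nothing left to fall back to.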
|
![]() |
by Emma Llanso on (#5QCN5)
In August, OnlyFans made the stunning announcement that it planned to ban sexually explicit content from its service. The site, which allows creators to post exclusive content and interact directly with subscribers, made its name as a host for sexually oriented content. For a profitable website to announce a ban of the very content that helped establish it was surprising and dismaying to the sex workers and other creators who make a living on the site. OnlyFans is hardly the first site to face financial pressure related to the content it publishes. Advertiser pressure has been a hallmark of the publishing industry, whether in shaping what news is reported and published, or withdrawing support when a television series breaks new societal ground. Publishers across different kinds of media have historically been vulnerable to the demands of their financial supporters when it comes to restricting the kinds of media they distribute. And, with online advertising now accounting for the majority of total advertising spending in the U.S., we have seen advertisers recognize their power to influence how major social media sites moderate, the organization of campaigns like Stop Hate for Profit, and the development of "brand safety" standards for acceptable content. But OnlyFans wasn't bowing to advertiser demands; instead, it says it faced an even more fundamental kind of pressure coming from its financial intermediaries. OnlyFans explained in a statement that it planned to ban explicit content "to comply with the requests of our banking partners and payout providers." Financial intermediaries are key actors in the online content hosting ecosystem. The websites and apps that host people's speech depend on banks, credit card companies, and payment processors to do everything from buying domain names and renting server space to paying their engineers and content moderators. Financial intermediaries are also essential for receiving payments from advertisers and ad networks, processing purchases, and enabling user subscriptions. Losing access to a bank account, or getting dropped by a payment processor, can make it impossible for a site to make money or pay its debts, and can result in the site getting knocked offline completely. This makes financial intermediaries obvious leverage points for censorship, including through government pressure. Government officials may target financial intermediaries with threats of legal action or reputational harm, as a way of pursuing censorship of speech that they cannot actually punish under the law. In 2010, for example, U.S. Senator Joe Lieberman and Representative Peter King reportedly pressured MasterCard in private to stop processing payments for Wikileaks; this came alongside a very public campaign of censure that Lieberman was conducting against the site. Ultimately, Wikileaks lost its access to so many banks, credit card companies, and payment processors that it had to temporarily suspend its operations; it now accepts donations through various cryptocurrencies or via donations made to the Wau Holland Foundation (which has led to pressure on the Foundation in turn). Credit card companies were also the target of the 2015 campaign by Sheriff Tom Dart to shutter Backpage.com.
Dart had previously pursued charges against another classified-ads site, Craigslist, for solicitation of prostitution, based on the content of some ads posted by users, and had been told unequivocally by a district court that Section 230 barred such a prosecution. In pursuing Backpage for similar concerns about enabling prostitution, Dart took a different tack: he sent letters to Visa and MasterCard demanding that they "cease and desist" their business relationships with Backpage, implying that the companies could face civil and criminal charges. Dart also threatened to hold a damning press conference if the credit card companies did not sever their ties with the website. The credit card companies complied, and terminated services to Backpage. Backpage challenged Dart's acts as unconstitutional government coercion and censorship in violation of the First Amendment. (CDT, EFF, and the Association for Alternative Newsmedia filed an amicus brief in support of Backpage's First Amendment arguments in that case.) The Seventh Circuit agreed and ordered Dart to cease his unconstitutional pressure campaign. But this did not result in a return to the status quo, as the credit card companies declined to restore service to Backpage, showing how long-lasting the effects of such pressure can be. Backpage is now offline, but not because of Dart: the federal government seized the site as part of its prosecution of several Backpage executives, a trial that was declared a mistrial earlier this month. Since that time, the pressures on payment processors and other financial intermediaries have only increased. FOSTA-SESTA, for example, created a vague new federal crime of "facilitation of prostitution" that has rendered many intermediaries uncertain about whether they face legal risk in association with content related to sex work. After Congress passed FOSTA in 2018, Reddit and Craigslist shuttered portions of their sites, multiple sites devoted to harm reduction went offline, and sites like Instagram, Patreon, Tumblr, and Twitch have taken increasingly strict stances against nudity and sexual content. So while advertisers may be largely motivated by commercial concerns and brand reputation, financial intermediaries such as banks and payment processors are also driven by concerns over legal risk when they try to limit what kinds of speech and speakers are accessible online. Financial institutions, in general, are highly regulated. Banks, for example, face obligations such as the "Customer Due Diligence" rule in the US, which requires them to verify the identity of account holders and develop a risk profile of their business. Concerns over legal risk can cause financial intermediaries to employ ham-handed automated screening techniques that lead to absurd outcomes, such as when PayPal canceled the account of News Media Canada in 2017 for promoting the story "Syrian Family Adopts To New Life", or when Venmo (which is owned by PayPal) reportedly blocked donations to the Palestine Children's Relief Fund in May 2021. As pressures relating to online content and UGC-related businesses grow, some financial intermediaries are taking a more systemic approach to evaluating the risk that certain kinds of content pose to their own businesses. In this, financial intermediaries are mirroring a trend seen in content regulation debates more generally, on both sides of the Atlantic. MasterCard, for example, in April announced changes to its policy for processing payments related to adult entertainment.
Starting October 15, MasterCard will require that banks connecting merchants to the MasterCard network certify that those merchants have processes in place to maintain age and consent documentation for the participants in sexually explicit content, along with specific "content control measures." These include pre-publication review of content and a complaint procedure that can address reports of illegal or nonconsensual content within seven days, including a process by which people depicted in the content can request its removal (which MasterCard confusingly calls an "appeals" process). In other words, MasterCard is using its position as the second largest credit card network in the US to require banks to vet website operators' content moderation processes, potentially re-shaping the online adult content industry at the same time. Financial intermediaries are integral to online content creation and hosting, and their actions to censor specific content or enact PACT Act-style systemic oversight of content moderation processes should bring greater scrutiny to their role in the online speech ecosystem. As discussed above, these intermediaries are an attractive target for government actors seeking to censor surreptitiously and extralegally, and they may feel compelled to act cautiously if their legal obligations and potential liability are not clear. (For the history of this issue in the copyright and trademark field, see Annemarie Bridy's 2015 article, Internet Payment Blockades.) Moreover, financial intermediaries are often several steps removed from the speech at issue and may not have a direct relationship with the speaker, which can make them even less likely to defend users' speech interests when faced with legal or reputational risk. As is the case throughout the stack, we need more information from financial intermediaries about how they are exercising discretion over others' speech. CDT joined EFF and twenty other human rights organizations in a recent letter to PayPal and Venmo, calling on those payment processors to publish regular transparency reports that disclose government demands for user data and account closures, as well as the companies' own Terms of Service enforcement actions against account holders. Account holders also need to receive meaningful notice when their accounts are closed and be given the opportunity to appeal those decisions, something notably missing from MasterCard's guidelines for what banks should require of website operators. Ultimately, OnlyFans reversed course on its porn ban and announced that they had "secured assurances necessary to support [their] diverse creator community." (It's not clear if those assurances came from existing payment processors or if OnlyFans has found new financial intermediaries.) But as payment processors, banks, and credit card companies continue to confront questions about their role in enabling access to speech online, they should learn from other intermediaries' experience: once an intermediary starts making judgments about what lawful speech it will and won't support, the demands on it to exercise that judgment only increase, and the scale of human behavior and expression enabled by the Internet is unimaginably huge.
The ratchet of content moderation expectations only turns one way. Emma Llansó is the Director of CDT's Free Expression Project, where she works to promote law and policy that support Internet users' free expression rights in the United States, Europe, and around the world. Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here).
|
![]() |
by Leigh Beadon on (#5QCJE)
Last week, we celebrated 300 episodes of the Techdirt Podcast with a live stream, for which we brought back original co-hosts Dennis Yang and Hersh Reddy. You can watch the stream on YouTube, but now it's time to release the episode as normal! The subject was simple, but led the conversation in all kinds of interesting directions: how have our views on technology issues changed and evolved since the podcast started? Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
|
![]() |
by Tim Cushing on (#5QCG4)
A couple of years ago, documents surfaced that showed the CBP was placing journalists, activists, and immigration lawyers on some form of a watchlist, which would allow agents and officers to subject these targets to additional scrutiny when they crossed the border. There were obvious civil liberties implications, ones the CBP seemed largely unconcerned about. The targeting appeared to be related to the "migrant caravan" that reached the border late in 2018 and performed a mass "incursion" on January 1, 2019. The CBP claimed it had only targeted those people because they had been involved in "violence" near the border late in 2018. It refused to explain what it meant by the word "involved" or how that was enough to ignore First Amendment protections. Nor did it explain why it was deliberately targeting US citizens not suspected to have been involved in any criminal activity. It also did not explain why it shared information on these targets with the government of Mexico, which then assisted in spying on this group of lawyers, journalists, and activists. The DHS Inspector General opened an investigation [PDF] of these actions. And it has arrived at the conclusion that this all looks pretty bad, but wasn't actually illegal. Read into that what you will.
|
![]() |
by Alex Feerst on (#5QCDZ)
In his post kicking off this series, Mike notes that, "the biggest concern with moving moderation decisions down the stack is that most infrastructure players only have a sledge hammer to deal with these questions, rather than a scalpel." And I agree with Jonathan Zittrain and other contributors that governments, activists, and others will increasingly reach down the stack to push for takedowns—and will probably get them. So, should we expect more blunt-force infra-layer takedowns, or will infrastructure companies invest in more precise moderation tools? Which one is even worse? Given the choice to build infrastructure now, would you start with a scalpel? How about many scalpels? Or maybe something less severe but distributed and transparent, like clear plastic spoons everywhere! Will the moderation hurt less if we're all in it together? With the distributed web, we may get to ask all these questions, and have a chance to make things better (or worse). How? Let me back up a moment for some mostly accurate natural history. In the 90s, to vastly oversimplify, there was web 1.0: static, server-side pages that arose, more manual than you'd like sometimes, maybe not so easy to search or monetize at scale, but fundamentally decentralized and open. We had webrings and manually curated search lists. Listening to Nirvana in my dorm room, I read John Perry Barlow's announcement that "We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different," in a green IRC window and believed. Ok, not every feature was that simple or open or decentralized. The specter of content moderation haunted the Internet from the early days of email and bulletin boards. In 1978, a marketer for DEC sent out the first unsolicited commercial message on ARPANET and a few hundred people told him to knock it off, Gary! Voila, community moderation. Service providers like AOL and Prodigy offered portals through which users accessed the web and associated chat rooms, and the need to protect the brand led to predictable interventions. There's a Rosetta Stone of AOL content moderation guidelines from 1994 floating around to remind us that as long as there have been people expressing themselves online, there have been other people doing their best to create workable rule sets to govern that expression and endlessly failing in comic and tragic ways ("'F--- you' is vulgar" but "'my *** hurts' is ok"). Back in the Lascaux Cave there was probably someone identifying naughty animal parts and sneaking over with charcoal to darken them out, for the community, and storytellers who blamed all the community's ills on that person. And then after the new millennium, little by little and then all at once, came Web 2.0—the Social Web. JavaScript frameworks, personalization, everyone a creator and consumer within (not really that open) structures we now call "Platforms" (arguably even less open when using their proprietary mobile rather than web applications). It became much easier for anyone to create, connect, communicate, and distribute expression online without having to design or host their own pages. We got more efficient at tracking and ad targeting and using those algorithms to serve you things similar to the other things you liked. We all started saying a lot of stuff and never really stopped. If you're a fan of expression in general, and especially of people who previously didn't have great access to distribution channels expressing themselves more, that's a win.
But let's be honest: 500 million tweets a day? We've been on an expression bender for years. And that means companies spending billions, and tens of thousands of enablers—paid and unpaid—supporting our speech bender. Are people happy with the moderation we're getting? Generally not. Try running a platform. The moderation is terrible and the portions are so large! Who's asking for moderation? Virtually everyone in different ways. Governments want illegal content (CSAM, terrorist content) restricted on behalf of the people, and some also want harmful but legal content restricted in ways that are still unclear, also for the people. Many want harmful content restricted, which means different things depending on which people, which place, which culture, which content, which coffee roast you had this morning. Civil society groups generally want content restricted related to their areas of expertise and concern (except EFF, who will party like it's 1999 forever I hope). There are lots of types of expression where at least some people think moderation is appropriate, for different reasons; misinformation is different from doxxing is different from harassment is different from copyright infringement is different from spam. Often, the same team deals with election protection and kids eating Tide Pods (and does both surprisingly well, considering). There's a lot to moderate and lots of mutually inconsistent demand to do it coming from every direction. Ok, so let's make a better internet! Web 3 is happening and it is good. More specifically, as Chris Dixon recently put it, "We are now at the beginning of the Web 3 era, which combines the decentralized, community-governed ethos of Web 1 with the advanced, modern functionality of Web 2." Don't forget the blockchain. Assume that over the next few years, Web 3 infrastructure gets built out and flourishes—projects like Arweave, Filecoin, Polkadot, Sia, and Storj. And applications eventually proliferate; tools for expression, creativity, communication, all the things humans do online, all built in ways that embody the values of the DWeb. But wait, the social web experiment of the past 15 years led us to build multi-billion dollar institutions within companies aimed at mitigating harms (to individuals, groups, societies, cultural values) associated with online expression and conduct, and increasingly, complying with new regulations. Private courts. Private Supreme Courts. Teams for safeguarding public health and democratic elections. Tens of thousands poring over photos of nipples, asking, where do we draw the line? Are we going to do all that again? One tempting answer is, let's not. Let's fire all the moderators. What's the worst that could happen? Another way of asking this question is -- what do we mean when we talk about "censorship resistant" distributed technologies? This has been an element of the DWeb since early days but it's not very clear (to me at least) how resistant, which censorship, and in what ways. My hunch is that censorship resistance—in the purest sense of defaulting to immutable content with no possible later interventions affecting its availability—is probably not realistic in light of how people and governments currently respond to Web 2.0. The likely outcome is probably quick escalation to intense conflict with the majority of governments. And even for people who still favor a marketplace-of-ideas-grounded "rights" framework, I think they know better than to argue that the cure for CSAM is more speech.
There will either have to be ways of intervening or the DWeb is going to be a bumpy ride. But "censorship resistant" in the sense of, "how do we build a system where it is not governments, or a small number of powerful, centralized companies, that control the levers at the important choke points for expression?" Now we're talking. Or as Paul Frazee from Beaker Browser and other distributed projects put it: "The question isn't 'how do we make moderation impossible?' The question is, how do we make moderation trustworthy?" So, when it comes to expression and, by extension, content moderation, how exactly are we going to do better? What could content moderation look like if done consistent with the spirit, principles, and architecture of Web 3? What principles can we look to as a guide? I think the broad principles will come as no surprise to anyone following this space over the past few years (and are not so different from those outlined in Corynne McSherry's post). They include notice, transparency, due process, the availability of multiple venues for expression, and robust competition between options on many axes—including privacy and community norms, as well as the ability of users to structure their own experience as much as possible. Here are some recurring themes:
|
![]() |
by Daily Deal on (#5QCE0)
The JavaScript DOM Game Developer Bundle has 8 courses to help you master coding fundamentals. Courses cover JavaScript DOM, Coding, HTML 5 Canvas, and more. You'll learn how to create your own fun, interactive games. It's on sale for $30. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
![]() |
by Mike Masnick on (#5QC80)
I'm sure by now most of you have either seen or read about Facebook whistleblower Frances Haugen's appearance on 60 Minutes discussing in detail the many problems she saw within Facebook. I'm always a little skeptical about 60 Minutes these days, as the show has an unfortunately long history of misrepresenting things about the internet, and similarly a single person's claims about what's happening within a company are not always the most accurate. That said, what Haugen does have to say is still kind of eye-opening, and certainly concerning. The key takeaway that many seem to be highlighting from the interview is Haugen noting that Facebook knows damn well that making the site better for users will make Facebook less money.
|
![]() |
by Karl Bode on (#5QC2H)
We've noted for a long time that the wireless industry is prone to being fairly lax on security and consumer privacy. One example is the recent rabbit hole of a scandal related to the industry's treatment of user location data, which carriers have long sold to a wide array of middlemen without much thought as to how this data could be (and routinely is) abused. Another example is the industry's refusal to address the longstanding flaws in Signaling System 7 (SS7, or Common Channel Signaling System 7 in the US), a series of protocols hackers can exploit to track user location, dodge encryption, and even record private conversations. Now this week, a wireless industry middleman that handles billions of texts every year has acknowledged its security isn't much to write home about either. A company by the name of Syniverse revealed in a September SEC filing, first noted by Motherboard, that it was the target of a major attack. The filing reveals that an "individual or organization" gained unauthorized access to the company's databases "on several occasions." That in turn provided the intruder repeated access to the company's Electronic Data Transfer (EDT) environment, compromising 235 of its corporate telecom clients. The scope of the potentially revealed data is, well, massive:
|
![]() |
by Tim Cushing on (#5QBRY)
Some more unsettling news about law enforcement's close relationship to (or at least professional tolerance of) far-right groups linked to the January 6th raid of the Capitol building has come to light, thanks to transparency activists Distributed Denial of Secrets. Email accounts linked to several key members of the Oath Keepers -- four of whom are currently facing charges for their participation in the attack on the Capitol -- have been hacked, exposing communications between the Oath Keepers and law enforcement officers seeking to join the group.
|