Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-08-19 19:16
Appeals Court Actually Explores 'Good Faith' Issue In A Section 230 Case (Spoiler Alert: It Still Protects Moderation Choices)
Over the last couple of years, amid a near-constant flow of mis- and disinformation about Section 230 of the Communications Decency Act, one element that has popped up a lot (including in our comments), especially among angry Trumpists, is the claim that because subsection (c)(2)(A) of the law has a "good faith" qualifier, websites that moderate need to show they did so in "good faith." Many seem to (falsely) assume that this is a big gotcha, and that they can get past the 230 immunity barrier by litigating over whether or not a particular moderation choice was made in "good faith." However, as we've explained, only one small part of the law -- (c)(2)(A) -- mentions "good faith." It's this part:
Wireless Industry Eyes Nontransparent 'Trust Score' To Determine Who Can Market Via Text Message
Though text messaging is starting to look somewhat archaic in the WhatsApp era, it's still the most effective way for political campaigns and nonprofits to reach their target audience, in part because 90 percent of text messages are read within 3 minutes. But the collision between wanting to allow these organizations to market their candidates and campaigns -- and protecting consumers from an ever-steady array of scammers, spoofers, and text messaging spammers -- has proven to be a cumbersome dance of dysfunction.
The latest chapter in this saga: wireless carriers say they're working on a new system that would give each organization looking to send text messages a shiny new trust score. So far wireless carriers aren't saying how this trust score would be determined, but those who don't rank highly enough on the scale won't be able to send text messages en masse. The system is being contemplated after the 2020 election saw no shortage of text messaging spam that wireless subscribers found difficult -- if not impossible -- to properly opt out of.
The Telephone Consumer Protection Act of 1991 is a dated piece of befuddling legislation that's been interpreted to mean that you can't send unsolicited text message spam en masse. But marketers and political campaigns have long wiggled around the restrictions via P2P text message efforts, which still let you send blanket text message campaigns -- just somewhat individually via pre-scripted templates. These efforts were ramped up by the Sanders campaign, and were embraced even more heavily by the Trump campaign.
Wireless carriers want to make sure customers don't get annoyed and leave, but they also want to ensure they won't be held liable under the TCPA. At the same time, many political organizations are understandably a bit nervous about companies like AT&T determining who is or isn't trustworthy in a way that probably won't be transparent:
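The carriers haven't said anything about how such a score would be calculated or enforced, so the sketch below is purely illustrative: the signals, weights, and threshold are all hypothetical, meant only to show what a score-gated bulk messaging check might look like in principle.

```python
# Hypothetical sketch of a carrier-side "trust score" gate for bulk SMS.
# None of these signals, weights, or thresholds come from the carriers;
# they are made-up illustrations of how opaque such a system could be.

from dataclasses import dataclass


@dataclass
class Sender:
    name: str
    opt_out_rate: float          # fraction of recipients who opted out
    spam_complaint_rate: float   # fraction of recipients who reported spam
    verified_identity: bool      # whether the org registered with the carrier


def trust_score(s: Sender) -> float:
    """Combine a few hypothetical signals into a 0-100 score."""
    score = 100.0
    score -= 400 * s.opt_out_rate          # penalize opt-outs heavily
    score -= 600 * s.spam_complaint_rate   # penalize spam complaints more
    if not s.verified_identity:
        score -= 25
    return max(0.0, min(100.0, score))


BULK_THRESHOLD = 60.0  # hypothetical cutoff for mass messaging


def may_send_bulk(s: Sender) -> bool:
    return trust_score(s) >= BULK_THRESHOLD


if __name__ == "__main__":
    campaign = Sender("Example Campaign", opt_out_rate=0.05,
                      spam_complaint_rate=0.02, verified_identity=True)
    # 100 - 20 - 12 = 68, which clears the (made-up) 60-point threshold
    print(may_send_bulk(campaign))
```

The point of the sketch isn't the particular math; it's that whoever picks the signals and the threshold effectively decides who gets to reach voters by text, which is exactly why the lack of transparency worries political organizations.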
This Week In Techdirt History: March 21st - 27th
Five Years Ago
This week in 2016, the press was still pretending encryption contributed to the Paris attacks when there was another attack in Brussels and... politicians rushed to blame encryption without waiting for the evidence (which didn't come). Meanwhile, the DOJ was fighting Apple in court over encryption when a new flaw in iMessage encryption was discovered, leading the DOJ to ask for a postponement in the case — and this all raised some questions about apparent contradictions in the DOJ's various statements as well as statements by the FBI. Also, though it happened the previous Friday afternoon, this was the week that we covered Hulk Hogan winning his lawsuit against Gawker.
Ten Years Ago
This week in 2011, a major loss for Righthaven set up the important precedent that copying an entire work can still be fair use. We were dismayed by the loophole-happy lawyers defending the government's domain seizures, and had a post about how copyright filters were presenting a serious challenge for DJ culture. Meanwhile, the New York Times was getting used to its new soft paywall, and it was a bit of a mess: columnists were telling readers how to get around it, while the paper was trying to shut down a Twitter account that aided people in doing so, and somehow convincing itself that most people would pay — all while we wondered what the DMCA anti-circumvention implications were.
Fifteen Years Ago
This week in 2006, the Supreme Court was considering some important cases to do with what can be patented. Companies were rushing to build web-based word processors after Google's purchase of Writely, Microsoft was embarking on an attempt to compete with Craigslist, and credit agencies were fighting against any rules that would force them to protect people's privacy. One judge tossed out a bizarre lawsuit claiming open source software violates antitrust law, and another shut down the RIAA's dreams of randomly hunting through everyone's computers. Meanwhile, the FBI was still trying to figure out email.
Poof! Taylor Swift, Evermore Theme Park Lawsuits Dropped With No Money Exchanged
Well, that didn't last long. You will recall that in early February a Utah theme park called Evermore filed a very stupid trademark lawsuit against Taylor Swift. At supposed issue was Swift's new album, Evermore, and the associated merchandise for it. The theme park claimed that Swift's album was driving their search engine rankings down, that people would be confused thinking she was somehow connected to the theme park, and that the park also produces some music, putting them in the same competitive marketplace as the singer. Swift's team countersued, alleging that some of the park's actors would sing and perform copyrighted music, including Swift's. It was all, frankly, very dumb.
But merely a month later, the dumbness is gone. Rolling Stone reports that both sides have dropped their lawsuits and reached an agreement, one which does not carry any monetary exchange.
Content Moderation Case Study: Facebook Removes A Picture Of A Famous Danish Mermaid Statue (2016)
Summary: For over a century, Edvard Eriksen’s bronze statue of The Little Mermaid becoming human has been installed on a rock along the water in Copenhagen, Denmark. The statue was designed to represent the Hans Christian Andersen fairy tale, and has become a tourist attraction and landmark.
Senator Elizabeth Warren Goes Over The Line; Threatens To Punish Amazon For 'Snotty Tweets'
It's no secret that Elizabeth Warren thinks the big internet companies should be broken up. She's made that argument emphatically over the years. I'm not exactly clear what breaking them up actually accomplishes beyond punishing the companies, but as a Senator, she can certainly make the arguments for why it makes sense, or pass laws that impact how antitrust works.
However, what she cannot and should not do is threaten to punish a company for its speech. And, yet, that's exactly what she did. Amazon tweeted at Warren after Warren said that Amazon exploits loopholes and tax havens, and that she was introducing a bill to make the company pay more taxes. In response, Amazon said in a short tweet thread:
How Mark Warner's 'SAFE TECH Act' Will Make Many People A Lot Less Safe
I've already explained how Senator Mark Warner's "SAFE TECH Act" is an attack on the open internet. However, it goes beyond that. Over at OneZero, Cathy Reisenwitz has written a compelling op-ed explaining how the SAFE TECH Act will actually make the internet a lot less safe for many people.
In some ways, her argument builds on what we already know about the disastrous human impact of FOSTA -- the last attack on Section 230 that was sold to the public as a way to "protect women and children" online. In fact, the evidence now suggests that after FOSTA, sex trafficking increased, and the law made it that much more difficult for law enforcement to find and stop sex trafficking. Some in Congress are finally realizing that FOSTA was perhaps a mistake and would like to study its impact.
One would hope that this is allowed to happen before Senators like Warner are allowed to ram through further changes that they don't seem to understand. As Reisenwitz writes, everything about the SAFE TECH Act would create more harm -- again with sex workers being put at significant risk.
Biden Administration Says There's Nothing Wrong With ICE Setting Up A Fake College To Dupe Foreign Students Out Of Their Money, Residency
In 2019, facts came to light showing ICE had set up an entire fake college in Michigan to "catch" foreign visitors in the act of COMPLYING WITH FEDERAL LAW by continuing to pursue advanced degrees. Student visas remain valid as long as foreign visitors continue their education. The dwindling supply of H-1B visas under Trump meant that staying on top of educational obligations was a priority for those already in the country.
But instead of sitting back and seeing whether some H-1B visa holders violated their obligations, ICE set up a fake college -- one with a campus and a Facebook page and personnel who gladly accepted $100 application fees from H-1B hopefuls. ICE even asked a private entity to step in and designate its faux college as fully accredited for H-1B applicants to sell the ruse.
After the ruse served its purpose, ICE moved in. It managed to ensnare all of eight people who might be associated with defrauding foreign visitors. ICE apparently avoided looking too hard at itself and its personnel... which took cash from applicants in exchange for false promises about visa extensions. More than 150 duped students were arrested, but only eight of those arrested are actually facing criminal charges.
ICE said it was the foreign students' own fault if they didn't recognize the carefully constructed ruse for what it was. It also said it was trying to protect people from fraud, even as it defrauded more than 600 students out of $100 application fees. Multiple lawsuits against ICE have been filed since this information became public. And, so far, two consecutive administrations have failed to talk federal courts into dismissals.
We've had a change in regimes and DOJ figureheads, but the government continues to insist it has done nothing wrong. If anyone was expecting Biden to roll back all the anti-immigrant policies and programs instituted by former president Donald Trump, they need to brace themselves for a whole bunch of disappointment. The new, improved DOJ is still the same old DOJ. The government did nothing wrong, the DOJ continues to insist, despite many in the current administration claiming the previous presidency did a whole lot of wrong.
Daily Deal: The Learn to Code 2021 Bundle
The Learn to Code 2021 Bundle has 13 courses to help you kickstart your coding career. Courses cover Ruby on Rails, C++, Python, C#, JavaScript, and more. You'll also learn about data science and machine learning. The bundle is on sale for $35.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Privacy Laws Giving Big Internet Companies A Convenient Excuse To Avoid Academic Scrutiny
For years we've talked about how the fact that no one really understands privacy leads to very bad attempts at regulating privacy in ways that do more harm than good. They often don't do anything that actually protects privacy -- and instead screw up lots of other important things, from competition to free speech. In fact, in some ways, there's a big conflict between open internet systems and privacy. There are ways to get around that -- usually by moving the data from centralized silos out towards the ends of the network -- but that's rarely happening in practice. I mean, going back over thirteen years ago, we were writing about the inherent conflict between Facebook's (then) open social graph and privacy. Yet, at the time, Facebook was cheered on for opening up its social graph. It was creating a more "open" internet, an internet that others could build upon.
But, of course, over the years things have changed. A lot. In 2018, after the Cambridge Analytica scandal, Mark Zuckerberg more or less admitted that the world was telling Facebook to lock everything down again:
Telecom Using Veterans As Props To Demonize California's New Net Neutrality Law
Efforts by industry and captured regulators to demonize California's net neutrality law have begun in earnest.
Last week, AT&T lied that it had been forced to stop giving its customers "free data" nationwide because of the new law. Of course, that's not true. In reality, the law (slightly tougher than the FCC rules AT&T lobbied to kill) prevents AT&T from abusing its bullshit monthly usage caps. Under the law, AT&T can no longer abuse usage caps to give its own streaming services an unfair advantage over competitors like Netflix (which it had been doing for several years), nor can it let deep-pocketed companies buy an unfair advantage on AT&T's network (something AT&T called "sponsored data").
Despite the industry's attempts to frame this so-called "zero rating" as akin to "free data," that's not accurate, and numerous experts say blocking such efforts is a good thing for consumers and competitors alike (for many reasons). And it's not that AT&T was forced to stop offering "free data," so much as the law stops AT&T from erecting artificial network limits, then exploiting those pointless restrictions to give itself (and deep-pocketed competitors) an unfair advantage in online competition.
Because the broadband industry's gamesmanship with zero rating is hard for non-technical (or outright dumb) people to understand, it's easy to confuse folks. Enter FCC Commissioner Brendan Carr, who this week falsely tried to claim California's new net neutrality law would soon be "cutting off free health services" from veterans nationwide:
2 Years Later, Valve's Hands Off Approach To Adult Games Is Still Confusing, Still Very Much Not Hands Off
Back in 2018, after a year of truly hammering down on independent game studios producing what many would consider "adult" or "porn" games, Valve finally relented and said its Steam platform would be more open. As part of the announcement, Valve indicated it would take a hands-off approach to game curation and allow more adult-style games generally, later clarifying that it intended to prevent only "troll" games. If all of that sounds incredibly vague and ripe for creating a massive and confusing mess, well, that's precisely what happened. Developers saw the chance that Steam would accept their games as a crapshoot, with some making it through and others not. The reasons for denials were equally vague and arbitrary.
The dust has settled somewhat in the subsequent years, but the lack of clarity for developers in what is allowed or not continues to rear its ugly head. One recent case is with Super Seducer 3, a game that appears to now be fully denied from Steam despite the developer being way open to working with Steam on any perceived issues.
Data Broker Looking To Sell Real-Time Vehicle Location Data To Government Agencies, Including The Military
Location data is the new growth market. Data harvested from apps is sold to data brokers who, in turn, sell this to whoever's buying. Lately, the buyers have been a number of government agencies, including the CBP, ICE, DEA, Secret Service, IRS, and -- a bit more worryingly -- the Defense Department.
The mileage varies for purchasers. The location data generally isn't as accurate as that obtained directly from service providers. On the other hand, putting a couple of middlemen between the app data and the purchase of data helps agencies steer clear of Constitutional issues related to the Supreme Court's Carpenter decision, which introduced a warrant mandate for engaging in proxy tracking of people via cell service providers.
But phones aren't the only objects that generate a wealth of location data. Cars go almost as many places as phones do, providing data brokers with yet another source of possibly useful location data that government agencies might be interested in obtaining access to. Here's Joseph Cox of Vice with more details:
Utah Governor Vetoes Ridiculous Unconstitutional Content Moderation Bill; Makes His Brother-in-Law Sad
Earlier this month, we noted that, to close out its session, the Utah legislature decided to pass two separate blatantly unconstitutional bills: one requiring porn filters on internet-connected devices, and another that tried to overrule Section 230 (something states can't do) and require all "social media corporations" to employ an "independent review board" to review content moderation decisions. It also says that social media companies must moderate in an "equitable" manner (whatever that means).
We went through all of the reasons why the bill was unconstitutional, as did others in Utah. In response, the bill's sponsor, Senator Michael McKell, gleefully told a local TV news station that he looked forward to wasting Utah taxpayers' hard earned money by defending it in court (he didn't say that it would be wasting the money -- that's just us noting that it would be throwing away their money, since the law is so clearly unconstitutional).
Thankfully, Utah Governor Spencer Cox (who happens to be Senator McKell's brother-in-law) has decided to veto the bill -- his very first veto (as we noted earlier, he chose to sign the other unconstitutional bill about porn filters).
Oddly, Cox's office released two separate statements regarding the veto -- only one of which notes that the bill was likely unconstitutional, while the other seems to act like the bill just needs a few technical tweaks. It's almost as if he's trying to have it both ways and address two different audiences with two very different statements. The official veto statement makes it clear that the bill has serious constitutional issues:
Recordings, Transcripts Show Police, Prosecutors Lied To A Grand Jury To Bring Gang Charges Against BLM Protesters
More information has come out about the disastrous attempt by Arizona prosecutors to turn anti-police-violence protesters into a street gang. Phoenix police officers waded into the protest -- comprised of (checks official documents) 17 protesters -- showering them with pepper balls and arresting them all. Charges were brought, including one very damaging one: assisting a criminal street gang. Gang charges are automatic felonies with hefty sentence enhancements.
According to the prosecutors handling the case, the use of the acronym ACAB (All Cops Are Bastards) by the protesters was indicative of their gang status. That and their use of umbrellas and black clothing. According to grand jury transcripts obtained by ABC15, a county prosecutor and Phoenix police Sgt. Doug McBride led jury members to believe "ACAB" was a street gang. Here's the prosecutor questioning McBride during a grand jury presentation:
Militias Still Recruiting On Facebook Demonstrates The Impossibility Of Content Moderation At Scale
Yesterday, in a (deliberately, I assume) well-timed release, the Tech Transparency Project released a report entitled Facebook's Militia Mess, detailing how there are tons of "militia groups" organizing on the platform (first found via a report on Buzzfeed). You may recall that, just days after the insurrection at the Capitol, Facebook's COO Sheryl Sandberg made the extremely disingenuous claim that only Facebook had the smarts to stop these groups, and that most of the organizing of the Capitol insurrection must have happened elsewhere. Multiple reports debunked that claim, and this new one takes it even further, showing that (1) these groups are still organizing on Facebook, and (2) Facebook's recommendation algorithm is still pushing people to them:
Daily Deal: The Ultimate Beginner to Grandmaster Chess Course Bundle
The Ultimate Beginner to Grandmaster Chess Course Bundle has 137 hours of instruction on everything from basic chess theory to advanced strategies. You'll gain skills and knowledge from international masters and grandmasters, learn about positional chess, learn valuable endgame ideas and principles, and much more. It's on sale for $90.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Congressional Panel On Internet And Disinformation... Includes Many Who Spread Disinformation Online
We've pointed out a few times how silly all these Congressional panels on content moderation are, but the one happening today is particularly silly. One of the problems, of course, is that while everyone seems to be mad about Section 230, they seem to be mad about it for opposite reasons, with Republicans wanting the companies to moderate less, and Democrats wanting the companies to moderate more. That's only one of many reasons why today's hearing, like those in the past, is so pointless. These hearings tend to bog down in silly "but what about this particular moderation decision" questions, which will then be presented in a misleading or out-of-context fashion, allowing the elected official to grandstand about how they "held big tech's feet to the fire" or some such nonsense.
However, Cat Zakrzewski, over at the Washington Post, has highlighted yet another reason why this particular "investigation" into disinformation online is so disingenuous: a bunch of the Republicans on the panel -- exploring how these sites deal with mis- and disinformation -- are guilty of spreading disinformation themselves online.
Utah Governor Signs New Porn Filter Law That's Just Pointless, Performative Nonsense
For decades now Utah legislators have repeatedly engaged in theater in their doomed bid to filter pornography from the internet. And repeatedly those lawmakers run face first into the technical impossibility of such a feat (it's trivial for anybody who wants porn to bypass filters), the problematic collateral damage that inevitably occurs when you try to censor such content (filters almost always wind up banning legit content), and a pesky little thing known as the First Amendment. But annoying things like technical specifics or the Constitution aren't going to thwart people who just know better.
For months now Utah has been contemplating yet another porn filtering law, this time HB 72. HB 72 pretends that it's going to purge the internet of its naughty bits by mandating active adult content filters on all smartphones and tablets sold in Utah. Phone makers would enable filters by default (purportedly because enabling such restrictions by choice is just too darn difficult), and the law would require that mobile consumers in Utah enter a pass code before disabling the filters. If these filters aren't enabled by default, the bill would hold device manufacturers liable, up to $10 per individual violation.
On Tuesday, Utah Governor Spencer Cox signed the bill into law, claiming its passage would send an “important message” about preventing children from accessing explicit online content:
City Of London Police Parrot Academic Publishers' Line That People Visiting Sci-Hub Should Be Afraid, Very Afraid
Techdirt has been following the saga of the City of London Police's special "Intellectual Property Crime Unit" (PIPCU) since it was formed back in 2013. It has not been an uplifting story. PIPCU seems to regard itself as Hollywood's private police force worldwide, trying to stop copyright infringement online, but without much understanding of how the Internet works, or even regard for the law, as a post back in 2014 detailed. PIPCU rather dropped off the radar, until last week, when its dire warnings about a new, deadly threat to the wondrous world of copyright were picked up by a number of gullible journalists. PIPCU's breathless press release reveals the shocking truth: innocent young minds are being encouraged to access knowledge, funded by the public, as widely as possible. Yes, PIPCU has discovered Sci-Hub:
NFL's Thursday Night Football Goes Exclusive To Amazon Prime Video
While denialism over cord-cutting is still somewhat a thing, a vastly larger segment of the public can finally see the writing on the wall. While the cable industry's first brave tactic in dealing with the cord-cutting issue was to boldly pretend as though it didn't exist, industry executives more recently realize that there is a bloodbath coming their way. Few roadblocks remain for a full-on tsunami of cord-cutters, and one of the most significant of those is still live sports broadcasting. This, of course, is something I've been screaming about on this site for years: the moment that people don't need to rely on cable television to follow their favorite sports teams live, cable will lose an insane number of subscribers.
Over the past few years, the major American sports leagues have certainly inched in that direction. Notable for this post, 2017 saw the NFL ink a new mobile streaming deal with Verizon. The NFL had a long partnership with Verizon for mobile streaming already, but the notable aspect of the new deal was that NFL game streaming was suddenly not exclusive. Other streaming services could get in the game. And, while you can't draw a direct line to it, the tangential story of how the NFL just inked an exclusive deal with Amazon Prime for the broadcast rights for Thursday Night Football certainly shows you where this is all heading.
Content Moderation Case Study: Huge Surge In Users On One Server Prompts Intercession From Discord (2021)
Summary: A wild few days for the stock market resulted in some interesting moderation moves by a handful of communications/social media platforms.
A group of unassociated retail investors (i.e. day traders playing the stock market with the assistance of services like Robinhood) gathering at the Wall Street Bets subreddit started a mini-revolution by refusing to believe Gamestop stock was worth as little as some hedge funds believed it was.
The initial surge in Gamestop's stock price was soon followed by a runaway escalation, some of it a direct response to a hedge fund's large (and exposed) short position. Melvin Capital -- the hedge fund targeted by Wall Street Bets denizens -- had announced its belief that Gamestop stock wasn't worth the price it was at, and had put its money where its mouth was by taking a large short position that would only pay off if the stock price continued to drop.
As the stock soared from less than $5/share to over $150/share, people began flooding to r/wallstreetbets. This forced the first moderation move. Moderators briefly took the subreddit private in an attempt to stem the flow of newcomers and get a handle on the issues these sorts of influxes bring with them.
Wall Street Bets moved some of the conversation over to Discord, which prompted another set of moderation moves. Discord banned the server, claiming users routinely violated guidelines on hate speech, incitement of violence, and spreading misinformation. This was initially viewed as another attempt to rein in vengeful retail investors who were inflicting pain on hedge funds: the Big Guys making sure the Little Guys weren't allowed on the playing field. (Melvin Capital received a $2.75 billion cash infusion after its Gamestop short was blown up by Gamestop's unprecedented rise in price.)
But it wasn't as conspiratorial as it first appeared. The users who frequented a subreddit that described itself as "4chan with a Bloomberg terminal" were very abrasive, and the addition of mics to the mix at the Discord server made things worse by doubling the amount of noise -- noise that often included hate speech and plenty of insensitive language.
The ban was dropped and the server was re-enabled by Discord, which announced it was stepping in to more directly moderate content and users. With over 300,000 users, the server had apparently grown too large, too quickly, making it all but impossible for Wall Street Bets moderators to handle on their own. This partially reversed the earlier narrative, turning Discord into the Big Guy helping out the Little Guy, rather than allowing them to be silenced permanently due to the actions of their worst users.
Decisions to be made by Discord:
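The reason a rising price is so dangerous for a short seller comes down to simple arithmetic: shorted shares eventually have to be bought back, so losses grow without bound as the price climbs. A quick illustration with made-up numbers (not Melvin Capital's actual position sizes):

```python
# Illustrative short-position math. The share count here is invented;
# only the rough price move (under $5 to over $150) comes from the story.

short_price = 5.0        # price per share when the short was opened
cover_price = 150.0      # price per share when the shares must be bought back
shares_shorted = 1_000_000

loss = (cover_price - short_price) * shares_shorted
print(f"Loss covering 1M shares: ${loss:,.0f}")   # $145,000,000
```

Scale that up to a real hedge-fund-sized position and the need for a multi-billion-dollar cash infusion stops looking mysterious.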
Drone Company Wants To Sell Cops A Drone That Can Break Windows, Negotiate With Criminals
A drone manufacturer really, really wants cops to start inviting drones to their raiding parties. This will bring "+ whatever" to all raiding party stats, apparently. BRINC Drones is here to help... and welcomes users to question the life choices made by company execs that led to the implementation of this splash page:
If these cops don't really look like cops to you, you're not alone. And by "you," I also mean BRINC Drones, which apparently wants to attract the warriors-in-a-war-zone mindset far too common in law enforcement. BRINC has a new drone -- one that presents itself as being as warlike as its target audience.
Drones are definitely an integral part of the surveillance market. BRINC wants to make them an integral part of the "drug raids and standoffs with reluctant arrestees" market. Sure, anyone can smash a window. But how cool would it be if a drone could do it?
Beware Of Facebook CEOs Bearing Section 230 Reform Proposals
As you may know, tomorrow Congress is having yet another hearing with the CEOs of Google, Facebook, and Twitter, in which various grandstanding politicians will seek to rake Mark Zuckerberg, Jack Dorsey, and Sundar Pichai over the coals regarding things that those grandstanding politicians think Facebook, Twitter, and Google "got wrong" in their moderation practices. Some of the politicians will argue that these sites left up too much content, while others will argue they took down too much -- and either way they will demand to know "why" individual content moderation decisions were made differently than they, the grandstanding politicians, wanted them to be made. We've already highlighted one approach that the CEOs could take in their testimony, though that is unlikely to actually happen. This whole dog and pony show seems all about no one being able to recognize one simple fact: that it's literally impossible to have a perfectly moderated platform at the scale of humankind.
That said, one thing to note about these hearings is that each time, Facebook's CEO Mark Zuckerberg inches closer to pushing Facebook's vision for rethinking internet regulations around Section 230. Facebook, somewhat famously, was the company that caved on FOSTA, and bit by bit, Facebook has effectively led the charge in undermining Section 230 (even as so many very wrong people keep insisting we need to change 230 to "punish" Facebook). That's not true. Facebook is now perhaps the leading voice for changing 230, because the company knows that it can survive without it. Others? Not so much. Last February, Zuckerberg made it clear that Facebook was on board with the plan to undermine 230. Last fall, during another of these Congressional hearings, he more emphatically supported reforms to 230.
And, for tomorrow's hearing, he's driving the knife further into 230's back by outlining a plan to further cut away at 230. The relevant bit from his testimony is here:
Verizon Again Doubles Down On Yahoo After 6 Years Of Failure
You might recall that Verizon's attempt to pivot from grumpy old telco to sexy new Millennial ad brand hasn't been going so well. Oddly, mashing together two failing 90s brands in AOL and Yahoo, and renaming the coagulated entity "Oath," didn't really impress many people. The massive Yahoo hack, a controversy surrounding Verizon snoopvertising, and the face plant by the company's aggressively hyped Go90 streaming service (Verizon's attempt to make video inroads with Millennials) didn't really help.
By late 2018 Verizon was forced to acknowledge that its Oath entity was effectively worthless. By 2019, Verizon wound up selling Tumblr to WordPress owner Automattic at a massive loss after a rocky ownership stretch. Throughout all of this, Verizon has consistently pretended that this was all part of some amazing master plan.
Those claims surfaced again this week with Verizon announcing that the company would be doubling and tripling down on the Yahoo experiment. For one, the company is launching Yahoo Shops, "a new marketplace destination featuring a curated, native shopping experience tailored to the user including innovative tech, from shoppable video to 3D try-ons, and more." It's also shifting its business model to focus more on subscriptions through Yahoo Plus, hoping to add to the 3 million people who, for some reason, subscribe to products like Yahoo Fantasy and Yahoo Finance.
Again though, all of this sounds very much like unsurprising and belated efforts to mimic products and services that already exist. While surely somebody somewhere finds these efforts enticing, the fact that this is the end result of its $4.48 billion Yahoo acquisition in 2017 and its $4.4 billion acquisition of AOL in 2015 is just kind of... meh. It's in no way clear how Verizon intends to differentiate itself in the market, and people who cover telecom and media for a living continue to find Verizon's persistence both adorable and amusing:
Daily Deal: Way Pro No Code Landing Page Builder
Creating a site has been made easier with Way Pro. This landing page builder is an easy, no-code, component-ready platform that helps you execute lead generation campaigns faster. It comes with tons of templates and components that make your page beautiful and powerful. With a responsive design, Way Pro makes every element and section mobile-ready. Its quick setup takes less than 30 seconds to publish a landing page and start getting results. You can also easily export your leads with a click. It's on sale for $35.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
If Trump Ever Actually Creates A Social Network Of His Own, You Can Bet It Will Rely On Section 230
There have been rumors for ages that former President Donald Trump might "start" a social network of his own, and of course, that talk ramped up after he was (reasonably) banned from both Twitter and Facebook. Trump, however, is not particularly well known for successfully "starting" many businesses. Over the last few decades of his business career, he seemed a lot more focused on just licensing his name to other businesses, often of dubious quality. So it was no surprise when reports came out last month that, even while he was President, he had been in talks with Parler to join that site in exchange for a large equity stake in the Twitter-wannabe-for-Trumpists. For whatever reason, that deal never came to fruition.
But, over the weekend, Trump spokesperson (and SLAPP suit filer) Jason Miller told Fox News that Trump was preparing to launch his own social network in the next few months. Amusingly, right before Miller made this claim, he noted exactly what I had said about how Trump being banned from Twitter and Facebook wasn't censorship, since Trump could get all the press coverage he wanted:
Despite A Decade Of Complaints, US Wireless Carriers Continue To Abuse The Word 'Unlimited'
Way back in 2007, Verizon was forced to strike an agreement with the New York State Attorney General for falsely marketing data plans with very obvious limits as "unlimited." For much of the last fifteen years numerous other wireless carriers, like AT&T, have also had their wrists gently slapped for selling "unlimited" wireless service that was anything but. Despite this, there remains no clear indication that the industry has learned much of anything from the punishment and experience. Most of the companies whose wrists were slapped have, unsurprisingly, simply continued on with the behavior.
The latest case in point is Boost Mobile, a prepaid wireless provider that was shoveled over to Dish Network as part of the controversial T-Mobile/Sprint merger. For years the company has been selling prepaid "unlimited" data plans that aren't, by any definition of the word, unlimited -- in part because once users hit a bandwidth consumption threshold (aka a "limit"), they find their lines slowed to around 2G speeds (somewhere around 128 kbps) for the remainder of the billing period.
No regulators could be bothered to thwart this behavior, so it fell to the advertising industry's self-regulatory organization, the National Advertising Division (NAD), to dole out the wrist slaps this time. The organization last week told Boost that it should stop advertising its data plans as unlimited, after getting complaints from AT&T -- a company that spent a decade falsely advertising its plans as unlimited:
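To put that 2G-level throttle in perspective, a little back-of-the-envelope arithmetic (assuming an ideal, sustained 128 kbps with no protocol overhead, which real-world connections won't even hit):

```python
# Rough math on what a 128 kbps "unlimited" connection means in practice.
# Assumes perfectly sustained throughput with zero overhead -- a best case.

throttled_kbps = 128                           # ~2G-era speed after the cap
bytes_per_sec = throttled_kbps * 1000 / 8      # 16,000 bytes per second

one_gb = 1_000_000_000                         # a single 1 GB download
hours = one_gb / bytes_per_sec / 3600
print(f"1 GB at 128 kbps takes roughly {hours:.1f} hours")   # ~17.4 hours
```

However you run the numbers, a connection that takes most of a day to move a single gigabyte is "unlimited" only in the most technical sense.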
Sidney Powell Asks Court To Dismiss Defamation Lawsuit Because She Was Just Engaging In Heated Hyperbole... Even When She Was Filing Lawsuits
In January, Dominion Voting Systems sued former Trump lawyer Sidney Powell for defamation. The voting machine maker claimed the self-titled "Kraken" was full of shit -- and knowingly so -- when she opined (and litigated!) that Dominion had ties to the corrupt Venezuelan government and that it had rigged the election against Donald Trump by changing votes or whatever (Powell's assertions and legal filings were based on the statements of armchair experts and conspiracy theorists).
Sidney Powell has responded to Dominion's lawsuit with what is, honestly, about the best defense she could possibly muster. And that defense is, "I have zero credibility when it comes to voting fraud allegations and certainly any reasonable member of the public would know that." From Powell's motion to dismiss [PDF]:
New Year, Same You: Twitch Releases Tools To Help Creators Avoid Copyright Strikes, Can't Properly Police Abuse
Readers here will remember that the last quarter of 2020 was a very, very bad time for streaming platform Twitch. It all started when the RIAA came calling on the Amazon-owned platform, issuing a slew of DMCA takedown notices over all sorts of music included in the recorded streams of creators. Instead of simply taking the content down and issuing a notice to creators, Twitch perma-deleted the content in question, giving creators no option to file a counternotice. After an explosive backlash, Twitch apologized, but still didn't offer any clarity or tools for creators to understand what might be infringing content and what was being targeted. Instead, during its remote convention, Twitch only promised more information and tools in the coming months.
Five months later, Twitch has finally informed its creators of the progress it's made on that front: tools on the site to help creators remove material flagged as infringement and some more clarity on what is infringing.
Connecticut Legislature Offers Up Bill That Would Make Prison Phone Calls Free
A lot of rights just vanish into the ether once you're incarcerated. Some of this makes sense. You have almost no privacy rights when being housed by the state. Your cell can be searched and your First Amendment right to freedom of association can be curtailed in order to prevent criminal conspiracies from being implemented behind bars.
But rights don't disappear completely. The government has an obligation to make sure you're cared for and fed properly -- something that rarely seems to matter to jailers.
Treating people as property has negative outcomes. Not only are "good" prisoners expected to work for pennies a day, but their families are expected to absorb outlandish expenses just to remain in contact with their incarcerated loved ones. The government loves its paywalls, and it starts with prison phone services.
Cellphone adoption changed the math for service providers. After a certain point, customers were unwilling to pay per text message. And long distance providers realized there was little they could do to continue screwing over phone users who called people outside of their area codes. Some equity was achieved once providers realized "long distance" was only a figure of profitable speech and text messages were something people expected to be free, rather than a service that paid phone companies per character typed.
But if you're in prison, it's still 1997. The real world is completely different, but your world is controlled by companies that know how to leverage communications into a profitable commodity. As much as we, the people, apparently hate the accused and incarcerated, they're super useful when it comes to funding local spending. Caged people are still considered "taxpayers," even when they can't generate income or vote in elections.
So, for years, we've chosen to additionally punish inmates by turning basic communication options into high-priced commodities. And we've decided they don't have any right to complain, even when the fees are astronomical or prison contractors are either helping law enforcement listen in on conversations with their legal reps or making it so prohibitively expensive that only the richest of us can support an incarcerated person's desire to remain connected to their loved ones.
Connecticut legislators have had enough. Whether it will be enough to flip the status quo table remains to be seen. But, for now, a bill proposed by the Connecticut House aims to strip the profit from for-profit service providers, as well as the for-profit prisons that pad their budgets with kickbacks from prison phone service providers. (h/t Kathy Morse)
Techdirt Podcast Episode 275: The State Of Trust & Safety
For some reason, a lot of people who get involved in the debate about content moderation still insist that online platforms are "doing nothing" to address problems — but that's simply not true. Platforms are constantly working on trust and safety issues, and at this point many people have developed considerable expertise regarding these unique challenges. One such person is Alex Feerst, former head of Trust & Safety at Medium, who joins us on this week's episode to clear up some misconceptions and talk about the current state of the trust and safety field.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
North Carolina Legislators Push Bill That Would Prevent Cops, Prosecutors From Charging Six-Year-Olds For Picking Flowers
This is today's law enforcement. While there are multiple societal and criminal problems that deserve full-time attention, our tax dollars are paying cops to turn our children into criminals. We don't have the luxury of pretending this isn't happening. Schools have welcomed cops into their confines, turning routine disciplinary problems into police matters.
While there may be some schools plagued by actual violent criminal activity, the stories that most often rise to the surface are those that involve violence by (uniformed) adults being inflicted on children. And I don't just mean legal minors -- a group that usually includes anyone under the age of 18. We're talking actual kids.
Here's a brief rundown of some notable cases involving "school resource officers," a term that suggests these cops aren't actually just cops, but rather an integral part of the school disciplinary system. But when SROs deal with children, they treat them just like they treat hardened criminals.
This is a post about cops in schools I put together back in 2013. In this one, students were arrested for engaging in a water balloon fight, a 14-year-old was arrested for wearing an NRA shirt, and a DC cop gave a 10-year-old a concussion for ditching out on his music class. That's the tip of the ugly iceberg covered in this post.
But let's look at a few more incidents.
Senator Mark Warner Doesn't Seem To Understand Even The Very Basic Fundamentals Of Section 230 As He Seeks To Destroy It
On Monday morning, Protocol hosted an interesting discussion on Reimagining Section 230 with two of its reporters, Emily Birnbaum and Issie Lapowsky. It started with those two reporters interviewing Senator Mark Warner about his SAFE TECH Act, which I've explained is one of the worst 230 bills I've seen and would effectively end the open internet. For what it's worth, since posting that I've heard from a few people that Senator Warner's staffers are now completely making up lies about me to discredit my analysis, while refusing to engage on the substance, so that's nice. Either way, I was curious to see what Warner had to say.
The Warner section begins at 12 minutes into the video if you want to just watch that part, and it's... weird. It's hard to watch this and not come to the conclusion that Senator Warner doesn't understand what he's talking about. At all. It's clear that some people have told him about two cases in which he disagrees with the outcome (Grindr and Armslist), but that no one has bothered to explain to him any of the specifics of either those cases, or what his law would actually do. He also doesn't seem to understand how 230 works now, or how various internet websites actually handle content moderation. It starts out with him (clearly reading off a talking point list put in front of him) claiming that Section 230 has "turned into a get out of jail free card for large online providers to do nothing for foreseeable, obvious and repeated misuse of their platform."
Um. Who is he talking about? There are, certainly, a few smaller platforms -- notably Gab and Parler -- that have chosen to do little. But the "large online platforms" -- namely Facebook, Twitter, and YouTube -- all have huge trust & safety efforts to deal with very difficult questions. Not a single one of them is doing "nothing." Each of them has struggled, obviously, in figuring out what to do, but it's not because of Section 230 giving them a "get out of jail free card." It's because they -- unlike Senator Warner, apparently -- recognize that every decision has tradeoffs and consequences and error bars. And if you're too aggressive in one area, it comes back to bite you somewhere else.
One of the key points that many of us have tried to raise over the years is that any regulation in this area should be humble in recognizing that we're asking private companies to solve big societal problems that governments have spent centuries trying, and failing, to solve. Yet, Warner just goes on the attack -- as if Facebook is magically why bad stuff happens online.
Warner claims -- falsely -- that his bill would not restrict anyone's free speech rights. Warner argues that Section 230 protects scammers, but that's... not true? Scammers still remain liable for any scam. Also, I'm not even sure what he's talking about because he says he wants to stop scamming by advertisers. Again, scamming by advertisers is already illegal. He says he doesn't want violations of civil rights laws -- but, again, that's already illegal for those doing the discriminating. The whole point of 230 is to put the liability on the actual responsible party. Then he says that we need to change Section 230 to correct the flaws of the Grindr ruling -- but it sounds like Warner doesn't even understand what happened in that case.
His entire explanation is a mess, which also explains why his bill is a mess. Birnbaum asks Warner who from the internet companies he consulted with in crafting the bill. This is actually a really important question -- because when Warner released the bill, he said that it was developed with the help of civil rights groups, but never mentioned anyone with any actual expertise or knowledge about content moderation, and that shows in the clueless way the bill is crafted. Warner's answer is... not encouraging. He says he talked with Facebook and Google's policy people. And that's a problem, because as we recently described, the internet is way more than Facebook and Google. Indeed, this bill would help Facebook and Google by basically making it close to impossible for new competitors to exist, while leaving the market to those two. Perhaps the worst way to get an idea of what any 230 proposal would do is to only talk to Facebook and Google.
Thankfully, Birnbaum immediately pushed back on that point, noting that many critics have said smaller platforms would inevitably be harmed by Warner's bill, and asking if Warner had spoken to any of these smaller platforms. His answer is revealing. And not in a good way. First, he ignores Birnbaum's question, and then claims that when Section 230 was written it was designed to protect startups, and that now it's being "abused" by big companies. This is false. And Section 230's authors have said this is false (and one of them is a colleague of Warner's in the Senate, so it's ridiculous that he's flat-out misrepresenting things here). Section 230 was passed to protect Prodigy -- which was a service owned by IBM and Sears. Neither of those was a startup.
Daily Deal: Mini Wipebook Scan (2-Pack)
What do you get when you cross a whiteboard and a notebook? Wipebook’s technology transforms conventional paper into reusable and erasable surfaces. It has 10 double-sided pages or 20 surfaces: 10 graph and 10 ruled. It's the perfect tool for thinkers, doers, and problem solvers. Use the Mini Wipebook to work things out, save to the cloud, and wipe old sketches completely clean. The Wipebook Scan App saves your work and uploads it to your favorite cloud services like Google Drive, Evernote, Dropbox, and OneDrive. This 2-pack is on sale for $52.95.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
What I Hope Tech CEOs Will Tell Congress: 'We're Not Neutral'
The CEOs of Facebook, Google, and Twitter will once again testify before Congress this Thursday, this time on disinformation. Here’s what I hope they will say:Thank you Mister Chairman and Madam Ranking Member.While no honest CEO would ever say that he or she enjoys testifying before Congress, I recognize that hearings like this play an important role -- in holding us accountable, illuminating our blind spots, and increasing public understanding of our work.Some policymakers accuse us of asserting too much editorial control and removing too much content. Others say that we don’t remove enough incendiary content. Our platforms see millions of user-generated posts every day -- on a global scale -- but questions at these hearings often focus on how one of our thousands of employees handled a single individual post.As a company we could surely do a better job of explaining -- privately and publicly -- our calls in controversial cases. Because it’s sometimes difficult to explain in time-limited hearing answers the reasons behind individual content decisions, we will soon launch a new public website that will explain in detail our decisions on cases in which there is considerable public interest. Today, I’ll focus my remarks on how we view content moderation generally.Not “neutral”In past hearings, I and my CEO counterparts have adopted an approach of highlighting our companies’ economic and social impact, answering questions deferentially, and promising to answer detailed follow up questions in writing. While this approach maximizes comity, I’ve come to believe that it can sometimes leave a false impression of how we operate.So today I’d like to take a new approach: leveling with you.In particular, in the past I have told you that our service is “neutral.” My intent was to convey that we don’t pick political sides, or allow commercial influence over our editorial content.But I’ve come to believe that characterizing our service as “neutral” was a mistake. We are not a purely neutral speech platform, and virtually no user-generated-content service is.Our philosophyIn general, we start with a Western, small-d democratic approach of allowing a broad range of human expression and views. From there, our products reflect our subjective -- but scientifically informed -- judgments about what information and speech our users will find most relevant, most delightful, most topical, or of the highest quality.We aspire for our services to be utilized by billions of people around the globe, and we don’t ever relish limiting anyone’s speech. And while we generally reflect an American free speech norm, we recognize that norm is not shared by much of the world -- so we must abide by more restrictive speech laws in many countries where we operate.Even within the United States, however, we choose to forbid certain types of speech which are legal, but which we have chosen to keep off our service: incitements to violence, hate speech, Holocaust denial, and adult pornography, just to name a few.We make these decisions based not on the law, but on what kind of service we want to be for our users.While some people claim to want “neutral” online speech platforms, we have seen that services with little or no content moderation whatsoever -- such as Gab and Parler -- become dominated by trolling, obscenities, and conspiracy theories. 
Most consumers reject this chaotic, noisy mess.In contrast, we believe that millions of people use our service because they value our approach of airing a variety of views, but avoiding an “anything goes'' cesspool.We realize that some people won’t like our rules, and go elsewhere. I’m glad that consumers have choices like Gab and Parler, and that the open Internet makes them possible. But we want our service to be something different: a pleasant experience for the widest possible audience.Complicated info landscape means tough callsWhen we first started our service decades ago, content moderation was a much less fractious topic. Today, we face a more complicated speech and information landscape including foreign propaganda, bots, disinformation, misinformation, conspiracy theories, deepfakes, distrust of institutions, and a fractured media landscape. It challenges all of us who are in the information business.All user-generated content services are grappling with new challenges to our default of allowing most speech. For example, we have recently chosen to take a more aggressive posture toward election- and vaccine-related disinformation because those of us who run our company ultimately don’t feel comfortable with our platform being an instrument to undermine democracy or public health.As much as we aim to create consistent rules and policies, many of the most difficult content questions we face are ones we’ve never seen before, or involve elected officials -- so the questions often end up on my desk as CEO.Despite the popularity of our services, I recognize that I’m not a democratically elected policymaker. I’m a leader of a private enterprise. None of us company leaders takes pleasure in making speech decisions that inevitably upset some portion of our user base - or world leaders. We may make the wrong call.But our desire to make our platform a positive experience for millions of people sometimes demands that we make difficult decisions to limit or block certain types of controversial (but legal) content. The First Amendment prevents the government from making those extra-legal speech decisions for us. So it’s appropriate that I make these tough calls, because each decision reflects and shapes what kind of service we want to be for our users.Long-term experience over short-term trafficSome of our critics assert that we are driven solely by “engagement metrics” or “monetizing outrage” like heated political speech.While we use our editorial judgment to deliver what we hope are joyful experiences to our users, it would be foolish for us to be ruled by weekly engagement metrics. If platforms like ours prioritized quick-hit, sugar-high content that polarizes our users, it might drive short term usage but it would destroy people’s long-term trust and desire to return to our service. People would give up on our service if it’s not making them happy.We believe that most consumers want user-generated-content services like ours to maintain some degree of editorial control. 
But we also believe that as you move further down the Internet “stack” -- from applications towards ours toward app stores, then cloud hosting, then DNS providers, and finally ISPs -- most people support a norm of progressively less content moderation at each layer.In other words, our users may not want to see controversial speech on our service -- but they don’t necessarily support disappearing it from the Internet altogether.I fully understand that not everyone will agree with our content policies, and that some people feel disrespected by our decisions. I empathize with those that feel overlooked or discriminated against, and I am glad that the open Internet allows people to seek out alternatives to our service. But that doesn’t mean that the US government can or should deny our company’s freedom to moderate our own services.First Amendment and CDA 230Some have suggested that social media sites are the “new public square” and that services should be forbidden by the government to block anyone’s speech. But such a rule would violate our company’s own First Amendment rights of editorial judgment within our services. Our legal freedom to prioritize certain content is no different than that of the New York Times or Breitbart.Some critics attack Section 230 of the Communications Decency Act as a “giveaway” to tech companies, but their real beef is with the First Amendment.Others allege that Section 230’s liability protections are conditioned on our service following a false standard of political “neutrality.” But Section 230 doesn’t require this, and in fact it incentivizes platforms like ours to moderate inappropriate content.Section 230 is primarily a legal routing mechanism for defamation claims -- making the speaker responsible, not the platform. Holding speakers directly accountable for their own defamatory speech ultimately helps encourage their own personal responsibility for a healthier Internet.For example, if car rental companies always paid for their renters’ red light tickets instead of making the renter pay, all renters would keep running red lights. Direct consequences improve behavior.If Section 230 were revoked, our defamation liability exposure would likely require us to be much more conservative about who and what types of content we allowed to post on our services. This would likely inhibit a much broader range of potentially “controversial” speech, but more importantly would impose disproportionate legal and compliance burdens on much smaller platforms.Operating responsibly -- and humblyWe’re aware of the privileged position our service occupies. We aim to use our influence for good, and to act responsibly in the best interests of society and our users. But we screw up sometimes, we have blind spots, and our services, like all tools, get misused by a very small slice of our users. Our service is run by human beings, and we ask for grace as we remedy our mistakes.We value the public’s feedback on our content policies, especially from those whose life experiences differ from those of our employees. We listen. Some people call this “working the refs,” but if done respectfully I think it can be healthy, constructive, and enlightening.By the same token, we have a responsibility to our millions of users to make our service the kind of positive experience they want to return to again and again. That means utilizing our own constitutional freedom to make editorial judgments. 
I respect that some will disagree with our judgments, just as I hope you will respect our goal of creating a service that millions of people enjoy.

Thank you for the opportunity to appear here today.

Adam Kovacevich is a former public policy executive for Google and Lime, former Democratic congressional and campaign aide, and a longtime tech policy strategist based in Washington, DC.
Yet More Studies Show That 5G Isn't Hurting You
On the one hand, you have a wireless industry falsely claiming that 5G is a near-mystical revolution in communications, something that's never been true (especially in the US). On the other hand, you have oodles of internet crackpots who think 5G is causing COVID or killing people on the daily, something that has also never been true. In reality, most claims of 5G health harms are based on a false, 20-year-old graph, and an overwhelming majority of scientists have made it clear that 5G is not killing you (in fact, several incarnations are less powerful than 4G).

Last week, more evidence emerged indicating that no, 5G isn't killing you. Researchers from the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) and the Swinburne University of Technology in Australia released studies in the Journal of Exposure Science and Environmental Epidemiology. Both studies are among the first to look exclusively at 5G, and the only people who'll be surprised by their findings get all of their news from email forwards and YouTube. From an ARPANSA press statement on its first study's findings:
Sharyl Attkisson Lawsuit Against Rod Rosenstein Claiming She Was Hacked By Government Tossed
Remember Sharyl Attkisson? If not, she is a former CNN and CBS journalist who made something of a name for herself both for her often critical reporting on the Obama administration and for accusing that same administration of hacking into her computer and home network. Whatever you think of her reporting, her lawsuit against Eric Holder and the Justice Department over the hacking claims was crazy-pants. Essentially, she took a bunch of the same technological glitches all of us deal with on a daily basis -- flickering television screens, a stuck backspace key on her computer -- and wove them into a giant conspiracy against her and her reporting. She made a big deal, in the suit and in her subsequent book on the matter, of some "computer experts" she relied on to confirm that she was a victim of government hacking, except those experts remained largely anonymous and were even, in some cases, third-party people she'd never met. For that and other reasons related to how quickly she managed to do initial discovery, the case was tossed by the courts in 2019.

That didn't stop Attkisson's crusade against the government, however. In 2020, she filed suit against Rod Rosenstein, again accusing the government of spying on her and her family. To back this up, she again relied on an anonymous source, but that source has since been revealed. And, well...
It's The End Of Citation As We Know It & I Feel Fine
Legal scholarship sucks. It’s interminably long. It’s relentlessly boring. And it’s confusingly esoteric. But the worst thing about legal scholarship is the footnotes. Every sentence gets one. Banal statement of historical fact? Footnote. Recitation of hornbook law? Footnote. General observation about scholarly consensus? Footnote. Original observation? Footnote as well, I guess.

It’s a mess. In theory, legal scholarship should be free as a bird. After all, it’s one of the only academic disciplines to have avoided peer review. But in practice, it’s every bit as formalistic as any other academic discipline, just in a slightly different way. You can check out of Hotel Academia, but you can’t leave.

Most academic disciplines use peer review to evaluate the quality of articles submitted for publication. In a nutshell, anonymous scholars working in the same area read the article and decide whether it’s good enough to publish. Sounds great, except for the fact that the people reviewing an article have a slew of perverse incentives. After all, what if the article makes arguments you dislike? Even worse, what if it criticizes you? And if you are going to recommend publication, why not insist on citations to your own work? After all, it’s obviously relevant and important.

But the problems with peer review run even deeper. For better or worse, it does a pretty good job of ensuring that articles don’t jump the shark and instead conform to the conventional wisdom of the discipline. Of course, conformity can be a virtue. But it can also help camouflage flaws. Peer review is good at catching outliers, but not so good at catching liars. As documented by websites like Retraction Watch, plenty of scholars have sailed through the peer review process by just fabricating data to support appealing conclusions. Diederik Stapel, eat your heart out!

Anyway, legal scholarship is an outlier, because there’s no peer review. Of course, it still has gatekeepers. But unusually, the people deciding which articles to publish are students, not professors. Why? Historical accident. Law was a profession long before it became an academic discipline, and law schools are a relatively recent invention. Law students invented the law review in the late 19th century, and legal scholars just ran with it.

Asking law students to evaluate the quality of legal scholarship and decide what to publish isn’t ideal. They don’t know anything about legal scholarship. They don’t even know all that much about the law yet. But they aren’t stupid! After all, they’re in law school. So they rely on heuristics to help them decide what to publish. One important heuristic is prestige. The more impressive the author’s credentials, the more promising the article. Or at least, chasing prestige is always a safe choice, a lesson well-observed by many practicing lawyers as well.

Another key heuristic is footnotes. Indeed, footnotes are almost the raison d’etre of legal scholarship. An article with no footnotes is a non-starter. An article with only a few footnotes is suspect. But an article with a whole slew of footnotes is enticing, especially if they’re already properly Bluebooked. After all, much of the labor of the law review editor is checking footnotes, correcting footnotes, adding footnotes, and adding to footnotes. So many footnotes!

Most law review articles have hundreds of footnotes. Indeed, the footnotes often overwhelm the text. It’s not uncommon for law review articles to have entire pages that consist of nothing but a footnote.

It’s a struggle.
Footnotes can be immensely helpful. They bolster the author’s credibility by signaling expertise and point readers to useful sources of additional information. What’s more, they implicitly endorse the scholarship they cite and elevate the profile of its author. Every citation matters; every citation is good. But how do you know what to cite? And even more vexing, how do you know when a citation is missing? So much scholarship gets published that it’s impossible to read it all, let alone remember what you’ve read. It’s easy to miss or forget something relevant and important. Legal scholars tend to cite anything that comes to mind and hope for the best.

There’s gotta be a better way. Thankfully, in 2020, Rob Anderson and Trent Wenzel created ScholarSift, a computer program that uses machine learning to analyze legal scholarship and identify the most relevant articles. Anderson is a law professor at Pepperdine University Caruso School of Law and Wenzel is a software developer. They teamed up to produce a platform intended to make legal scholarship more efficient. Essentially, ScholarSift tells authors which articles they should be citing, and tells editors whether an article is novel.

It works really well. As far as I can tell, ScholarSift is kind of like Turnitin in reverse. It compares the text of a law review article to a huge database of law review articles and tells you which ones are similar. Unsurprisingly, it turns out that machine learning is really good at identifying relevant scholarship. And ScholarSift seems to do a better job at identifying relevant scholarship than pricey legacy platforms like Westlaw and Lexis.

One of the many cool things about ScholarSift is its potential to make legal scholarship more equitable. In legal scholarship, as everywhere, fame begets fame. All too often, fame means the usual suspects get all the attention, and it’s a struggle for marginalized scholars to get the attention they deserve. Unlike other kinds of machine learning programs, which seem almost designed to reinforce unfortunate prejudices, ScholarSift seems to do the opposite, highlighting authors who might otherwise be overlooked. That’s important and valuable. I think Anderson and Wenzel are on to something, and I agree that ScholarSift could improve citation practices in legal scholarship.

But I also wonder whether the implications of ScholarSift are even more radical than they imagine. The primary point of footnotes is to identify relevant sources that readers will find helpful. That’s great. And yet, it can also be overwhelming. Often, people would rather just read the article and ignore the sources, which can become distracting. Anderson and Wenzel argue that ScholarSift can tell authors which articles to cite. I wonder if it couldn’t also make citations pointless. After all, readers can use ScholarSift just as well as authors.

Maybe ScholarSift could free legal scholarship from the burden of oppressive footnotes? Why bother including a litany of relevant sources when a computer program can generate it automatically? Maybe legal scholarship could adopt a new norm in which authors only cite works a computer wouldn’t flag as relevant? Apparently, it’s still possible. I recently published an essay titled “Deodand.” I’m told that ScholarSift generated no suggestions about what it should cite. But I still thought of some. The citation is dead; long live the citation.

Brian L. Frye is Spears-Gilbert Professor of Law at the University of Kentucky College of Law
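(To make the "Turnitin in reverse" idea a little more concrete: the basic mechanic Frye describes -- compare a draft against a corpus and surface the closest existing articles -- can be approximated with ordinary text-similarity tooling. ScholarSift's actual pipeline isn't public, so the snippet below is only a minimal TF-IDF sketch in Python, with a made-up corpus and hypothetical article titles, not the real method.)

# Toy illustration of "which existing articles look most like this draft?"
# Not ScholarSift's actual method (that isn't public); just a minimal
# TF-IDF + cosine-similarity sketch over a hypothetical corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "Hypothetical Article A (fair use)": "fair use transformative purpose market harm parody",
    "Hypothetical Article B (defamation)": "defamation actual malice public figure reckless disregard",
    "Hypothetical Article C (patents)": "patent assertion entities venue litigation damages",
}
draft = "revisiting actual malice and reckless disregard for public figures online"

# Vectorize the corpus plus the draft; the draft is the last row of the matrix.
matrix = TfidfVectorizer(stop_words="english").fit_transform(list(corpus.values()) + [draft])
scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()

# Print candidate citations, most similar first.
for title, score in sorted(zip(corpus, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {title}")

A real system would of course work over millions of full-text articles and likely use richer representations than raw TF-IDF, but the ranking step -- score everything against the draft and surface the top matches -- has the same general shape.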
Drone Manufacturers Are Amping Up Surveillance Capabilities In Response To Demand From Government Agencies
The CBP loves its drones. It can't say why. I mean, it may lend them out to whoever comes asking for one, but there's very little data linking hundreds of drone flights to better border security. Even the DHS called the CBP's drone program an insecure mess -- one made worse by the CBP's lenient lending policies, which allowed its drones to stray far from the borders to provide dubious assistance to local law enforcement agencies.The CBP's thirst for drones -- with or without border security gains -- is unslakeable. Thomas Brewster reports for Forbes that the agency is very much still in the drone business. It may no longer be using Defense Department surplus to fail at doing its job, but it's still willing to spend taxpayer money to achieve negligible gains in border security. And if the new capabilities present new constitutional issues, oh well.
Cop's Lies About A Traffic Stop Are Exposed By A Home Security Camera Located Across The Street
Cops lie. This is undeniable. But why do cops lie? There seems to be little reason for it. Qualified immunity protects them against all but their most egregious rights violations. Internal investigations routinely clear them for all but their most egregious acts of misconduct. And police union contracts make it almost impossible to fire bad cops, no matter what they've done.

So, why do they lie? If I had to guess, it's because they've been granted so much deference by those adjudicating their behavior that "my word against theirs" has pretty much become the standard for legal proceedings. If a cop can push a narrative without more pushback than the opposing party's sworn statements, the cop is probably going to win.

This reliance on unreliable narrators has been threatened by the ubiquity of recording devices. Some devices -- body cameras, dashcams -- are owned by cops. And, no surprise, they often "fail" to activate these devices when some shady shit is going down.

But there are tons of cameras cops don't control. Every smartphone has a camera. And nearly every person encountering cops has a smartphone. Then there's the plethora of home security cameras whose price point has dropped so precipitously they're now considered as accessible as tap water.

The cops can control their own footage. And they do. But they can't control everyone else's. And that's where they slip up. A narrative is only as good as its supporting evidence. Cops refuse to bring their own, especially when it contradicts their narrative. But they can't stop citizens from recording their actions. This is a fact that has yet to achieve critical mass in the law enforcement community. A cop's word is only as good as its supporting facts. Going to court with alternative facts -- especially ones contradicted by nearby recording devices -- is a bad idea. (h/t TheUrbanDragon)

But that still doesn't stop cops from lying to courts. Cops in Lake Wales, Florida tried to claim a driver attacked them during a traffic stop -- something that could have resulted in a conviction on multiple felony charges. But camera footage obtained from a home security camera across the street from the traffic stop undermined the officers' sworn statements:
Senators Leahy And Tillis -- Both Strongly Supported By Hollywood -- Ask Merrick Garland To Target Streaming Sites
As you'll likely recall, at the very end of last year, Senator Thom Tillis, the head of the intellectual property subcommittee in the Senate, slipped a felony streaming bill into the grand funding omnibus. As we noted at the time, this bill -- which was a pure gift to Hollywood -- was never actually introduced, debated, or voted on separately. It was simply slipped into the omnibus. This came almost a decade after Senators had tried to pass a similar bill in connection with SOPA/PIPA. You may even recall that when Senator Amy Klobuchar introduced such a bill in 2011, Justin Bieber actually suggested that maybe Senator Klobuchar should be locked up for trying to turn streaming into a felony.

Of course, this whole thing was a gift to the entertainment industry, which has been a big supporter of Senator Tillis. With the flipping of the Senate, Senator Leahy has now become the chair of the IP subcommittee. As you'll also likely recall, he was the driving force behind the PIPA half of SOPA/PIPA, and has also been a close ally of Hollywood. So close, in fact, that they give him a cameo in every Batman film. Oh, and his daughter is literally one of Hollywood's top lobbyists in DC.

So I guess it's no surprise that Tillis and Leahy have now teamed up to ask new Attorney General Merrick Garland to start locking up those streamers. In a letter sent to Garland, they claim the following:
Daily Deal: The CompTIA Security Infrastructure Expert Bundle
In the CompTIA Security Infrastructure Expert Bundle, you'll get comprehensive preparation to sit four crucial CompTIA exams: Security+, CySA+, CASP, and PenTest+. You'll learn how to implement cryptographic techniques, how to analyze vulnerabilities, how to respond to cyber incidents with a forensics toolkit, and much more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Appeals Court Judge Attacks Fundamental Principle Of 1st Amendment Law, Because He Thinks The Media Likes Democrats Too Much
Two years ago, Supreme Court Justice Clarence Thomas shocked a lot of people by arguing -- somewhat out of nowhere -- that the Supreme Court should revisit the NY Times v. Sullivan ruling. If you're unaware, that 1964 ruling is perhaps the most important and fundamental Supreme Court ruling regarding the 1st Amendment. It's the case that established a few key principles and tests that are incredibly important in stopping vexatious, censorial SLAPP suits -- often by those in power, against those who criticize.

Now, a DC Circuit appeals court judge -- and close friend of Thomas's -- is suggesting that the court toss that standard. And his reasons are... um... something quite incredible. Apparently, he's mad that the media and big tech are mean to Republicans, and he's worried that Fox News and Rupert Murdoch aren't doing enough to fight back against those evil libs, who are "abusing" the 1st Amendment to spew lies about Republicans. As you'll see, the case in question isn't even about the media, the internet, or Democrats/Republicans at all. It's about a permit in Liberia to drill for oil. Really. But there's some background to go through first.

The key part of the Sullivan case is that, if the plaintiff is considered a "public figure," then they need to show "actual malice" to prove defamation. The actual malice standard is widely misunderstood. As I've heard it said, "actual malice" requires no actual malice. It doesn't mean that the person making the statements really dislikes who they're talking about. It means that the person making the statements knew that the statements were false, or made the statements "with reckless disregard for the truth." Once again, "reckless disregard for the truth" has a specific meaning that is not what you might think. In various cases, the Supreme Court has made it clear that this means that the person either had a "high degree of awareness" that the statements are probably false or "entertained serious doubts as to the truth" of the statements. In other words, it's not just that they didn't do due diligence. It's that they did, found evidence suggesting the content was false, and then still published anyway.

This is, obviously, a high bar to get over. But that's on purpose. That's how defamation law fits under the 1st Amendment (some might argue that defamation law itself should violate the 1st Amendment as it is, blatantly, law regarding speech -- but by limiting it to the most egregious situations, the courts have carved out how the two can fit together). Five years ago, 1st Amendment lawyer Ken White noted that there was no real concerted effort to change this standard, and it seemed unlikely that many judges would consider it.
Whistleblower Says AT&T Has Been Ripping Off US Schools For A Decade
In just the last five years or so, AT&T has been fined $18.6 million for helping rip off programs for the hearing impaired; fined $10.4 million for ripping off a program for low-income families; fined $105 million for helping "crammers" by intentionally making such bogus charges more difficult to see on customer bills; and fined $60 million for lying to customers about the definition of "unlimited" data. These are just a few of AT&T's adventures in regulatory oversight, and in most instances AT&T's lawyers are able to lower the fines, or eliminate them entirely, after years of litigation.

AT&T's latest scandal, like the rest of them, won't make many sexy headlines, but it's every bit as bad. Theodore Marcus, a lawyer at AT&T, emerged this week to accuse the telecom giant of systemically ripping off US schools via the FCC's E-Rate program. According to Marcus, this occurred for years, and tended to harm schools in the nation's most marginalized communities. And when he informed AT&T executives of this, they... did nothing:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side is Blake C. Stacey with a response to the return of the PACT Act, and especially its traffic thresholds for regulations:
Game Jam Winner Spotlight: Fish Magic
Today, we finish our journey through the winners of the third annual public domain game jam, Gaming Like It's 1925. We've covered ~THE GREAT GATSBY~, The Great Gatsby Tabletop Roleplaying Game, Art Apart and There Are No Eyes Here, Remembering Grußau, and Rhythm Action Gatsby, and now it's time for the final winner: Best Analog Game recipient Fish Magic by David Harris.

David Harris is our one returning winner this year, having topped the same category in Gaming Like It's 1924 with the game The 24th Kandinsky. This year's entry is at once similar and very different: like that previous game, Fish Magic is about exploring the work of a famous painter, but it takes an entirely new approach to doing so. And that change of approach underlines what makes both games so compelling: their mechanics are carefully crafted to perfectly suit the artworks at their core. Where The 24th Kandinsky was about manipulating the shapes and colors of Kandinsky's abstract art, Fish Magic is about letting the evocative surrealism of the titular painting by Paul Klee spark your imagination. To that end, the painting becomes the game board, and is populated by words randomly selected from a list, poetically divided into the "domains" of Celestial, Earthly, and Aquatic:

The players take turns moving between nodes on the board, taking a word from each one to build a collection, which they can then use to build phrases when they are ready. The goal is to convince the other players that your constructed phrase represents either a type of "magic fish", or a type of "fish magic". Points are gained by winning the support of other players for your fish magic or your magic fish, and reduced according to how many extra words you have sitting in your collection, thus encouraging players to be extra creative and find ways to make convincing phrases with the words they have, rather than just chasing the ones they want.

If you're wondering what exactly makes for a good type of fish magic or magic fish, or what that even means — well, that's kind of the point, and exactly why this approach to the game is so perfect for the source material! Paul Klee's painting is appreciated for its magical depiction of a mysterious and intriguing underwater world, and the way its techniques — a layer of black paint scratched off to reveal vibrant colours underneath, and a square of muslin glued to the center of the canvas — suggest wondrous depths obscured by a hazy curtain. Fish Magic the painting provokes imagination and flights of fancy, and Fish Magic the game adds just enough mechanical scaffolding to make this process explicit and collaborative.

Anyone could slap a board layout on a famous painting, add some rules, and call it a game — but it takes a real appreciation for the painting, and a real intent to do something meaningful with it, to craft such a simple premise that so perfectly aligns with the source material. Like The 24th Kandinsky last year, just a quick read of the rules was enough to make our judges eager to play, and it was an easy choice for the Best Analog Game.

You can get all the materials for Fish Magic on Itch, and check out the other jam entries too. Congratulations to David Harris for the win!

And that's a wrap on our series of winner spotlights for Gaming Like It's 1925. Another congratulations to all the winners, and a big thanks to every designer who submitted an entry.
Keep on mining that public domain, and start perusing lists of works that will be entering the public domain next year, when we'll be back with Gaming Like It's 1926!
Life Imitates Art: Warren Spector Says He Wouldn't Make 'Deus Ex' In Today's Toxic Climate
The Deus Ex franchise has found its way onto Techdirt's pages a couple of times in the past. If you're not familiar with the series, it's a cyberpunk-ish take on the near future with broad themes around human augmentation and the weaving of famous conspiracy theories. That perhaps makes it somewhat ironic that several of our posts dealing with the franchise have to do with mass media outlets getting confused into thinking its augmentation stories were real life, or thinking that the conspiracy theories centered around leaks for the original game's sequel were true. The conspiracy theories woven into the original Deus Ex storyline were of the grand variety: the takeover of government by biomedical companies pushing a vaccine for a sickness they created, the Illuminati, FEMA takeovers, AI-driven surveillance of the public, etc.

And it's exactly that kind of conspiracy-driven thinking that led Warren Spector, the creator of the series, to recently state that he probably wouldn't create the game today if given the chance.
Content Moderation Case Study: Telegram Gains Users But Struggles To Remove Violent Content (2021)
Summary: After Amazon refused to continue hosting Parler, the Twitter competitor favored by the American far-right, former Parler users looking to communicate with each other -- but dodge strict moderation -- adopted Telegram as their go-to service. Following the attack on the Capitol building in Washington, DC, chat app Telegram added 25 million users in a little over 72 hours.

Telegram has long been home to far-right groups, who often find their communications options limited by moderation policies that, unsurprisingly, remove violent or hateful content. Telegram's moderation is comparatively more lax than that of several of its social media competitors, making it the app of choice for far-right personalities.

But Telegram appears to be attempting to handle the influx of users -- along with an influx of disturbing content. Some channels broadcasting extremist content have been removed by Telegram as the increasingly popular chat service flexes its (until now rarely used) moderation muscle. According to the service, at least fifteen channels were removed by Telegram moderators, some of which were filled with white supremacist content.

Unfortunately, policing the service remains difficult. While Telegram claims to have blocked "dozens" of channels containing "calls to violence," journalists have had little trouble finding similarly violent content on the service, which has either eluded moderation or is being ignored by Telegram. While Telegram appears responsive to some notifications of potentially illegal content, it also appears to be inconsistent in applying its own rule against inciting violence.

Decisions to be made by Telegram:
Conspiratorial Attacks On Telecom Infrastructure Keep Getting Dumber And More Dangerous
On one side, you've got wireless carriers implying that 5G is some type of cancer-curing miracle (it's not). On the other side, we have oodles of conspiracy theorists, celebrities, malicious governments, and various grifters trying to claim 5G is some kind of rampant health menace (it's not). In reality, 5G's not actually interesting enough to warrant either position, but that's clearly not stopping anybody in the post-truth era. But it's all fun and games until somebody gets hurt.

Over the last year or two, conspiracy theory-driven attacks in both the UK and US have ramped up not just on telecom infrastructure, but on telecom workers themselves. From burning down cellular towers to putting razor blades and needles under utility pole posters to injure workers, it's getting exceptionally dumb and dangerous -- to the point where gangs of people have threatened telecom workers who don't even work in wireless.

As the Intercept notes, the rise in attacks has finally gotten the attention of law enforcement. In New York, law enforcement has finally keyed into the fact that the conspiracy theories have fused white supremacist and QAnon dipshittery into one problematic mess that's resulting in concrete harm. White supremacists (here and abroad) have apparently figured out they can amplify and contribute to the conspiracy theories to generate more chaos for the American institutions they're eager to demolish. All of it is being amplified in turn by governments like Iran and Russia eager for the same outcome.

While superficially a lot of these folks have the coherence of mud, in many cases the attacks are very elaborate, and specifically targeted: