Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-08-20 05:46
Government's 'Reverse' Warrant Rejected By Two Consecutive Federal Judges
The government doesn't always get what it wants. A novel twist on mass surveillance -- the so-called "reverse" warrant -- is becoming more popular now that law enforcement has realized Google maintains a stockpile of cell location data.

Reverse warrants are just that: completely backwards. Cops don't have a suspect to target. All they have is a crime scene. Using location data allows them to work backwards to a list of suspects. Officers geofence an area around the crime scene and head to Google to ask for all information on cellphones in that area during the time the crime was committed. This treats everyone in the area as a suspect until investigators have had a chance to dig through their data to narrow down the list.

Warrants are supposed to have a certain amount of particularity. These warrants have none. All they have are some coordinates and a clock. Fortunately, as the EFF reports, some judges are pushing back.
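The mechanics are simple enough to sketch: the returned location data amounts to a pile of (device, latitude, longitude, timestamp) records, and a geofence query is just a distance-plus-time filter over them. A minimal Python illustration (the record fields, radius, and data here are hypothetical, not Google's actual schema):

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    ts: int  # Unix timestamp of the location report

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def geofence(pings, lat, lon, radius_m, t_start, t_end):
    """Every device seen inside the fence during the window."""
    return sorted({p.device_id for p in pings
                   if t_start <= p.ts <= t_end
                   and haversine_m(p.lat, p.lon, lat, lon) <= radius_m})
```

Everything the filter returns becomes a "suspect," which is exactly the particularity problem: the query is defined by coordinates and a clock, not by any individual.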
Daily Deal: The Hardcore Game Development And Animation Bundle
The Hardcore Game Development and Animation Bundle has 6 courses to help you learn how to create your own video games. You'll learn the basics of game design, of using Forager iOS, of character modeling for games, and more. Courses cover popular software programs for 3D game animation like Zbrush, PBR, Maya, Substance, Unity, and Unreal. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
White House Supposedly Blocked Walmart From Buying TikTok Because It Would Prove Its Rationale For Forcing A Deal Was Bullshit
Among the rumors of who might take over TikTok (which the Trump administration is forcing ByteDance to sell) was the surprise entrant of Walmart. While we're still waiting for the official decision, a report last week noted that the White House stepped in to tell Walmart that couldn't happen if Walmart was to be the lead buyer:
FBI Horrified To Discover Ring Doorbells Can Tip Off Citizens To The Presence Of Federal Officers At Their Door
Ring's camera/doorbells may as well be branded with local law enforcement agency logos. Since Amazon acquired the company, Ring has cornered the law enforcement-adjacent market for home security products, partnering with hundreds of agencies to get Ring's products into the hands of residents. A lot of this flows directly through police departments, which can get them almost for free as long as they push citizens towards using Ring's snitch app, Neighbors, and allow Ring to handle the PR work.

So, it's hilarious to find out the FBI is concerned about Ring cameras, considering the company's unabashed love for all things law enforcement. The Intercept -- diving back into the "Blue Leaks" stash of exfiltrated law enforcement documents -- has posted an FBI "Technical Analysis Bulletin" [PDF] warning about the threat Ring cameras pose to cops. After celebrating the golden age of surveillance the Internet of Things has ushered in, the FBI notes that doorbell cameras see everyone who comes to someone's door -- even people who'd rather the absent resident remained unaware of their visit.
New Gear For Section 230 Fans: Otherwise Objectionable
Get your Otherwise Objectionable gear in the Techdirt store on Threadless »

If Section 230(c)(1) contains "the twenty-six words that created the internet", then (c)(2) contains the words that gave them some critical help. Among those words are two that are especially important, "otherwise objectionable", as they turn a limited list of specific content that can be removed into an open-ended protection for platform operators to moderate as they choose — and now you can wear them proudly with our new gear on Threadless.

As usual, there's a wide variety of gear available in this and other designs — including t-shirts, hoodies, notebooks, buttons, phone cases, mugs, stickers, and of course the now-standard face masks. Check out all our designs and items in the Techdirt store on Threadless!
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, both our winners on the insightful side are folks expressing their doubt about our Greenhouse guest post on thoughtfully regulating the internet. In first place, it's an anonymous commenter focusing on the various interests at play:
This Week In Techdirt History: August 30th - September 5th
Five Years Ago

This week in 2015, the NSA was renewing its bulk records collection after a worrying and slightly suspicious court ruling. The FBI was somehow using Hurricane Katrina as an excuse to get more Stingray devices, just before the Wall Street Journal got a "win" (though the devil was in the details) in a lawsuit related to Stingray surveillance orders, and the DOJ told federal agents that they need warrants to use the devices. Meanwhile, the NYPD was volunteering to be copyright cops in Times Square, Sony was downplaying the damage done by the same hack it was hyping up before, and the entertainment industry was freaking out about Popcorn Time.

Ten Years Ago

This week in 2010, we were saddened to see the US Commerce Secretary siding with the RIAA and telling ISPs to become copyright cops, even as more ISPs were stepping up to fight subpoenas from the US Copyright Group (and in France, some ISPs were fighting back against Hadopi, which was also becoming a tool of scammers). One court refused to dismiss a Righthaven lawsuit involving a copyright that was bought after the alleged infringement happened, while another court was seeking ways to minimize a Righthaven win with minuscule damages — and the LVRJ was defending the Righthaven suits and mocking a competitor for criticizing them.

Fifteen Years Ago

This week in 2005, we were pleased to see that the judge in one of the first instances of someone fighting back against RIAA lawsuits seemed to recognize the issues, and less pleased to see another court give its assent to yet another form of DMCA abuse. It wasn't as crazy as what was happening in India, though, where it appeared that their equivalent of the MPAA got an open search warrant for the entire city of New Delhi to look for pirated movies. And even that didn't match the panic over mobile porn that was gripping parts of the world, leading to things like Malaysian police performing random porn spot-checks on people's phones.
Students, Parents Figure Out School Is Using AI To Grade Exams And Immediately Game The System
With the COVID-19 pandemic still working its way through the United States and many other countries, we've finally arrived at the episode of this apocalypse drama where school has resumed (or will be shortly) for our kids. It seems that one useful outcome of the pandemic, if we're looking for some kind of silver lining, is that it has put on full display just how inept we are as a nation in so many ways. Federal responses, personal behavior, our medical system, and our financial system are all basically getting failing grades at every turn.

Speaking of grades, schools that are now trying to suddenly pull off remote learning for kids are relying on technology to do so. Unfortunately, here too we see that we simply weren't prepared for this kind of thing. Aside from all of the other complaints you've probably heard or uttered yourselves -- internet connections are too shitty for all of this, teachers aren't properly trained for distance learning, the technology being handed out by schools mostly sucks -- we can also add to that list the unfortunate attempts by school districts to get AI to grade exams.

This story begins with a parent seeing her 12-year-old son, Lazare Simmons, fail a virtual exam. Taking an active role, Dana Simmons went on to watch her son complete more tests and assignments using the remote learning platform the school had set students up on, Edgenuity. While watching, it quickly became apparent how the platform was performing its scoring function.
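According to the reporting, answers stuffed with relevant keywords scored full marks regardless of coherence, suggesting the grader checks for expected terms rather than understanding. A toy scorer (entirely hypothetical, not Edgenuity's actual algorithm) makes the exploit obvious:

```python
import re

def keyword_score(answer: str, keywords: list[str]) -> float:
    """Toy grader: the grade is simply the fraction of expected
    keywords that appear anywhere in the answer."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    return sum(kw.lower() in words for kw in keywords) / len(keywords)

expected = ["photosynthesis", "chlorophyll", "sunlight", "glucose"]

# A genuine answer and a context-free keyword salad grade identically.
real = "Plants use sunlight and chlorophyll to make glucose via photosynthesis."
salad = "photosynthesis chlorophyll sunlight glucose"
```

Against a scorer like this, appending a list of every plausible keyword guarantees full credit, which is reportedly more or less what students started doing.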
Content Moderation Case Studies: Stopping Malware In Search Leads To Unsupported Claims Of Bias (2007)
Summary: As detailed in a thorough oral history in Wired back in 2017, it’s hard to overstate the importance of Google’s Safe Browsing blocklist effort, which began as a project in 2005 but really launched in 2007. The effort was in response to a recognition that there were malicious websites out there that were attempting to trick people into visiting in order to install various forms of malware. Google’s Safe Browsing list, and its corresponding API (used by pretty much every other major browser, including Safari, Firefox and more), has become a crucial part of stopping people from being lured to dangerous websites that may damage or compromise their computers.

Of course, as with any set of filters and blocklists, questions are always raised about the error rate, and whether or not you have too many false positives (or false negatives). And, not surprisingly, when sites are added to the blocklist, many website operators become upset. Part of the problem was that, all too often, the websites had become compromised without the operator knowing about it -- leading them to claim they were falsely being blocked. From the oral history:
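The basic client-side mechanism is worth a sketch. Rather than shipping every bad URL to every browser, Safe Browsing clients hold a local set of truncated SHA-256 hashes; a miss clears the URL instantly, while a hit only means a full-hash check with the server is needed. A simplified Python illustration (real clients also canonicalize URLs and hash multiple host/path combinations, omitted here):

```python
import hashlib

PREFIX_LEN = 4  # Safe Browsing clients commonly store 4-byte hash prefixes

def url_prefix(url: str) -> bytes:
    """Truncated SHA-256 of the URL (real clients canonicalize first)."""
    return hashlib.sha256(url.encode("utf-8")).digest()[:PREFIX_LEN]

class LocalBlocklist:
    """Client-side prefix set: a miss is definitive, a hit only means
    'ask the server for the full hash before warning the user'."""
    def __init__(self, bad_urls):
        self.prefixes = {url_prefix(u) for u in bad_urls}

    def maybe_unsafe(self, url: str) -> bool:
        # Prefixes can collide, so True is not proof of a bad URL.
        return url_prefix(url) in self.prefixes
```

The prefix-set design also keeps client-side false positives cheap: a colliding prefix triggers an extra server lookup, not an outright block.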
The Next Register Of Copyrights Must Realize That Copyright Serves The Public
Mike has written many times on this website about various shenanigans at the Copyright Office. An obscure government agency to many, the Copyright Office actually has a huge influence over copyright policy and law, from Congress to the courts. With word that the appointment of a new Register of Copyrights is imminent, this is an opportunity to fix many of the challenges with the agency.

The Copyright Office was originally established as part of the Library of Congress to register works, back when formal registration with the Office was required by statute to receive copyright protection. This registration requirement was created as a way to get a deposit copy of these works for the Library. The goal was not only to have copies for recordation purposes, but to create a vast library.

However, over the last 50 years, the role of the Copyright Office has greatly changed with the law. The formal registration requirement was ended, first by requiring only that published works contain a copyright notice and then eventually by expanding federal copyright protection to all works, published and unpublished, once they are fixed in a tangible form. Today registration with the Office is not required, but it does provide certain statutorily defined benefits. The Office was also given more and more copyright policy and law responsibilities. The result is that the Office has become much more of a policy and regulatory quasi-agency than its original role as part of a library and a place to register works for federal copyright protection.

The Register of Copyrights runs the Copyright Office. This is an outdated title: while you can still register works at the Office, the Register's role is now much more about providing policy and legal expertise on copyright to the rest of government, overseeing the DMCA 1201 triennial review, and other important responsibilities. This need means that the Office attracts many copyright attorneys and policy experts.
Unfortunately, it has been a long time since the Register was not previously an attorney for traditional rightsholders, and Registers often go back to work for traditional rightsholders after they leave government service. The last two heads are now the General Counsel of the Motion Picture Association and the head of the Association of American Publishers. A former Register was reportedly fired for failing to properly administer the basic functions of the office, for gross negligence in the stewardship of taxpayer dollars, and for lying about it, all while focusing on fighting for traditional rightsholders through the policy side of the office and on consolidating power by separating the office from the Library. Most of the senior staff also move on to jobs representing traditional rightsholders (with just a few exceptions) after their time at the office.

The Copyright Office is seen by many as the lead on U.S. copyright policy, advising the government on everything from approaches to appellate court cases and trade agreement language to making suggestions on changes to Section 512 of the DMCA in its recent report. Based on another recommendation from the Office, members of Congress are trying to pass the CASE Act to hand over much of the judicial function in enforcing copyright law to the Office, to be decided in quasi-judicial proceedings. This is especially bad because the Office’s 512 report was basically an attack on the courts, claiming they get the DMCA wrong almost every time they decide against rightsholders. How can we trust the Office to follow current law as set by these court decisions, when it has openly rejected them?

The Copyright Office has seen itself as an advocate for traditional rightsholders for most of the last 50 years in its new and expanded policy and regulatory role. Former Register Maria Pallante made this point clear in testimony before the Senate Judiciary Intellectual Property Subcommittee:
Intermediary Liability And Responsibilities Post-Brexit
This is a peculiar time to be an English lawyer. The UK has one foot outside the EU, and (on present intentions) the other foot will join it when the current transitional period expires at the end of 2020.

It is unclear how closely tied the UK will be to future EU law developments following any trade deal negotiated with the EU. As things stand, the UK will not have to implement future EU legislation, and is likely to have considerable freedom in many areas to depart from existing EU legislation.

The UK government has said that it has no plans to implement the EU Copyright Directive adopted in April 2019. Nor does it seem likely that it would have to follow whatever legislation may result from the European Commission's proposals for an EU Digital Services Act. Conversely, the government has also said that it has no current plans to change the existing intermediary liability provisions of the EU Electronic Commerce Directive, or the Directive's approach to prohibition of general monitoring obligations.

Looking across the Atlantic, there is the prospect of a future trade agreement between the UK and the USA. That has set off alarm bells in some quarters that the US government will want the UK to adopt an intermediary liability shield modeled on S.230 Communications Decency Act.

Domestically, the UK government is developing its Online Harms plans. The proposed legislation would impose a legal duty on user generated content-sharing intermediaries and search engines to prevent or inhibit many varieties of illegal or harmful UGC. Although branded a duty of care, the proposal is more akin to a broadcast-style content regulatory regime than to a duty of care as a tort lawyer would understand it. The regime would most likely be managed and enforced by the current broadcast regulator, Ofcom.
As matters stand the legislation would not define harm, leaving Ofcom to decide (subject to some specific carve-outs) what should be regarded as harmful.

All this is taking place against the background of the techlash. This is not the place to get into the merits and demerits of that debate. The aim of this piece is to take an educational ramble around the UK and EU legal landscape, pausing en route to inspect and illuminate some significant features.

Liability Versus Responsibilities

The tour begins by drawing a distinction between liability and responsibilities.

In the mid-1990s the focus was mostly on liability: the extent to which an intermediary can be held liable for unlawful activities and content of its users. The US and EU landmarks were S.230 CDA 1996 and S.512 DMCA 1998 (USA), and Articles 12 to 14 of the Electronic Commerce Directive 2000 (EU).

Liability presupposes the user doing something unlawful on the intermediary's platform. (Otherwise, there is nothing for the intermediary to be liable for.) The question is then whether the platform, as well as the user, should be made liable for the user's unlawful activity – and if so, in what circumstances. The risk (or otherwise) of potential liability may encourage the intermediary to act in certain ways. Liability regimes incentivise, but do not mandate.

Over time, the policy focus has expanded to take in responsibilities: putting an intermediary under a positive obligation to take action in relation to user content or activity.

A mandatory obligation to prevent users behaving in particular ways is different from being made liable for their unlawful activity. Liability arises from a degree of involvement in the primary unlawful activity of the user. Imposed responsibility does not necessarily rest on a user's unlawful behavior.
The intermediary is placed under an independent, self-standing obligation – one that it alone can breach.

Responsibilities Imposed By Court Orders

Responsibilities first manifested themselves as mandatory obligations imposed on intermediaries by specific court orders, but still predicated on the existence of unlawful third party activities.

In the US this development withered on the vine with SOPA/PIPA in 2012. Not so in the EU, where copyright site blocking injunctions can be (and have often been) granted against internet service providers under Article 8(3) of the InfoSoc Directive. The Intellectual Property Enforcement Directive requires similar injunctions to be available for other IP rights. In the UK it is established that a site blocking injunction can be granted based on registered trade marks, and potentially in respect of other kinds of unlawful activity.

Limits to the actions that court orders can oblige intermediaries to take in respect of third party activities have been explored in numerous cases: amongst them, at EU Court of Justice level, detection and filtering of copyright infringing files in SABAM v Scarlet and SABAM v Netlog; detection and filtering of equivalent defamatory content in Glawischnig-Piesczek v Facebook; and worldwide delisting in Glawischnig-Piesczek v Facebook.

Such court orders tend not to be conceptualized in terms of remedying a breach by the intermediary. Rather, they are based on efficiency: the intermediary, as a choke point, should be co-opted as being in the best position to reduce unlawful activity by third parties. In UK law at least, the intermediary has no prior legal duty to assist – only to comply with an injunction if the court sees fit to grant one.

Responsibilities Imposed by Duties Of Care

Most recently the focus on intermediary responsibilities has broadened beyond specific court orders.
It now includes the idea of a prior positive obligation, imposed on an intermediary by the general law, to take steps to reduce risks arising from user activities on the platform.

This kind of obligation, frequently labelled a duty of care, is contemplated by the UK Online Harms proposals and may form part of a future EU Digital Services Act.

In the form in which it has been adapted for the online sphere, a duty of care would impose positive obligations on the intermediary to prevent users from harming other users (and perhaps non-users). Putting aside the vexed question of what constitutes harm in the context of online speech, a legal responsibility to prevent activities of third parties is far from the norm. A typical duty of care is owed in respect of someone's own acts, not to prevent acts of third parties.

Although conceptually distinct from liability, an intermediary duty of care can interact and overlap with it. For example, a damages claim framed as breach of a duty of care may in some circumstances be barred by the ECD liability shields. In McFadden the rightsowner sought to hold a Wi-Fi operator liable for damages in respect of copyright infringement by users, founded on an allegation that the operator had breached a duty to secure its network. The CJEU found that the claim for damages was precluded by the Article 12 conduit shield, even though the claim was framed as breach of a duty rather than as liability for the users' copyright infringement as such.

At the other end of the spectrum, the English courts have held that if a regulatory sanction is sufficiently remote from specific user infringements as not to be in respect of those infringements, the sanction is not precluded by the ECD liability shields.
The UK Online Harms proposals suggest that sanctions would be for breach of systemic duties, rather than penalties tied to failure to remove specific items of content.

Beyond Unlawfulness

Although intermediary liability is restricted to unlawfulness on the part of the user, responsibility is not. A self-standing duty of care is concerned with risk of harm. Harm may include unlawfulness, but is not limited to that.

The scope of such a duty of care depends critically on what is meant by harm. In English law, comparable offline duties of care are limited to objectively ascertainable physical injury and damage to physical property. The UK Online Harms proposals jettison that limitation in favor of undefined harm. Applied to lawful online speech, that is a subjective concept. As matters stand Ofcom, as the likely regulator, would in effect decide what does and does not constitute harm.

Article 15 ECommerce Directive

A preventative duty of care takes us into the territory of proactive monitoring and filtering. Article 15 ECD, which sits alongside the liability scheme enacted in Articles 12 to 14, prohibits Member States from imposing two kinds of obligation on conduits, caches or hosts: a general obligation to monitor information transmitted or stored, and a general obligation actively to seek facts or circumstances indicating illegal activity.

Article 15 does not on its face prohibit an obligation to seek out lawful but harmful activity, unless it constitutes a general obligation to monitor information. But in any event, for an EU Member State the EU Charter of Fundamental Rights would be engaged. The CJEU found the filtering obligations in Scarlet and Netlog to be not only in breach of Article 15, but also contrary to the EU Charter of Fundamental Rights. For a non-EU state such as the UK, the European Convention on Human Rights would be relevant.

So far, the scope of Article 15 has been tested in the context of court orders.
The principles established are nevertheless applicable to duties of care imposed by the general law, with the caveat that Recital (48) permits hosts to be made subject to "duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities." What those "certain types" might be is not stated. In any event the recital does not on the face of it apply to lawful activities deemed to be harmful.

The Future Post-Brexit

Both the UK and the EU are currently heading down the road of imposing responsibilities on intermediaries, while professing to leave the liability provisions of the ECD untouched. That is conceptually possible for some kinds of responsibilities, but difficult to navigate in practice. Add the prohibition on general monitoring obligations and the task becomes harder, especially if the prohibition stems not just from the ECD (which could be diluted in future legislation) but from the EU Charter of Fundamental Rights and the ECHR.

The French Loi Avia, very much concerned with imposing responsibilities, was recently partially struck down by the French Constitutional Council. Whilst no doubt it will return in a modified form, it is nevertheless a salutary reminder of the relevance of fundamental rights.

As for UK-US trade discussions, Article 19.17 of the US-Mexico-Canada Agreement has set a precedent for inclusion of intermediary liability. Whether the wording of Article 19.17 really does mandate full S.230 immunity, as some have suggested, is another matter. Damian Collins MP, asking a Parliamentary Question on 2 March 2020, said:
E-Voting App Maker Voatz Asks The Supreme Court To Let It Punish Security Researchers For Exposing Its Flaws
Voatz has decided to weigh in on a Supreme Court case that could turn a lot of normal internet activity into a federal crime. At the center of this CFAA case is a cop who abused his access privileges to run unauthorized searches of law enforcement databases. The end result -- after a visit to the Eleventh Circuit Court of Appeals -- was a CFAA conviction for violating the system's terms of use.

That's why this case is important. If the CFAA is interpreted this broadly, plenty of people become criminals. And it won't just be security researchers risking criminal charges simply by performing security research. It will also be everyone who lies to social media services about their personal info. Lawprof Orin Kerr's brief to the Supreme Court points out what a flat "no unauthorized use" reading would do to him.
America Needs To Stop Pretending The Broadband 'Digital Divide' Isn't The Direct Result Of Corruption
Last week, a tweeted photo of two kids huddled on the ground outside of a Taco Bell -- just to gain access to a reliable internet connection -- made the rounds on social media. The two found themselves on the wrong side of the "digital divide," forced to sit in the dirt just to get online, just 45 minutes from the immensely wealthy technology capital of the United States:
Another Florida Appeals Court Says Compelled Passcode Production Violates The Fifth Amendment
Things are getting pretty unsettled in Florida in terms of compelling the production of phone passcodes. Less than a half-decade ago, refusing to produce passwords netted people contempt charges. As these cases moved forward through the court system, the legal calculus changed. As it stands now, state appeals courts in two Florida districts have found that forcing people to give up passcodes violates the Fifth Amendment. But there's still some settling left to do, and the First District has asked the state's top court to take a look at the issue.

The latest development comes from Florida's Fifth District, where another state appeals court has reached the same conclusion as the others: passcodes are testimonial, and forcing people to turn them over implicates the Fifth Amendment. (via FourthAmendment.com)

The case deals with some targeted vandalism and alleged stalking. Investigators feel the phone they found at the crime scene belongs to the suspect and contains evidence to support the aggravated stalking charges. (The victim also apparently found a GPS device attached to her car, presumably placed there by the suspect.)

The decision [PDF] recounts the state's bizarre argument at the trial court level -- one that claimed demanding a passcode from the suspect was not an "intrusion."
Sony May Just Be Loosening The Reins As Gaming Brings In A Plurality Of Its Revenue
Any trip down Techdirt's memory lane when it comes to Sony is not going to leave you with a good taste in your mouth. This is a company that has been almost comically protective of all things intellectual property, engaged in all manner of anti-consumer behavior, and is arguably most famous either for using an update to remove features from its gaming console that had generated sales of that console or for installing rootkits on people's computers. When it comes to any positive stories about the company, in fact, they mostly have to do with the immense success Sony had in the most recent Console Wars with its PlayStation 4 device.

Positive results and gaming aren't a coincidental pairing for Sony, it seems. There are a couple of converging stories about Sony, one dealing with its revenue and another with its plans for its gaming divisions opening up a bit, that point to positive developments. To set the stage, let's start with the fact that the video game industry is now the biggest revenue generator for Sony.
Appeals Court Says Address Mistakes On Warrants Are Mostly Harmless, Not Worth Getting Excited About
In a case involving a drug bust utilizing a warrant with erroneous information, the Sixth Circuit Court of Appeals had this to say [PDF] about the use of boilerplate language and typographical errors:
It's Time To Regulate The Internet... But Thoughtfully
The internet policy world is headed for change, and the change that’s coming isn’t just a matter of more regulations but, rather, involves an evolution in how we think about communications technologies. The most successful businesses operating at what we have, up until now, called the internet’s “edge” are going to be treated like infrastructure more and more. What’s ahead is not exactly the “break them up” plan of the 2019 Presidential campaign of Senator Warren, but something a bit different. It’s a positive vision of government intervention to generate an evolution in our communications infrastructure to ensure a level playing field for competition; meaningful choices for end users; and responsibility, transparency, and accountability for the companies that provide economically and socially valuable platforms and services.

We’ve seen evolutions in our communications infrastructure a few times before: first, when the telephone network became infrastructure for the internet protocol stack; again when the internet protocol stack became infrastructure for the World Wide Web; and then again when the Web became infrastructure on which key “edge” services like search and social media were built. Now, these edge services themselves are becoming infrastructure. And as a consequence, they will increasingly be regulated.

Throughout its history, the “edge” of the internet sector has - for the most part - always enjoyed a light regulatory yoke, particularly in the United States. Many treated the lack of oversight as a matter of design, or even as necessarily inherent, given the differences between the timetables and processes of technology innovation and legislation.
From John Perry Barlow’s infamous “Declaration of the Independence of Cyberspace” to Frank Easterbrook’s “Cyberspace and the Law of the Horse” to Larry Lessig’s “Code is law,” an entire generation of thinkers were inculcated in the belief that the internet was too complex to regulate directly (or too critical, too fragile, or, well, too “something”).

We didn’t need regulatory change to catalyze the prior iterations of the internet’s evolution. The phone network was already regulated as a common carrier service, creating ample opportunity for edge innovation. And the IP stack and the Web were built as fully open standards, structurally designed to prevent the emergence of vertical monopolies and gatekeeping behavior. In contrast, from the get-go, today’s “edge” services have been dominated by private sector companies, a formula that has arguably helped contribute to their steady innovation and growth. At the same time, limited government intervention results in limited opportunity to address the diverse harms facing internet users and competing businesses.
Ninth Circuit Says NSA's Bulk Phone Records Collection Was Illegal, Most Likely Unconstitutional
The NSA's bulk phone records collection is dead. It died of exposure. And reform. It was Ed Snowden's first leak back in 2013. A few years later, a reform bill prompted by Snowden's leaks revamped the program, forcing the NSA to tailor its requests for phone records from telcos. The NSA used to collect everything and sort through it at its leisure. But once the program eliminated the "bulk" from the NSA's bulk collection, the NSA couldn't figure out how to obtain records without getting more than it was legally allowed to take.

This recent courtroom win may have come a bit too late to matter much. But it's still a big win. In a case involving material support for terrorists by Somali citizens living in the United States, the Ninth Circuit Court of Appeals has arrived at the conclusion that the NSA's bulk phone records collection is/was illegal.

Here's the short summary from the court [PDF]:
Daily Deal: The Notion App Course
Notion is an all-in-one workspace for organizing your life. You can use it for managing tasks, studying, projects, notes, hobbies, and life goals. The Notion App Course will show you how to become more focused, organized, and productive using the Notion app. It also includes links to templates on life planning and getting things done that you can clone and personalize. It's on sale for $29.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Copia Institute's Comment To The FCC Regarding The Ridiculous NTIA Petition To Reinterpret Section 230
In his post, Mike called the NTIA petition for the FCC to change the enforceable language of Section 230 laughable. Earlier I called it execrable. There is absolutely nothing redeeming about it, or Trump's Executive Order that precipitated it, and it has turned into an enormous waste of time for everyone who cares about preserving speech on the Internet, because it meant we all had to file comments to create the public record that might stop this trainwreck from causing even more damage.

Mike's post discusses his comment. He wrote it from the standpoint of a small businessman and owner of a media website that depends on Section 230 to enable its comment section and to help spread its posts around the Internet, and he took on the myth that content moderation is something that should inspire a regulatory reaction.

I also filed one, on behalf of the Copia Institute, consistent with the other advocacy we've done, including on Section 230. It was a challenge to draft; the NTIA petition is 57 pages of ignorance about the purpose and operation of the statute. There was so much to take issue with that it was hard to pick what to focus on. But among the many misstatements, the most egregious was its declaration on page 14 that:
AT&T Is Astroturfing The FCC In Support Of Trump's Dumb Attack On Social Media
We've noted for a long time that telecom giants like Comcast and AT&T have been pushing (quite successfully) for massive deregulation of their own monopolies, while pushing for significant new regulation of the Silicon Valley giants whose ad revenues they've coveted for decades. As such, it wasn't surprising to see AT&T come out with an incredibly dumb blog post this week supporting Trump's legally dubious and hugely problematic executive order targeting social media giants. You know, the plan that not only isn't enforceable by the agency supposedly tasked with enforcing it (the FCC), but that also risks creating a massive new censorship paradigm across the entire internet.

As Mike already noted, AT&T's post was a pile of bad faith nonsense, weirdly conflating net neutrality with the ham-fisted attack on Section 230. AT&T just got done deriding the FCC's relatively modest net neutrality rules as "government interference in the internet run amok." Yet here it is, advocating for a terrible plan that attempts to shovel the FCC into the role of regulating speech on social media, authority it simply doesn't have. For those who tracked the net neutrality fight, the intellectual calisthenics required here by folks like AT&T and its favorite FCC officials have been stunning, even for Trumpland:
Academic Study Says Open Source Has Peaked: But Why?
Open source runs the world. That's true for supercomputers, where Linux powers all of the top 500 machines in the world; for smartphones, where Android has a global market share of around 75%; and for everything in between, as Wired points out:
Animal Crossing Continues To Be An Innovative Playground As Biden Campaign Begins Advertising On It
For nearly half a year now, especially since this damned pandemic really took off, we've been bringing you the occasional story of how Nintendo's Animal Crossing keeps popping up with folks finding innovative ways to use the game as a platform. Protesters advocating for freedom in Hong Kong gathered in the game. Sidelined reality show stars took to the game to ply their trade. Very real people enduring very real layoffs used the game's currency as a method for making very real money. As someone who has never played the game, the picture I'm left with is of a game that is both inherently malleable to what you want to do within it and immensely social in nature.

So perhaps it was only a matter of time before one of the major Presidential candidates got involved.
Content Moderation Case Study: Amazon Alters Publishing Rules To Deter Kindle Unlimited Scammers (April 2016)
Summary: In July 2014, Amazon announced its "Netflix, but for ebooks" service, Kindle Unlimited. Kindle Unlimited allowed readers access to hundreds of thousands of ebooks for a flat rate of $9.99/month.

Amazon paid authors from a subscriber fee pool. Authors were paid per page read by readers -- a system that was meant to reward more popular writers with a larger share of the Kindle Unlimited payment pool.

This system was abused by scammers once it became clear Amazon wasn't spying on Kindle users to ensure books were actually being read -- i.e., keeping track of time spent on pages of text by readers or total amount of time spent reading. Since Amazon had no way to verify whether readers were actually reading the content, scammers deployed a variety of tricks to increase their unearned earnings.

Part of the scam relied on Amazon's willingness to pay authors for partially-read books. If only 100 pages of a 500-page book were read, the author still got credit for the 100 pages read by an Unlimited user. Scammers inflated "pages read" counts by moving the table of contents to the end of the book or offering dozens of different languages in the same ebook, relying on readers skipping hundreds of pages into the ebook to access the most popular translation. Other scammers offered readers chances to win free products and gift cards via hyperlinks that brought readers to the end of the scammers' ebooks -- books that sometimes contained thousands of pages.

The other part of the scam equation was Amazon's hands-off approach to self-publishing. Amazon has opened its platform and appears to do very little to police the content of ebooks, other than requiring authors to follow certain formatting rules.
Amazon is neither a publisher nor an editor, which has created a market for algorithmically-generated content as well as a home for writers seeking a distribution outlet for their bigoted and hateful writing.

Once Amazon realized the payout system was being gamed, it altered the way Kindle Unlimited operated. It began removing scammers, notifying authors and customers that it was doing this in response to Unlimited readers' complaints.
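The pool-based "pages read" payout described above can be sketched in a few lines. This is a minimal illustration with invented author names and numbers -- Amazon's actual payout formula and figures are not public in this detail -- but it shows why a 3,000-page "book" with a link near the end could drain the pool:

```python
# Hypothetical sketch of a pool-based "pages read" payout, as described
# in the case study. Author names and all numbers are illustrative
# assumptions, not Amazon's real data.

def kenp_payouts(pool, pages_read_by_author):
    """Split a monthly subscriber pool proportionally to pages read."""
    total = sum(pages_read_by_author.values())
    if total == 0:
        return {author: 0.0 for author in pages_read_by_author}
    return {author: pool * pages / total
            for author, pages in pages_read_by_author.items()}

# A reader who opens a 3,000-page ebook and clicks a hyperlink near the
# end registers thousands of "pages read," dwarfing genuine reading.
honest = {"novelist": 120_000, "scammer": 0}
gamed  = {"novelist": 120_000, "scammer": 900_000}

print(kenp_payouts(1_000_000, honest))
print(kenp_payouts(1_000_000, gamed))
```

The design flaw is visible in the arithmetic: because payouts are purely proportional to pages reported as read, every fraudulent "page" comes directly out of honest authors' share of the fixed pool.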
Techdirt Podcast Episode 254: Does Amazon Really Have A Data Advantage?
There's a lot of talk about tech companies and antitrust these days, and a great deal of the focus falls on Amazon. But is antitrust law really the right approach, or even capable of achieving the results many people want? This week, we're focusing on one specific complaint that comes up a lot, about Amazon being both a marketplace and a seller in that marketplace and gaining various advantages including, supposedly, from the data it has access to. We're joined by Greg Mercer, founder and CEO of Jungle Scout, to talk about whether Amazon really has a data advantage, and how much it really matters.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Content Moderation Best Practices for Startups
To say content moderation has become a hot topic over the past few years would be an understatement. The conversation has quickly shifted from how to best deal with pesky trolls and spammers — straight into the world of intensely serious topics like genocide and destabilization of democracies.

While this discussion often centers around global platforms like Facebook and Twitter, even the smallest of communities can struggle with content moderation. Just a limited number of toxic members can have an outsize effect on a community’s behavioral norms.

That’s why the issue of content moderation needs to be treated as a priority for all digital communities, large and small. As evidenced by its leap from lower-order concern to front-page news, content moderation is deserving of more attention and care than most are giving it today. As I see it, it’s a first-class engineering problem that calls for a first-class solution. In practical terms, that means providing:
My Comment To The FCC Regarding The Ridiculous NTIA Petition To Reinterpret Section 230
Today is the due date for the first round of submissions to the FCC's comment period on the NTIA's laughable petition, which asks the agency to reinterpret Section 230 in response to the President's temper tantrum about Twitter fact checking him. This is clearly outside of the FCC's regulatory authority, but it has caved and pandered to the President by calling for comments anyway.

There are a ton of individuals and organizations commenting on why nearly everything around this is unconstitutional and/or outside the FCC's legal authority. The Copia Institute is filing a comment along those lines written by Cathy Gellis, and she'll have a post about that later. However, I wanted to file a separate comment from my own personal perspective about Section 230 and the nature of running a small media website that relies heavily on its protections. Because beyond the various filings from lawyers about this or that specific aspect of the law or Constitutional authority, it appeared that there was little discussion of just how illiterate the NTIA petition is concerning how content moderation works. And, tragically, many of the early filers on the docket were people who were screaming that because some content of theirs had been moderated by a social media company, the FCC must neuter Section 230.

The key part of my comment is to reinforce the idea that content moderation is impossible to do well: you will always have some people who disagree with the results, and there will also be many "mistakes," because that's the nature of content moderation. It is not evidence of bias or censorship.
And, indeed, as my comment highlights, changing Section 230 to try to deal with these fake problems is only likely to lead to the suppression of more speech and the shrinking of the open internet.

Also, I talk about the time I wasn't kicked out of a lunch where I sat next to FCC Chairman Ajit Pai.

You can read my entire comment below. If you would like to file a comment on the proceedings and have not yet done so, while the initial round of comments is due today, there is a second round for "responding" to comments made in the first round, which runs through September 17th.
Daily Deal: The Ultimate Artificial Intelligence Scientist Bundle
The Ultimate Artificial Intelligence Scientist Bundle consists of four courses covering Python, TensorFlow, machine learning, and deep learning. You will learn about complex theories, algorithms, coding libraries, Artificial Neural Networks, and Self-Organizing Maps. You'll also learn about the core principles of programming, data validation, automatic dataset preprocessing, and more. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Court Tosses Surreptitious Video Recordings Holding Together Sketchy 'Human Trafficking' Investigation
In early 2019, law enforcement in Florida wrapped up a supposed "human trafficking" sting centering on Florida spas and massage parlors. By the time prosecutors and cops were done congratulating themselves for helping purge Florida of human trafficking, they appeared to have little more than about 150 bog-standard solicitation and prostitution arrests.

But they did land a big fish. Robert Kraft -- the owner of the New England Patriots -- was one of the spa customers caught up in the sting. That was the biggest news. Evidence of actual trafficking never appeared, leaving law enforcement with a big name, a bunch of low-level arrests, and little else.

What little law enforcement and prosecutors did have is now gone as well. Upholding a lower court's decision on video evidence captured by hidden cameras, a Florida state appeals court says everything captured on the government's secret cameras was illegally obtained. (via FourthAmendment.com)

This conclusion was reached even though investigators obtained warrants for the cameras. Here's the backstory on the video recordings, taken from the decision [PDF]:
Trump Wants To Replace FTC Chair Whom He Can't Replace, Because The FTC Is Reluctant To Go After Trump's Social Media Enemies
A few weeks back we wrote about how FTC chair Joe Simons -- while bizarrely complaining about Section 230 blocking his investigations, despite it never actually doing that -- was actually willing to say that Trump's executive order on social media was nonsense (though not in those words). While the FCC caved and moved forward with its nonsense exploration of Section 230, the FTC has done nothing, because there's nothing for it to actually do.

And apparently our narcissist in chief is upset about that. Politico reports that the White House has been interviewing possible replacements for Simons because they want someone who will punish Trump's mythical list of enemies among social media companies (even as those companies have bent over backwards to accommodate his nonsense):
Tinpot Administration Is Apparently 'Building Dossiers' On Journalists Who Criticize Trump
President Trump openly admires authoritarians. It appears he believes he was being elected dictator rather than president, and he has been openly bitter about his perceived lack of power ever since. The world leaders he enjoys talking to most -- Vladimir Putin, Mohammad bin Salman, Recep Erdogan -- are all notorious thugs who punish critics, dissidents, and anyone else who steps a little out of line.

Trump envies that power. He spends most of his phone time trying to impress a collection of international asshats. And he embarrasses himself (and us by proxy) when speaking about his favorite shitheels in public. Just recently, Trump spent part of his meeting with an American pastor freed from a Turkish prison praising the man who had put him there.
Federal Court Temporarily Extends The NYPD's Famous Opacity, Blocks Release Of Misconduct Records
The NYPD barely bothers to punish officers who misbehave. This "misbehavior" often includes violations of rights and extrajudicial killings, but it appears the NYPD feels New York's "finest" should be above reproach. Consequently, NYPD internal investigations often conclude no officers should be reproached, allowing them to remain the "finest" even when they're really the worst.

A new wrinkle in the law fabric might change that. After years of doing nothing (and after years of the NYPD never bothering to invoke the law), the state repealed "50a," the statute that allowed the NYPD to withhold misconduct records from the public. For several years, the NYPD posted the outcome of internal investigations. Then it decided it was no longer going to do that. First, it blamed the high cost of printer ink. Then it cited the law that allowed it to stop posting reports where the press could access them.

Lawsuits followed. And -- as is the case whenever law enforcement opacity is threatened -- the NYPD's unions have intervened. It was too little too late. An injunction was sought and obtained, but ProPublica -- which wasn't a party to the lawsuit over 50a records -- published what it had already received from the NYPD. But the battle continues because future opacity is at stake. Unfortunately, a federal court has decided opacity must win out for the moment.
SafeSpeed Executive Charged With Bribing Cook County Officials For Red Light Camera Contracts
In January of this year, we discussed how the Illinois Comptroller had decided to opt out of collecting red light camera fees for motorists ticketed by these automated revenue generators. Susana Mendoza said in a statement that while her office was taking this action due to the feds investigating the contractor for the cameras, a company called SafeSpeed, it was also her position that red light cameras were revenue generators with little efficacy at improving public safety.

All very true... but about that federal investigation.
ACLU Sues Federal Officers Over Excessive Force Deployed Against Portland Protesters
The Trump Administration's decision to send federal agents -- led by the DHS -- to Portland, Oregon to handle civil unrest (prompted by yet another killing of an unarmed Black man by a white police officer) continues to generate litigation.

Supposedly sent to protect federal buildings targeted by Portland protesters, the DHS task force -- composed of CBP, ICE, and FPS officers -- rolled into Portland Gestapo-style, sending out unidentified officers to toss people into unmarked vehicles, spiriting them away to undisclosed locations to be subjected to detainments and interrogations that were never documented.

The DHS task force redefined riot police to include rioting federal police. Officers attacked press and legal observers with the same enthusiasm they attacked protesters with. Local journalists sued, obtaining a restraining order against federal agents… one the federal agents immediately violated.

Another lawsuit has been filed, this one accusing the DHS task force of violating the rights of protesters. The ACLU -- along with a number of other plaintiffs, including the "Black Millennial Movement" -- claims federal officers are deploying excessive force and engaging in unlawful detainments of participants in the ongoing Portland protests.

The complaint [PDF] opens up with a nice little dig at the Administration's unwillingness to properly staff its departments, reminding the court (and readers) that the DHS still doesn't have a legally appointed director.
Supreme Court To Courts And Federal Agencies Trying To Rewrite Section 230: Knock It Off
A version of this post appeared on Project Disco: What the Bostock Decision Teaches About Section 230.

Earlier this summer, in Bostock v. Clayton County, Ga., the Supreme Court voted 6-3 in favor of an interpretation of Title VII of the Civil Rights Act that bars discrimination against LGBT people. The result is significant, but what is also significant – and relevant for this discussion here – is the analysis the court used to get there.

What six justices ultimately signed onto was a decision that made clear that when a statute is interpreted, that interpretation needs to be predicated on what the statutory language actually says, not what courts might think it should say.
Fake 'Russian Hack' Of Public Michigan Voter Rolls Gets Absurdly Overhyped On The Interwebs
On Tuesday morning a story began making the rounds indicating that Russian hackers had somehow managed to hack into Michigan's election systems, gaining access to a treasure trove of voter data. Russian newspaper Kommersant was quick to proclaim that nearly every voter in Michigan -- and a number of voters in additional states -- had had their personal information compromised. The report was quickly parroted by other outlets including the Riga-based online newspaper Meduza, which insisted that the breach was simply massive:
Daily Deal: The Prestige Adobe Suite UI/UX Bundle
The Prestige Adobe Suite UI/UX Bundle will help you expand your design skills with over 100 hours of content on essential Adobe Suite programs. Courses cover Adobe XD, Photoshop, After Effects, Premiere Pro, HTML5, and CSS3. You'll learn how to build professional responsive websites, how to edit videos, how to animate your UI design, and much more. The bundle is on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Facebook Says It Will Block News Sharing In Australia If Murdoch's Social Media Tax Becomes Law
Earlier this year, regulators in Australia announced plans to tax Google and Facebook for sending traffic to news organizations, and then hand that money to those news organizations. The draft law literally names Google and Facebook and says that the law only impacts those two companies. The whole thing is bizarre. There are no pretenses here. It's just that old line media companies (many owned by Rupert Murdoch) are jealous of the success of Google and Facebook online, and seem to think they're magically owed money. And that's what the tax would do. It would force Google and Facebook to pay money for the awful crime of sending traffic to news sites without paying them.

Nevermind that if they didn't want this traffic, the news sites could use robots.txt to block it. Nevermind that companies (including many of these media companies) hire SEO and social media experts to try to get more traffic. These companies feel so entitled to money that they believe Facebook and Google need to pay them for sending traffic, just because. And Australian regulators seem to think this is a grand idea.

A few weeks back, Google posted an open letter to Australians noting that this would do a lot more harm than good, and that other parts of the draft law would damage the quality of Google's search results (among other things, the law wouldn't let Google make changes to its search algorithms without giving media companies a four-week notice, which is insane, given that Google tweaks its algorithm multiple times a day).

Now Facebook has gone even further, and outright said that if this becomes law, it will no longer allow publishers to share news on its platform in Australia. This is the nuclear option -- similar to what Google did in Spain six years ago when Spain passed a similar law. In that case, Google waited until after the law went into effect to make the announcement and pull the plug. In this case, Facebook is firing a warning shot by saying that's exactly what it will do if this draft bill becomes law:
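The robots.txt option mentioned above really is that simple. A news site that genuinely didn't want this traffic could publish something like the following at its domain root (Googlebot and facebookexternalhit are the companies' real crawler user-agents; blocking them is shown purely for illustration, since no publisher actually wants this):

```
# Hypothetical robots.txt for a news site refusing crawler traffic
# from Google and Facebook. Illustrative only.
User-agent: Googlebot
Disallow: /

User-agent: facebookexternalhit
Disallow: /
```

That no major Australian publisher does this is the tell: the traffic is valuable, which is why they pay SEO experts to attract more of it.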
AT&T Now Trying To Ditch DirecTV After Bungled Merger Spree
It wasn't supposed to go this way.

AT&T purchased DirecTV in 2015 for $67.1 billion (including debt). The company then gobbled up Time Warner in 2018 for a cool $86 billion. Together, these deals were supposed to cement AT&T as a dominant player in the video advertising wars to come. Instead, they created a convoluted mess that resulted in a mass exodus of pay TV subscribers. In fact, a combination of bungled integration, massive debt, price hikes, and confusing branding has resulted in AT&T losing 7 million subscribers since 2018. That's obviously not the kind of M&A-fueled sector domination AT&T executives like Randall Stephenson (since "retired") envisioned.

Now AT&T is reportedly trying to offload DirecTV entirely:
If A College Is Going To Make COVID-19 Contact Tracing Apps Mandatory, They Should At Least Be Secure
One of the more frustrating aspects of the ongoing COVID-19 pandemic has been the frankly haphazard manner in which too many folks are tossing around ideas for bringing it all under control without fully thinking things through. I'm as guilty of this as anyone, desperate as I am for life to return to normal. "Give me the option to get a vaccine candidate even though it's in phase 3 trials," I have found myself saying more than once, each time immediately realizing how stupid and selfish it would be to not let the scientific community do its work and do it right. Challenge trials, some people say, should be considered. There's a reason we don't do that, actually.

And contact tracing. While contact tracing can be a key part of siloing the spread of a virus as infectious as COVID-19, how we contact trace is immensely important. Like many problems we encounter these days, there is this sense that we should just throw technology at the problem. We can contact trace through our connected phones, after all. Except there are privacy concerns. We can use dedicated apps on our phones for this as well, except this is all happening so fast that it's a damn-near certainty that there are going to be mistakes made in those apps.

This is what Albion College in Michigan found out recently. Albion told students two weeks prior to on-campus classes resuming that they would be required to use Aura, a contact tracing app. The app collects a ton of real-time and personal data on students in order to pull off the tracing.
Appeals Court Says Not Allowing Federal Officers To Pepper Spray Journalists Makes Law Enforcement Too Difficult
The Ninth Circuit Appeals Court has just stripped away the protections granted to journalists and legal observers covering ongoing protests in Portland, Oregon. After journalists secured an agreement from local police to stop assaulting journalists and make them exempt from dispersal orders, the DHS's ad hoc riot control force (composed of CBP, ICE, and Federal Protective Services) showed up and started tossing people into unmarked vans and assaulting pretty much everyone, no matter what credentials they displayed. Shortly after that, a federal court in Oregon granted a restraining order forbidding federal agents from attacking journalists and observers.

Not that granting the restraining order did much to prevent federal officers from beating journalists with batons, spraying them with pepper spray, or making sure they weren't left out of any tear gassings. The plaintiffs were soon back in court seeking sanctions against federal violators of the order. The DHS said it couldn't identify any of the officers and stated it had punished no one for violating the order. This prompted the judge to add more stipulations to the order, including the wearing of identification numbers by officers engaging in riot control.

Unfortunately for journalists and legal observers, the restraining order is no longer in place. It was rolled back by the Appeals Court in a very short order [PDF], with the court finding that a blanket order protecting journalists and observers from being assaulted makes things too tough for federal cops. (via Courthouse News)
A Paean To Transparency Reports
One of the ideas that comes up a lot in proposals to change Section 230 is that Internet platforms should be required to produce transparency reports. The PACT Act, for instance, includes the requirement that they "[implement] a quarterly reporting requirement for online platforms that includes disaggregated statistics on content that has been removed, demonetized, or deprioritized." And the execrable NTIA FCC petition includes the demand that the FCC "[m]andate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers."
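For concreteness, here is one hypothetical shape such a "disaggregated statistics" report could take. This is a sketch only -- neither the PACT Act nor the NTIA petition prescribes a schema, and every field name and count below is invented:

```python
# Hypothetical quarterly transparency report of the kind the PACT Act
# would require. All field names and numbers are illustrative
# assumptions, not the bill's actual schema.
import json

report = {
    "quarter": "2020-Q3",
    "actions": {
        "removed":       {"hate_speech": 1200, "spam": 45000, "copyright": 300},
        "demonetized":   {"borderline_content": 800},
        "deprioritized": {"misinformation": 2600},
    },
}

# Totals per action type, so readers can compare categories at a glance.
totals = {action: sum(counts.values())
          for action, counts in report["actions"].items()}
print(json.dumps(totals, indent=2))
```

Even a minimal structure like this raises the questions the proposals leave open: which categories must be broken out, at what granularity, and who audits the counts.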
Fighting Hate Speech Online Means Keeping Section 230, Not Burying It
At Free Press, we work in coalition and on campaigns to reduce the proliferation of hate speech, harassment, and disinformation on the internet. It’s certainly not an easy or uncomplicated job. Yet this work is vital if we’re going to protect the democracy we have and also make it real for everyone — remedying the inequity and exclusion caused by systemic racism and other centuries-old harms seamlessly transplanted online today.

Politicians across the political spectrum desperate to “do something” about the unchecked political and economic power of online platforms like Google and Facebook have taken aim at Section 230, passed in 1996 as part of the Communications Decency Act. Changing or even eliminating this landmark provision appeals to many Republicans and Democrats in DC right now, even if they hope for diametrically opposed outcomes.

People on the left typically want internet platforms to bear more responsibility for dangerous third-party content and to take down more of it, while people on the right typically want platforms to take down less. Or at least less of what’s sometimes described as “conservative” viewpoints, which too often in the Trump era has been unvarnished white supremacy and unhinged conspiracy theories.

Free Press certainly aligns with those who demand that platforms do more to combat hate and disinformation. Yet we know that keeping Section 230, rather than radically altering it, is the way to encourage that. That may sound counter-intuitive, but only because of the confused conversation about this law in recent years.

Preserving Section 230 is key to preserving free expression on the internet, and to making it free for all, not just for the privileged.
Section 230 lowers barriers for people to post their ideas online, but it also lowers barriers to the content moderation choices that platforms have the right to make. Changes to Section 230, if any, have to retain this balance and preserve the principle that interactive computer services are legally liable for their own bad acts but not for everything their users do in real time and at scale.

Powerful Platforms Are Still Powering Hate, and Only Slowly Changing Their Ways

Online content platforms like Facebook, Twitter and YouTube are omnipresent. Their global power has resulted in privacy violations, facilitated civil rights abuses, provided white supremacists and other violent groups a place to organize, and enabled foreign-election interference and the viral spread of disinformation, hate and harassment.

In the last few months some of these platforms have begun to address their role in the proliferation and amplification of racism and bigotry. Twitter recently updated its policies by banning links on Twitter to hateful content that resides offsite. That resulted in the de-platforming of David Duke, who had systematically skirted Twitter’s rules by linking to hateful content across the internet while following some limits for what he said on Twitter itself.

Reddit also updated its policies on hate and removed several subreddits. Facebook restricted “boogaloo” and QAnon groups. YouTube banned several white supremacist accounts. Yet despite these changes and our years of campaigning for these kinds of shifts, hate still thrives on these platforms and others.

Some in Congress and on the campaign trail have proposed legislation to rein in these companies by changing Section 230, which shields platforms and other websites from legal liability for the material their users post online.
That’s coming from those who want to see powerful social networks held more accountable for third-party content on their services, but also from those who want social networks to moderate less and be more “neutral.”

Taking away Section 230 protections would alter the business models of not just big platforms but every site with user-generated material. And modifying or even getting rid of these protections would not solve the problems often cited by members of Congress who are rightly focused on racial justice and human rights. In fact, improper changes to the law would make these problems worse.

That doesn’t make Section 230 sacrosanct, but the dance between the First Amendment, a platform’s typical immunity for publishing third-party speech, and that same platform’s full responsibility for its own actions, is a complex one. Any changes proposed to Section 230 should be made deliberately and delicately, recognizing that amendments can have consequences not only unintended by their proponents but harmful to their cause.

Revisionist History on Section 230 Can’t Change the Law’s Origins or Its Vitality

To follow this dance it’s important to know exactly what Section 230 is and what it does. Written in the early web era in 1996, the first operative provision in Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

When a book or a newspaper goes to print, its publisher is legally responsible for all the words printed. If those words are plagiarized, libelous, or unlawful, then that publisher may face legal repercussions. In the terms of Section 230, they are the law’s “information content provider[s]”.

Wiping away Section 230 could revert the legal landscape to the pre-1996 status quo. That’s not a good thing.
At the time, a pair of legal decisions had put into a bind any “interactive computer service” that merely hosts or transmits content for others. One case held that a web platform that did moderate content could be sued for libel (just as the original speaker or poster could be) if that alleged libel slipped by the platform’s moderators. The other case held that sites that did not moderate were not exposed to such liability.

Before Section 230 became law, this pair of decisions meant websites were incentivized to go in one of two directions: either don’t moderate at all, tolerating not just off-topic comments but all kinds of hate speech, defamation, and harassment on their sites; or vet every single post, leading inexorably to massive takedowns and removal of anything that might plausibly subject them to liability for statements made by their users.

The authors of Section 230 wanted to encourage the owners of websites and other interactive computer services to curate content on their websites as these sites themselves saw fit. But back then, that meant those websites could be just as responsible as newspapers for anything anyone said on their platforms if they moderated at all.

In that state of affairs, someone like Mark Zuckerberg or Jack Dorsey would have the legal responsibility to approve every single post made on their services. Alternatively, they would have needed to take a complete hands-off approach. The overwhelming likelihood is that under a publisher-liability standard those sites would not exist at all, at least not in anything like their present form.

There’s an awful lot we’d be throwing out with the bathwater if we attack not just the abuses of ad-supported and privacy-invasive social-media giants but all sites that allow users to share content on platforms they don’t own.
Smaller sites likely couldn’t make a go of it at all, even if a behemoth like Facebook or YouTube could attempt the monumental task of bracing for potential lawsuits over the thousands of posts made every second of the day by their billions of users. Only the most vetted, sanitized, and anodyne discussions could take place in whatever became of social media. Or, at the other extreme, social media would descend into an unfiltered and toxic cesspool of spam, fraudulent solicitations, porn, and hate.

Section 230’s authors struck a balance for interactive computer services that carry other people’s speech: platforms should have very little liability for third-party content, except when it violates federal criminal law or intellectual property law.

As a result, websites of all sizes exist across the internet. A truly countless number of these — like Techdirt itself — have comments or content created by someone other than the owner of the website. The law preserved the ability of those websites, regardless of their size, to tend to their own gardens and set standards for the kinds of discourse they allow on their property without having to vet and vouch for every single comment.

That was the promise of Section 230, and it’s one worth keeping today: an online environment where different platforms would try to attract different audiences with varying content moderation schemes that favored different kinds of discussions.

But we must acknowledge where the bargain has failed, too. Section 230 is necessary but not sufficient to make competing sites and viewpoints viable online. We also need open internet protections, privacy laws, antitrust enforcement, new models for funding quality journalism in the online ecosystem, and lots more.

Taking Section 230 off the books isn’t a panacea or a pathway to all of those laudable ends. Just the opposite, in fact.

We Can’t Use Torts or Criminal Law to Curb Conduct That Isn’t Tortious or Criminal

Hate and unlawful activity still flourish online.
A platform like Facebook hasn’t done enough yet, in response to activist pressure or advertiser boycotts, to further modify its policies or consistently enforce existing terms of service that ban such hateful content.

There are real harms that lawmakers and advocates see when it comes to these issues. It’s not just an academic question about liability for carrying third-party content. It’s a life-and-death issue when the information in question incites violence, facilitates oppression, excludes people from opportunities, threatens the integrity of our democracy and elections, or threatens our health in a country dealing so poorly with a pandemic.

Should online platforms be able to plead Section 230 if they host fraudulent advertising or revenge porn? Should they avoid responsibility for facilitating either online or real-world harassment campaigns? Or use 230 to shield themselves from responsibility for their own conduct, products, or speech?

Those are all fair questions, and at Free Press we’re listening to thoughtful proposed remedies. For instance, Professor Spencer Overton has argued forcefully that Section 230 does not exempt social-media platforms from civil rights laws for targeted ads that violate voting rights and perpetuate discrimination.

Sens. John Thune and Brian Schatz have steered away from a takedown regime like the automated one that applies to copyright disputes online, and towards a more deliberative process that could make platforms remove content once they get a court order directing them to do so. This would make platforms more like distributors than publishers, like a bookstore that’s not liable for what it sells until it gets formal notice to remove offending content.

However, not all amendments proposed or passed in recent times have been so thoughtful, in our view. Changes to 230 must take the possibility of unintended consequences and overreach into account, no matter how surgical proponents may think an amendment would be.
Recent legislation shows the need for clearly articulated guardrails. In an understandable attempt to cut down on sex trafficking, a law commonly known as FOSTA (the “Allow States and Victims to Fight Online Sex Trafficking Act”) changed Section 230 to make websites liable under state criminal law for the knowing “promotion or facilitation of prostitution.”

FOSTA and the state laws it ties into did not precisely define what those terms meant, nor did they set the level of culpability for sites that unknowingly or negligently host such content. As a result, sites used by sex workers to share information about clients, or even used for discussions about LGBTQIA+ topics having nothing to do with solicitation, were shuttered.

So FOSTA not only chilled lawful speech but also made sex workers less safe and the industry less accountable, harming some of the people the law’s authors fervently hoped to protect. This was the judgment of advocacy groups like the ACLU that opposed FOSTA all along, but also of academics who support changes to Section 230 yet concluded FOSTA’s final product was “confusing” and not “executed artfully.”

That kind of confusion and poor execution is possible even when some of the targeted conduct and content is clearly unlawful. But rewriting Section 230 to facilitate the takedown of hate speech that is not currently unlawful would be even trickier and fundamentally incoherent. Saying platforms ought to be liable for speech and conduct that would not expose the original speaker to liability would have a chilling impact, and likely still wouldn’t lead to sites making consistent choices about what to take down.

The Section 230 debate ought to be about when it’s appropriate or beneficial to impose legal liability on parties hosting the speech of others. Perhaps there should be a broader debate about the legal limits of speech itself.
But that has to happen honestly and on its own terms, not get shoehorned into the 230 debate.

Section 230 Lets Platforms Choose To Take Down Hate

Platforms still aren’t doing enough to stop hate, but what they are doing is in large part thanks to having 230 in place.

The second operative provision in the statute is what Donald Trump, several Republicans in Congress, and at least one Republican FCC commissioner are targeting right now. It says “interactive computer services” can “in good faith” take down content not only if it is harassing, obscene or violent, but even if it is “otherwise objectionable” and “constitutionally protected.”

That’s what much hate speech is, at least under current law. And platforms can take it down thanks not only to their own constitutionally protected right to curate, but because Section 230 lets them moderate without exposing themselves to publisher liability as the pre-1996 cases suggested.

That gives platforms a freer hand to moderate their services. It lets Free Press and its partners demand that platforms enforce their own rules against the dissemination of hateful or otherwise objectionable content that isn’t unlawful, but without tempting platforms to block a broader swath of political speech and dissent up front.

Tackling the spread of online hate will require a flexible, multi-pronged approach that includes the policies recommended by Change the Terms, campaigns like Stop Hate for Profit, and other initiatives. Platforms implementing clearer policies, enforcing them equitably, enhancing transparency, and regularly auditing recommendation algorithms are among these much-needed changes.

But changing Section 230 alone won’t answer every question about hate speech, let alone about online business models that suck up personal information to feed algorithms, ads, and attention. We need to change those through privacy legislation.
We need to fund new business models too, and we need to facilitate competition between platforms on open broadband networks.

We need to make huge corporations more accountable by limiting their acquisition of new firms, changing stock voting rules so people like Mark Zuckerberg aren't the sole emperors of these vastly powerful companies, and giving shareholders and workers more rights to ensure that companies are operated not just to maximize revenue but in socially responsible ways as well.

Preserving not just the spirit but the basic structure of Section 230 isn’t an impediment to that effort; it’s a key part of it.

Gaurav Laroia and Carmen Scurato are both Senior Policy Counsel at Free Press.
Hypocritical AT&T Makes A Mockery Of Itself; Says 230 Should Be Reformed For Real Net Neutrality
I regret to inform you that AT&T is at it again. For over a decade now, the company has had a weird infatuation with Google. It seems to truly hate Google, and has long decided that anything bad for Google must be good for AT&T. Because Google was an early supporter of net neutrality -- a concept that AT&T (stupidly and incorrectly) seems to think is an existential threat to its own business plans of coming up with sneaky ways to spy on you and charge you more -- over a decade ago AT&T started floating the lame idea that if it's to be held to "net neutrality," Google ought to be held to "search neutrality." Of course, there's a problem with that: there's no such thing as "search neutrality," because the whole point of search is to rank results for you. A "neutral" search would be a useless search that ranks nothing.

However, now that the FCC (which knows better) has caved in to the bumptious Trump demands to reinterpret Section 230 of the Communications Decency Act, AT&T stupidly (and self-destructively) has decided that it's going to comment against Section 230. AT&T's top lobbyist Joan Marsh put up a truly spectacularly dumb blog post about how this is "the neutrality debate we need to have" (i.e., about Google and Facebook's treatment of content, rather than AT&T's treatment of network connections):
Daily Deal: The Ultimate PMP, Six Sigma, And Minitab Bundle
The Ultimate PMP, Six Sigma, and Minitab Bundle will help you hone your managerial and data analysis skills, which are vital to effective project delivery. The courses cover Six Sigma white, yellow, green, and black belts, as well as graphical tools, control charts, and hypothesis testing in Minitab. There are also three courses on Lean project management. It's on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Collaboration Houses: How Technology & A Pandemic Have Created Entirely New Ways To Go To College
There has been plenty of talk about how technology has impacted how we live during the pandemic, but it's interesting to see how that's playing out beyond the most obvious ways -- including some interesting cultural changes. Over in the NY Times, reporter Taylor Lorenz, who always has her finger on the pulse of the most interesting cultural changes driven by technology, has an article about college collaboration houses. That is, because so many colleges are remaining in distance learning to start the school year thanks to the ongoing pandemic, students are recognizing that not having to be on campus doesn't mean they have to stay at home either:
Yet Another Study Shows U.S. 5G Is Far Slower Than Many Other Nations
Last May, a largely overlooked report by OpenSignal detailed how, despite endless hype, U.S. 5G is notably slower than 5G in most other developed countries. Because U.S. regulators failed to make mid-band spectrum (which offers faster speeds at greater range) widely available, many U.S. wireless carriers like Verizon embraced higher millimeter-wave spectrum (which has trouble with range and building-wall penetration) or low-band spectrum (which offers greater range but notably reduced speeds). The result of the study was fairly obvious:

An updated report by OpenSignal didn't have any better news. According to the wireless network analysis firm, average 5G download speeds in the US are somewhere around 50 Mbps. And while that's certainly nothing to sneeze at, it's a far cry from carrier hype proclaiming 5G is somehow utterly revolutionary, and far from the 200-400 Mbps speeds being seen in many other countries:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, both our winners on the insightful side are anonymous commenters on our post about dismantling the police. In first place, it's some thoughts on where to start:
Get Your First & Fourth Emojiment Face Masks And Other Gear On Threadless
Get your First & Fourth Emojiment gear in the Techdirt store on Threadless »

Earlier this week, we added two of our popular old designs to our line of face masks in the Techdirt store on Threadless: the First and Fourth Amendments, translated into the language of emojis. Both are available as standard and premium masks and in youth sizes, plus all kinds of other gear: t-shirts, hoodies, phone cases, notebooks, buttons, and much more.

And if you haven't in a while, check out the Techdirt store on Threadless to see the other designs we have available, including classic Techdirt logo gear and our most popular design, Nerd Harder. The profits from all our gear help us continue our reporting, and your support is greatly appreciated!