Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2026-01-14 00:17
Content Moderation Case Study: Detecting Sarcasm Is Not Easy (2018)
Summary: Content moderation becomes even more difficult when you realize that there may be additional meaning to words or phrases beyond their most literal reading. One very clear example of that is the use of sarcasm, in which a word or phrase is used either to mean the opposite of its literal sense or as a greatly exaggerated way to express humor.
In March of 2018, facing increasing criticism regarding certain content that was appearing on Twitter, the company did a mass purge of accounts, including many popular accounts that were accused of simply copying and retweeting jokes and memes that others had created. Part of the accusation against those that were shut down was that there was a network of accounts (referred to as “Tweetdeckers” for their use of the Twitter application Tweetdeck) who would agree to mass retweet some of those jokes and memes. Twitter suggested that these retweet brigades were inauthentic and thus banned from the platform.
In the midst of all of these suspensions, however, there was another set of accounts and content suspended, allegedly for talking about “self-harm.” Twitter has policies regarding glorifying self-harm, which it had just updated a few weeks before this new round of bans.
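The core failure mode is easy to reproduce: a literal-minded filter keyed on a phrase list cannot tell a cry for help from a joke. As a hedged illustration (Twitter's actual systems are not public, and the phrase list below is invented), a naive keyword matcher shows how sarcastic or hyperbolic posts get swept up along with literal ones:

```python
# Illustrative only: a naive keyword filter of the kind that cannot
# distinguish literal statements from sarcasm or hyperbole. This is a
# hypothetical sketch, not Twitter's actual moderation logic.
SELF_HARM_PHRASES = ["kill myself", "want to die", "end it all"]

def flags_self_harm(post: str) -> bool:
    """Return True if the post contains any watched phrase, read literally."""
    text = post.lower()
    return any(phrase in text for phrase in SELF_HARM_PHRASES)

posts = [
    "I genuinely want to die and need help",           # true positive
    "this group project makes me want to die lol",     # sarcasm: false positive
    "monday morning meetings... just end it all :P",   # hyperbole: false positive
]

for post in posts:
    print(flags_self_harm(post), "->", post)
```

The same literal reading that catches the first post also catches the jokes, which is the whole difficulty the case study describes.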
FCC Formally Kills Rules That Would Have Brought Competition To The Cable Box
In early 2016, the cable industry quietly launched one of the most misleading and successful lobbying efforts in the industry's history. The target? A plan concocted by the former FCC that would have let customers watch cable TV lineups on third-party hardware. Given the industry makes $21 billion annually in rental fees thanks to its cable box hardware monopoly, the industry got right to work with an absolute wave of disinformation, claiming that the FCC's plan would put consumer data at risk, result in a "piracy apocalypse," and was somehow even racist (it wasn't).
At one point, the industry even managed to grab the help of the US Copyright Office, which falsely claimed that more cable box competition would somehow violate copyright. Of course the plan had nothing to do with copyright, and everything to do with control, exemplifying once again that for the US Copyright Office, public welfare can often be a distant afterthought.
Once in office, the Pai FCC dutifully got to work dismantling the Wheeler-era FCC proposal, a move coordinated with, and justified by, cable providers who promised their own "free market alternatives" would make the proposal irrelevant. More specifically, they promised that you'd be able to order Comcast or Spectrum's cable lineup through an app, making cable boxes irrelevant. But this promised alternative never showed up:
Addison Cain Really Doesn't Want You Watching This Video About Her Attempts To Silence Another Wolf Kink Erotica Author
Way back in May (which feels like a century ago), we wrote about a truly bizarre (but not actually uncommon) story of someone abusing the DMCA to get a competitor's book disappeared. There was a lot of background, but the short version is that an author, who goes by the name Addison Cain (a pen name) and wrote a wolf-kink erotica book in the so-called "Omegaverse" realm (which is, apparently, a genre of writing involving wolf erotica and some tropes about the space), used the DMCA to get a competitor's book using similar tropes taken down. As we noted in our original article, both parties involved did some bad stuff. Cain was clearly abusing the DMCA to take down non-infringing works, while the person she was seeking to silence, going by the name Zoey Ellis (also a pen name), not only filed a (perfectly fine) DMCA 512(f) lawsuit in response, but also a highly questionable defamation lawsuit.
There was a lot of back and forth in that story, but eventually the publisher, Blushing Books, agreed that there was no infringement and worked out some sort of settlement. Cain herself had been dismissed from the case on jurisdiction grounds. Anyway, last week, YouTuber Lindsay Ellis (no relation to Zoey Ellis, which, again, was a pen name) created a truly amazing one-hour video analysis of the Cain/Ellis legal dispute that does a very good job covering many of the gory details of the dispute and how it eventually fizzled out. I know it's an hour of your time that is partly about wolf-kink erotica and partly about copyright law, but I still highly recommend it (though, maybe watch it at 2x speed):
I mostly agree with the legal analysis, though I think she puts too much weight on the idea that a settlement has any precedential value (it doesn't).
It appears that someone who does not want you to watch the video is Addison Cain / Rochelle Soto. Soon after Ellis posted the video, she also posted part of a legal threat she received from a lawyer claiming to represent Cain.
Could A Narrow Reform Of Section 230 Enable Platform Interoperability?
Perhaps the most de rigueur issue in tech policy in 2020 is antitrust. The European Union made market power a significant component of its Digital Services Act consultation, and the United Kingdom released a massive final report detailing competition challenges in digital advertising, search, and social media. In the U.S., the House of Representatives held an historic (virtual) hearing with the CEOs of Amazon, Apple, Facebook, and Google (Alphabet) on the same panel. As soon as the end of this month, the Department of Justice is expected to file a “case of the century” scale antitrust lawsuit against Google. One competition policy issue that I’ve written about extensively is interoperability, and, while we’ve already seen significant proposals to promote interoperability, notably the 2019 ACCESS Act, I want to throw another idea into the hopper: I think Congress should consider amending Section 230 of the Communications Act to condition its immunity for large online intermediaries on the provision of an open, raw feed for independent downstream presentation.
I know, I know. I can almost feel your fingers hovering over that big blue “Tweet” button or the “Leave a Comment” link -- but please, hear me out first.
For those not already aware of (if not completely sick of) the active discussions around it, Section 230, originally passed as part of the Communications Decency Act, is an immunity provision within U.S. law intended to encourage internet services to engage in beneficial content moderation without fearing liability as a consequence of such action. Its central part is famously only 26 words long, so I’ll paste that key text in full: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
I’ll attempt to summarize the political context. Section 230 has come under intense, bipartisan criticism over the past couple of years as a locus of animosity related to a diverse range of concerns with the practices of a few large tech companies in particular. Some argue that the choices made by platform operators are biased against conservatives; others argue that the platforms aren’t responsible enough and aren’t held sufficiently accountable. The support for amending Section 230 is substantial, although it is far from universal. The current President has issued an executive order seeking to catalyze change in the law, and the Democratic nominee has in the past bluntly called for it to be revoked. Members of Congress have introduced several bills that touch Section 230 (after the passage of one such bill, FOSTA-SESTA, in 2018), such as the EARN IT Act, which would push internet companies to do more to respond to online child exploitation, to the point of undermining secure encryption. A perhaps more on-point proposal is the PACT Act, which focuses on specific platform content practices; I’ve called it the best starting point for Section 230 reform discussions.
Why is this one, short section of law so frequently used as a political punching bag? The attention goes beyond its hard law significance, revealing a deeper resonance in the modern-day notion of “publishing”. I believe this law in particular is amplified because the centralization and siloing of our internet experience has produced a widespread feeling (or reality) of a lack of meaningful user agency.
By definition, social media is a business of taking human input (user generated content) and packaging it to produce output for humans, doubling the poignancy of human agency in some sense. The user agency gap spills over from the realm of competition, making it hard to evaluate content liability and privacy harms as entirely independent issues. In so many ways, the internet ecosystem is built on the idea of consumer mobility and freedom; also in so very many ways, that idea is bankrupt today.
Yet debating whether online intermediaries for user content are “platforms” or “publishers” is a distraction. A more meaningful articulation of the underlying problem, I believe, is to say that we end users are unable to sufficiently customize the way in which content is presented to us because we are locked into a single experience.
Services like Facebook and YouTube operate powerful recommendation engines that are designed to sift through vast amounts of potentially-desirable content and present the user with what they most value. These recommendations are based on individual contextual factors, such as what the user has been watching, and on broader signals of desirability, such as engagement from other users. As many critics allege, the underlying business model of these companies benefits by keeping users as engaged as possible, spending as much time on the platform as possible. That means recommending content that gets high engagement, even though human behavior doesn’t equate positive social value with high engagement (that’s the understatement of the day, there!).
One of the interesting technical questions is how to design such systems to make them “better” from a social perspective. It’s the subject of academic research, in addition to ample industry investment. I’ve given YouTube credit in the past for offering some amount of transparency into changes it’s making (and the effects of those changes) to improve the social value of its recommendations, although I believe making that transparency more collaborative and systematic would help immensely. (I plan to expand on that in my next post!)
Recommendation engines remain by and large black boxes to the outside world, including the users who receive their output. No matter how much credit you give individual companies for their efforts to properly balance their business model demands, optimal user experience, and social value, there are fundamental limits on users’ ability to customize, or replace, the recommendation algorithm that mediates the lion’s share of their interaction with the social network and the user-generated content it hosts. We also can’t facilitate innovation or experimentation with presentation algorithms as things stand, due to the lack of effective interoperability.
And that’s why Section 230 gets so much attention -- because we don’t have the freedom to experiment at scale with things like Ethan Zuckerman’s Gobo.social project and thus improve the quality of, and better control, our social media experiences. Yes, there are filters and settings that users can change to customize their experience to some degree, likely far more than most people know. Yet, by design, these settings do not provide enough control to affect the core functioning of the recommendation engine itself.
Thus, many users perceive the platforms to be packaging up third party, user generated content and making conscious choices of how to present it to us -- choices that our limited downstream controls are insufficient to manage.
That’s why it feels to some like they’re “publishing,” and doing a bad job of it at that. Despite massive investments by the service operators, it’s not hard to find evidence of poor outcomes of recommendations; see, e.g., YouTube recommending videos about an upcoming civil war. And there are also occasional news stories of willful actions making things worse to add more fuel to the fire.
So let’s create that space for empowerment by conditioning the Section 230 immunity on the provision of more raw, open access to the content experience, so users can better control how to “publish” it to themselves by using an alternative recommendation engine. Here’s how to scale and design such an openness requirement properly:
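Purely as an illustrative sketch of the concept, and emphatically not the author's actual design requirements, here is what a hypothetical "raw feed plus replaceable ranking" arrangement could look like; every name and field below is invented:

```python
# Hypothetical sketch of the "open raw feed" idea: the platform serves
# unranked items, and ranking moves downstream into replaceable client
# code. All names and fields here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    author: str
    text: str
    timestamp: int   # seconds since epoch
    engagement: int  # likes + shares: the signal platforms optimize for

def raw_feed() -> list[Item]:
    """Stand-in for a platform's open, unranked feed endpoint."""
    return [
        Item("alice", "long thoughtful thread", 100, 12),
        Item("bob", "outrage bait", 200, 9500),
        Item("carol", "local news report", 300, 40),
    ]

# One possible third-party ranker: plain reverse-chronological.
def chronological(items: list[Item]) -> list[Item]:
    return sorted(items, key=lambda i: i.timestamp, reverse=True)

# Another: deliberately down-weight raw engagement.
def calm_ranker(items: list[Item]) -> list[Item]:
    return sorted(items, key=lambda i: i.engagement)

for ranker in (chronological, calm_ranker):
    print(ranker.__name__, [i.author for i in ranker(raw_feed())])
```

The point of the sketch is only that the ranking step becomes a competitive, user-chosen layer rather than something the platform alone controls.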
Actual Facts Undercut Media's Narrative That Law Enforcement Task Force Broke Up A Multi-State Sex Trafficking Operation
If sex trafficking was actual traffic, people would rarely complain about congestion. It's not that it doesn't happen. It's that it doesn't happen with the frequency claimed by government officials in order to do things like dismantle Section 230 immunity or pursue baseless prosecutions against online ad services.
But it always sounds like an omnipresent threat thanks to far too many news organizations who are apparently unwilling to challenge claims made by officials, much less dig into the details of trafficking stings. Almost without exception, big human/sex trafficking busts end with little to show for them but some standard solicitation arrests and a handful of jailed sex workers of legal age who haven't been "trafficked."
There's a lot of blame to spread around for this turning from small-scale misguided hysteria into the focal point of legislation that harms the immunity granted to website and platform owners. But we can start with the media, which hasn't met a sex trafficking story it isn't willing to hype, even when the facts don't jibe with the headlines. Michael Hobbes punches holes in the latest sex trafficking horror story covered nationwide -- one that contains very little horror and almost no sex trafficking.
This is how it landed on people's virtual doorsteps following the government's press release:
Esports Milestone: Guild Esports Looks For London Stock Exchange Listing
For years now, we've covered various milestones the esports industry has hit as it has exploded in popularity. Once relegated primarily to a few overseas markets, the past decade has seen an acceleration of the industry hitting the mainstream, from features in sports media on participants, to college scholarships for esports, to IRL leagues getting in the game, and even the betting markets opening up to esports gambling. While this trend began long before the world's current predicament, it's also true that the COVID-19 pandemic, which shuttered live sports for months, acted as a supercharger for all of this.
All of which contributed to the latest milestone the esports industry has managed to hit, as famed footballer David Beckham's Guild Esports franchise has announced it plans to get listed on the London Stock Exchange.
French Government To Make Insulting Mayors A Criminal Offense
French government entities continue to clamp down on speech. Following a terrorist attack on a French satirical newspaper, government leaders vowed to double down on protecting controversial speech. The government then fast-tracked several prosecutions under its anti-terrorism laws, which included arresting a comedian for posting some anti-Semitic content. It further celebrated its embrace of free speech by arresting a man for mocking the death of three police officers.
A half-decade later, that same commitment to protecting speech no one might object to continues. The country's government passed a terrible hate speech law that would have allowed law enforcement to decide what content was acceptable (and what was arrestable). Fortunately for its citizens, the country's Constitutional Court decided the law was unlawful and struck down most of it roughly a month later.
But that's not the end of bad speech laws in France. Government officials seem to have an unlimited supply of bad ideas. And some government officials are being hit with far more than objectionable words: assaults on French mayors continue to occur at a rate of about once a day. Mayors assaulted and unassaulted have asked the French government to do more to protect them from these literal attacks.
The government has responded. And it's not going to make mayors any more popular or make them less likely to be physically attacked.
If We're So Worried About TikTok, Why Aren't We Just As Worried About AdTech And Location Data Sales?
We've noted a few times how the TikTok ban is largely performative, xenophobic nonsense that operates in a bizarre, facts-optional vacuum.
The biggest pearl clutchers when it comes to the teen dancing app (Josh Hawley, Tom Cotton, etc.) have been utterly absent from (or downright detrimental to) countless other security and privacy reform efforts. Many have opposed even the most basic of privacy rules. They've opposed shoring up funding for election security reform. Most are utterly absent when we talk about things like our dodgy satellite network security, the SS7 cellular network flaw exposing wireless communications, or the total lack of any meaningful privacy and security standards for the internet of broken things.
As in, most of the "experts" and politicians who think banning TikTok is a good idea don't seem to realize it's not going to genuinely accomplish much in full context. Chinese intelligence can still glean this (and much more) data from a wide variety of sources thanks to our wholesale privacy and security failures on countless other fronts. It's kind of like banning sugary soda to put out a forest fire, or spitting at a thunderstorm to slow its advance over the horizon.
The latest case in point: Joseph Cox at Motherboard (who has been an absolute wrecking ball on this beat) discovered that private intel firms have been able to easily buy user location data gleaned from phone apps, allowing the tracking of users in immensely granular fashion:
If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great.
What should platforms like Facebook or YouTube do when users post speech that is technically legal, but widely abhorred? In the U.S. that has included things like the horrific video of the 2019 massacre in Christchurch. What about harder calls – like posts that some people see as anti-immigrant hate speech, and others see as important political discourse?
Some of the biggest questions about potential new platform regulation today involve content of this sort: material that does not violate the law, but potentially does violate platforms’ private Terms of Service (TOS). This speech may be protected from government interference under the First Amendment or other human rights instruments around the world. But private platforms generally have discretion to take it down.
The one-size-fits-all TOS rules that Facebook and others apply to speech are clumsy and unpopular, with critics on all sides. Some advocates believe that platforms should take down less content, others that they should take down more. Both groups have turned to courts and legislatures in recent years, seeking to tie platforms’ hands with either “must-remove” laws (requiring platforms to remove, demote, or otherwise disfavor currently lawful speech) or “must-carry” laws (preventing platforms from removing or disfavoring lawful speech).
This post lays out what laws like that might actually look like, and what issues they would raise. It is adapted from my “Who Do You Sue” article, which focuses on must-carry arguments.
Must-carry claims have consistently been rejected in U.S. courts. The Ninth Circuit’s Prager ruling, for example, said that a conservative speaker couldn’t compel YouTube to host or monetize his videos. But must-carry claims have been upheld in Poland, Italy, Germany, and Brazil. Must-remove claims, which would require platforms to remove or disfavor currently legal speech on the theory that such content is uniquely harmful in the online environment, have had their most prominent airing in debates about the UK’s Online Harms White Paper.
The idea that major, ubiquitous platforms that serve as channels for third party speech might face both must-carry and must-remove obligations is not new. We have long had such rules for older communications channels, including telephone, radio, television, and cable. Those rules were always controversial, though, and in the U.S. were heavily litigated.
On the must-remove side, the FCC and other regulators have prohibited content in broadcast that would be constitutionally protected speech in a private home or the public square. On the must-carry side, the Supreme Court has approved some carriage obligations, including for broadcasters and cable TV owners.
Those older communications channels were very different from today’s internet platforms. In particular, factors like broadcast “spectrum scarcity” or cable “bottleneck” power, which justified older regulations, do not have direct analogs in the internet context. But the Communications law debates remain highly relevant because, like today’s arguments about platform regulation, they focus on the nexus of speech questions and competition questions that arise when private entities own major forums for speech. As we think through possible changes in platform regulation, we can learn a lot from this history.
In this post, I will summarize some possible regulatory regimes for platforms’ management of lawful but disfavored user content, like the material often restricted now under Terms of Service.
I will also point out connections to Communications precedent.
To be clear, many of the possible regimes strike me as both unconstitutional (in the U.S.) and unwise. But spelling out the options so we can kick the tires on them is important. And in some ways, I find the overall discussion in this post encouraging. It suggests to me that we are at the very beginning of thinking through possible legal approaches.
Many models discussed today are bad ones. But many other models remain almost completely unexplored. There is vast and underexamined territory at the intersection of speech laws and competition laws, in particular. Precedent from older Communications law can help us think that through. This post only begins to mine that rich vein of legal and policy ore.
In the first section of this post, I will discuss five possible approaches that would change the rules platforms apply to their users’ legal speech. In the second (and to me, more interesting) section I will talk about proposals that would instead change the rulemakers – taking decisions out of platforms’ hands, and putting them somewhere else. These ideas are often animated by thinking grounded in competition policy.
Changing the Rules
FISA Court Decides FBI, NSA Surveillance Abuses Should Be Rewarded With Fewer Restrictions On Searching 702 Collections
A heavily-redacted opinion has been released by the FISA Court. Even with the redactions, it's clear the NSA and FBI have continued to abuse their Section 702 privileges. But rather than reject the government's arguments or lay down more restrictions on the use of these collections, the court has decided to amend the rules to make some of these abuses no longer abuses, but rather the new normal. This means there are now fewer protections shielding Americans from being swept up in the NSA's collections or targeted using this data by the FBI.
Elizabeth Goitein of the Brennan Center has a good rundown of the abuses and the court's response. She points out in her Twitter thread that some of this can be traced back to the reforms enacted by the USA Freedom Act, which codified some restrictions but didn't go far enough to prevent future abuses or mandate better reporting of rule-breaking by these agencies.
The opinion [PDF] notes the NSA found it too difficult to comply with a Section 702 requirement that at least one end of targeted communications involve someone outside of the United States. When faced with following this requirement and possibly losing access to communications it wanted, it simply chose to ignore the requirement.
Daily Deal: The Pro Photography And Photoshop 20 Course Bundle
Grab your camera, capture amazing photos, and learn to process them in Photoshop, Lightroom, and GIMP with the Pro Photography and Photoshop 20 Course Bundle. You'll learn how to take your camera skills to the next level, and then how to process those photos to truly capture the look you were envisioning. From layers and filters to levels and curves, you'll come to grips with essential photo editing concepts, and refine your skills over several courses. It's on sale for $40.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
GOP Senators Release Latest Truly Stupid Section 230 Reform Bill; Would Remove 'Otherwise Objectionable'; Enable Spamming
Honestly, you'd think that the Senate might have a few more important things to be working on right now than introducing what has to be the... what... 8th bill to try to rewrite Section 230 of the Communications Decency Act this year? Either way, three Senators on the Commerce Committee have released yet another truly ridiculous attempt at reforming Section 230. Senators Roger Wicker, Lindsey Graham, and Marsha Blackburn are the three clueless Senators behind the ridiculously named "Online Freedom and Viewpoint Diversity Act."
Before we dig deeper, I should remind you that Marsha Blackburn hates net neutrality with the passion of a thousand suns. Hell, she even put together this lovely video nearly a decade ago where she sings the praises of the open internet, and companies like "Facebook, YouTube, Twitter." And then she says: "There has never been a time that a consumer needed a federal bureaucrat to step in to intervene."
So, anyway, federal legislator Marsha Blackburn, along with Senators Wicker and Graham, has decided to "intervene" in order to attack Facebook, YouTube and Twitter, because those companies are moderating their private property in a way that these Senators don't like. It seems that they want... a bit more... what's the word I'm thinking of? Oh, right: "neutrality" in how content moderation works.
Blackburn's press release quote is particularly hilarious after what she said about net neutrality:
Pai FCC Ignored Falsely Inflated Broadband Numbers To Pat Itself On The Back
We've noted more than once that the Donald Trump, Ajit Pai FCC isn't much for this whole accurate data thing. This FCC can routinely be found parroting inaccurate lobbyist claims on a wide variety of subjects, whether that's the rate of recent broadband investment or the number of people just out of reach of affordable broadband. As such, it's not uncommon to find the FCC basing policy decisions on junk data, most recently exemplified by its rubber stamping of the job- and competition-eroding Sprint/T-Mobile merger (which was approved before FCC staff had seen ANY data).
Last year, Pai's FCC tried to claim that the number of U.S. residents without access to fixed broadband (25 Mbps downstream, 3 Mbps upstream, as per the FCC) dropped from 26.1 million people at the end of 2016 to 19.4 million at the end of 2017. Pai's agency attributed this improvement to the agency "removing barriers to infrastructure investment," which is code for gutting most meaningful consumer protections at lobbyist behest. But last year we noted that a good chunk of that improvement was thanks not only to policies Pai historically opposed (community fiber broadband networks and fiber build out conditions affixed to the 2015 AT&T DirecTV merger), but to administrative error.
Consumer groups also pointed out that a big reason for that shift was a major false claim on the part of a smaller ISP named BarrierFree, which had dramatically overstated its coverage areas in Form 477 data submitted to the FCC, resulting in broadband improvement numbers overstated by millions of Americans. In a follow-up report this week, the FCC quietly acknowledged that it was long aware of the "mistake" but published the falsely inflated numbers anyway:
The Government Has Been Binging On Classification. Senators Say It's Time To Start Purging.
Senators Ron Wyden and Jerry Moran have published an op-ed at Just Security detailing the government's overuse of classification (and distaste for declassification) -- a practice that uses our tax dollars to keep secrets from us. Overclassification is a problem. It has been a problem for decades, but it keeps getting worse. Multiple government agencies spend billions every year marking things "classified" and then forgetting the documents they've classified still exist.
Game Creator Has His YouTube Video Of Game Demonetized Over Soundtrack He Also Created
Content moderation, whether over social or intellectual property issues, is impossible to do well. It just is. The scale of content platforms means that automated systems have to do most of this work, and those automated systems are always rife with avenues for error and abuse. While this goes for takedowns and copyright strikes, it is also the case for the demonetization practices of big players like YouTube.
But how bad are these systems, really? Well, take, for instance, the case of a man who created a video game, and the soundtrack for that game, only to have his YouTube videos of the game demonetized over a copyright claim.
Prosecutor Who Used Bite Mark Analysis Even The Analyst Called 'Junk Science' Can Be Sued For Wrongful Jailing Of Innocent Woman
A lot of stuff that looks like science but hasn't actually been subjected to the rigors of the scientific process has been used by the government to wrongly deprive people of their freedom. As time moves forward, more and more of the forensic science used by law enforcement has been exposed as junk -- complicated-looking mumbo-jumbo that should have been laughed out of the crime lab years ago.
Tire marks, bite marks, hair analysis… even the DNA "gold standard" has come under fire. If it's not the questionable lab processes, it's the testimony of government expert witnesses who routinely overstated the certainty of their findings.
Bite mark analysis has long been considered junk science. But for a far longer period, it was considered good science -- a tool to be used to solve crimes and lock up perps. This case, handled by the Third Circuit Court of Appeals, contains an anomaly: the bite mark expert who helped wrongly convict a woman of murder -- taking away eleven years of her life -- actually stated on record that bite mark analysis is junk science.
This case starts with some DNA testing. Supposedly, this is as scientific as it gets. But the prosecutor appeared to have wanted to pin this crime on Crystal Dawn Weimer. So investigators chose to ignore what the DNA evidence told them. Investigating the murder of Curtis Haith, who had been beaten and shot in the face, investigators started talking to party guests who had been at Haith's apartment the night before. They zeroed in on Weimer even when available evidence seemed to point elsewhere.
From the decision [PDF]:
Astronomers Say SpaceX Astronomy Pollution Can't Be Fixed
We recently noted how the SpaceX launch of low orbit broadband satellites is not only creating light pollution for astronomers and scientists, but captured U.S. regulators, eager to try and justify rampant deregulation, haven't been willing to do anything about it. While SpaceX's Starlink platform will create some much needed broadband competition for rural users, the usual capacity constraints of satellite broadband mean it won't be a major disruption to incumbent broadband providers. Experts say it will be painfully disruptive to scientific study and research, however:
While SpaceX says it's taking steps to minimize the glare and "photo bombing" capabilities of these satellites (such as anti-reflective coating on the most problematic parts of the satellites), a new study suggests that won't be so easy. The joint study from the National Science Foundation's NOIRLab and the American Astronomical Society (AAS) found that while SpaceX light pollution can be minimized somewhat, it won't be possible to eliminate it entirely:
Trump Gets Mad That Twitter Won't Take Down A Parody Of Mitch McConnell; Demands Unconstitutional Laws
I'm still perplexed by Trumpian folks insisting that the President is a supporter of free speech (or the Constitution). It's quite clear that he's been a huge supporter of censorship over the years. The latest example is, perhaps, the most bizarre (while also being totally par for the course with regards to this President). For unclear reasons, the President has retweeted someone with fewer than 200 followers who posted a picture of Senate Majority Leader Mitch McConnell in traditional Russian soldier garb... complaining that Twitter won't take that image down while it has "taken down" manipulated media from his supporters.
The tweet says:
Government's 'Reverse' Warrant Rejected By Two Consecutive Federal Judges
The government doesn't always get what it wants. A novel twist on mass surveillance -- the so-called "reverse" warrant -- is becoming more popular now that law enforcement has realized Google maintains a stockpile of cell location data.
Reverse warrants are just that: completely backwards. Cops don't have a suspect to target. All they have is a crime scene. Using location data allows them to work backwards to a list of suspects. Officers geofence an area around the crime scene and head to Google to ask for all information on cellphones in that area during the time the crime was committed. This treats everyone in the area as a suspect until investigators have had a chance to dig through their data to narrow down the list.
Warrants are supposed to have a certain amount of particularity. These warrants have none. All they have are some coordinates and a clock. Fortunately, as the EFF reports, some judges are pushing back.
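Mechanically, a geofence request boils down to a spatial and temporal filter over a location-history database. As a rough sketch of why such a request sweeps in everyone nearby (the data layout here is hypothetical; Google's actual internal schema is not public):

```python
# Illustrative geofence filter: everything inside a radius, within a
# time window, becomes a "suspect." Field names are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    ts: int  # unix seconds

def within_geofence(p: Ping, lat: float, lon: float, radius_m: float,
                    t_start: int, t_end: int) -> bool:
    # Equirectangular distance approximation: fine at city scale.
    dx = math.radians(p.lon - lon) * math.cos(math.radians(lat))
    dy = math.radians(p.lat - lat)
    dist_m = 6_371_000 * math.hypot(dx, dy)
    return dist_m <= radius_m and t_start <= p.ts <= t_end

pings = [Ping("A", 40.7128, -74.0060, 1000), Ping("B", 40.80, -74.00, 1000)]
suspects = {p.device_id for p in pings
            if within_geofence(p, 40.7128, -74.0060, 150, 900, 1100)}
print(suspects)  # {'A'}: everyone inside the fence, suspect by location alone
```

Everything returned by the filter is a "suspect" purely by proximity, regardless of any connection to the crime, which is exactly the particularity problem the judges flagged.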
Daily Deal: The Hardcore Game Development And Animation Bundle
The Hardcore Game Development and Animation Bundle has 6 courses to help you learn how to create your own video games. You'll learn the basics of game design, of using Forager iOS, of character modeling for games, and more. Courses cover popular software programs for 3D game animation like Zbrush, PBR, Maya, Substance, Unity, and Unreal. It's on sale for $30.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
White House Supposedly Blocked Walmart From Buying TikTok Because It Would Prove Its Rationale For Forcing A Deal Was Bullshit
Among the rumors of who might take over TikTok (which the Trump administration is forcing ByteDance to sell) was the surprise entrant of Walmart. While we're still waiting for the official decision, a report last week noted that the White House stepped in to tell Walmart that the deal couldn't happen with Walmart as the lead buyer:
FBI Horrified To Discover Ring Doorbells Can Tip Off Citizens To The Presence Of Federal Officers At Their Door
Ring's camera/doorbells may as well be branded with local law enforcement agency logos. Since Amazon acquired the company, Ring has cornered the law enforcement-adjacent market for home security products, partnering with hundreds of agencies to get Ring's products into the hands of residents. A lot of this flows directly through police departments, which can get them almost for free as long as they push citizens towards using Ring's snitch app, Neighbors, and allow Ring to handle the PR work.
So, it's hilarious to find out the FBI is concerned about Ring cameras, considering the company's unabashed love for all things law enforcement. The Intercept -- diving back into the "Blue Leaks" stash of exfiltrated law enforcement documents -- has posted an FBI "Technical Analysis Bulletin" [PDF] warning cops about the threat Ring cameras pose to cops. After celebrating the golden age of surveillance the Internet of Things has ushered in, the FBI notes that doorbell cameras see everyone who comes to someone's door -- even visitors who would rather the absent resident remained unaware of their presence.
New Gear For Section 230 Fans: Otherwise Objectionable
Get your Otherwise Objectionable gear in the Techdirt store on Threadless »
If Section 230(c)(1) contains "the twenty-six words that created the internet", then (c)(2) contains the words that gave them some critical help. Among those words are two that are especially important, "otherwise objectionable", as they turn a limited list of specific content that can be removed into an open-ended protection for platform operators to moderate as they choose — and now you can wear them proudly with our new gear on Threadless.
As usual, there's a wide variety of gear available in this and other designs — including t-shirts, hoodies, notebooks, buttons, phone cases, mugs, stickers, and of course the now-standard face masks. Check out all our designs and items in the Techdirt store on Threadless!
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, both our winners on the insightful side are folks expressing their doubt about our Greenhouse guest post on thoughtfully regulating the internet. In first place, it's an anonymous commenter focusing on the various interests at play:
This Week In Techdirt History: August 30th - September 5th
Five Years Ago
This week in 2015, the NSA was renewing its bulk records collection after a worrying and slightly suspicious court ruling. The FBI was somehow using Hurricane Katrina as an excuse to get more Stingray devices, just before the Wall Street Journal got a "win" (though the devil was in the details) in a lawsuit related to Stingray surveillance orders, and the DOJ told federal agents that they need warrants to use the devices. Meanwhile, the NYPD was volunteering to be copyright cops in Times Square, Sony was downplaying the damage done by the same hack it was hyping up before, and the entertainment industry was freaking out about Popcorn Time.
Ten Years Ago
This week in 2010, we were saddened to see the US Commerce Secretary siding with the RIAA and telling ISPs to become copyright cops, even as more ISPs were stepping up to fight subpoenas from the US Copyright Group (and in France, some ISPs were fighting back against Hadopi, which was also becoming a tool of scammers). One court refused to dismiss a Righthaven lawsuit involving a copyright that was bought after the alleged infringement happened, while another court was seeking ways to minimize a Righthaven win with minuscule damages — and the LVRJ was defending the Righthaven suits and mocking a competitor for criticizing them.
Fifteen Years Ago
This week in 2005, we were pleased to see that the judge in one of the first instances of someone fighting back against RIAA lawsuits seemed to recognize the issues, and less pleased to see another court give its assent to yet another form of DMCA abuse. It wasn't as crazy as what was happening in India, though, where it appeared that their equivalent of the MPAA got an open search warrant for the entire city of New Delhi to look for pirated movies. And even that didn't match the panic over mobile porn that was gripping parts of the world, leading to things like Malaysian police performing random porn spot-checks on people's phones.
Students, Parents Figure Out School Is Using AI To Grade Exams And Immediately Game The System
With the COVID-19 pandemic still working its way through the United States and many other countries, we've finally arrived at the episode of this apocalypse drama where school has resumed (or will shortly) for our kids. It seems that one useful outcome of the pandemic, if we're looking for some kind of silver lining, is that it has put on full display just how inept we are as a nation in so many ways. Federal responses, personal behavior, our medical system, and our financial system are all basically getting failing grades at every turn.
Speaking of grades, schools that are now trying to suddenly pull off remote learning for kids are relying on technology to do so. Unfortunately, here too we see that we simply weren't prepared for this kind of thing. Aside from all of the other complaints you've probably heard or uttered yourselves -- internet connections are too shitty for all of this, teachers aren't properly trained for distance learning, the technology being handed out by schools mostly sucks -- we can also add to that list unfortunate attempts by school districts to get AI to grade exams.
This story begins with a parent seeing her 12-year-old son, Lazare Simmons, fail a virtual exam. Taking an active role, Dana Simmons went on to watch her son complete more tests and assignments using the remote learning platform the school had set students up on, Edgenuity. While watching, it quickly became apparent how the platform was performing its scoring function.
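Reporting on this story indicated that the scoring leaned heavily on matching keywords against an answer rubric -- which is exactly why students could game it by submitting keyword lists instead of sentences. A toy version (hypothetical, and certainly not Edgenuity's actual algorithm) makes the exploit plain:

```python
# Toy keyword grader (hypothetical; not Edgenuity's actual code).
# Score = fraction of rubric keywords present anywhere in the answer.
RUBRIC = {"photosynthesis", "chlorophyll", "sunlight", "glucose"}

def grade(answer: str) -> float:
    words = set(answer.lower().split())
    return len(RUBRIC & words) / len(RUBRIC)

real_answer = "Plants use sunlight and chlorophyll to make glucose via photosynthesis"
word_salad = "photosynthesis chlorophyll sunlight glucose"  # no sentence at all

print(grade(real_answer))  # 1.0
print(grade(word_salad))   # 1.0 -- the gamed answer scores just as well
```

Any grader that checks only for keyword presence cannot distinguish understanding from a pile of the right words, which is precisely what students and parents figured out.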
Content Moderation Case Studies: Stopping Malware In Search Leads To Unsupported Claims Of Bias (2007)
Summary: As detailed in a thorough oral history in Wired back in 2017, it’s hard to overstate the importance of Google’s Safe Browsing blocklist effort, which began as a project in 2005 but really launched in 2007. The effort was in response to a recognition that there were malicious websites out there attempting to trick people into visiting in order to install various forms of malware. Google’s Safe Browsing list, and its corresponding API (used by pretty much every other major browser, including Safari, Firefox and more), has become a crucial part of stopping people from being lured to dangerous websites that may damage or compromise their computers.
Of course, as with any set of filters and blocklists, questions are always raised about the error rate, and whether or not you have too many false positives (or false negatives). And, not surprisingly, when sites are added to the blocklist, many website operators become upset. Part of the problem was that, all too often, the websites had become compromised without the operator knowing about it -- leading them to claim they were falsely being blocked. From the oral history:
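For a sense of what consuming that API looks like, here is a minimal sketch against the Safe Browsing v4 Lookup API, the simpler cousin of the privacy-preserving Update API that browsers actually use (which exchanges hash prefixes rather than raw URLs). It assumes a valid API key and the third-party requests library; the client fields are placeholders.

```python
# Minimal sketch: ask the Safe Browsing v4 Lookup API whether a URL is
# on the blocklist. Assumes a real API key; "requests" is from PyPI.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

def check_url(url: str) -> bool:
    """Return True if Safe Browsing reports a threat match for the URL."""
    body = {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(ENDPOINT, json=body, timeout=10)
    resp.raise_for_status()
    # An empty JSON object means no match was found.
    return bool(resp.json().get("matches"))

# Google documents a dedicated test URL for exercising the API.
print(check_url("http://malware.testing.google.test/testing/malware/"))
```

The lookup form trades privacy for simplicity (the raw URL is sent to Google), which is why browsers use the hash-prefix Update API instead.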
The Next Register Of Copyrights Must Realize That Copyright Serves The Public
Mike has written many times on this website about various shenanigans at the Copyright Office. An obscure government agency to many, the Copyright Office actually has a huge influence over copyright policy and law, from Congress to the courts. With word that the appointment of a new Register of Copyrights is imminent, this is an opportunity to fix many of the challenges with the agency.
The Copyright Office was originally established as part of the Library of Congress to register works, back when formal registration with the Office was required under statute to receive copyright protection. This registration requirement was created as a way to get a deposit copy of these works for the Library. The goal was not only to have copies for recordation purposes, but to create a vast library.
However, over the last 50 years, the role of the Copyright Office has greatly changed with the law. The formal registration requirement was ended, first by requiring only that published works contain a copyright notice and then by eventually expanding federal copyright protection to all works, published and unpublished, once they are fixed in a tangible form. Today registration with the Office is not required, but it does provide certain statutorily defined benefits. The Office was also given more and more copyright policy and law responsibilities. The result is that the Office has become much more of a policy and regulatory quasi-agency instead of its original role as part of a library and a place to register works for federal copyright protection.
The Register of Copyrights runs the Copyright Office. The title is outdated: while you can still register works at the Office, the Register's role is now much more about providing policy and legal expertise on copyright to the rest of government, overseeing the DMCA 1201 triennial review, and other important functions. This means that the Office attracts many copyright attorneys and policy experts. Unfortunately, it has been a long time since the Register was not previously an attorney for traditional rightsholders, and Registers often go back to work for traditional rightsholders after they leave government service. The last two heads are now the General Counsel of the Motion Picture Association and the head of the Association of American Publishers. One former Register was reportedly fired for not properly administering the basic functions of the office, for gross negligence in the stewardship of taxpayer dollars, and for lying about it, because all she cared about was fighting for traditional rightsholders via the policy side of the office and consolidating power by separating the Office from the Library. Most of the senior staff also move on to jobs representing traditional rightsholders (with just a few exceptions) after their time at the office.
The Copyright Office is seen by many as the lead on U.S. copyright policy, advising the government on everything from approaches to appellate court cases and trade agreement language to making suggestions on changes to Section 512 of the DMCA in its recent report. Based on another recommendation from the Office, members of Congress are trying to pass the CASE Act to hand over much of the judicial function in enforcing copyright law to the Office, to be decided in quasi-judicial proceedings. This is especially bad because the Office's 512 report was basically an attack on how the courts are getting the DMCA wrong almost every time they decide against rightsholders.
How can we trust the Office to follow current law as shaped by these court decisions when it has openly rejected those decisions?
The Copyright Office has seen itself as an advocate for traditional rightsholders for most of the last 50 years in its new and expanded policy and regulatory role. Former Register Maria Pallante made this point clear in testimony before the Senate Judiciary Intellectual Property Subcommittee:
Intermediary Liability And Responsibilities Post-Brexit
This is a peculiar time to be an English lawyer. The UK has one foot outside the EU, and (on present intentions) the other foot will join it when the current transitional period expires at the end of 2020.
It is unclear how closely tied the UK will be to future EU law developments following any trade deal negotiated with the EU. As things stand, the UK will not have to implement future EU legislation, and is likely to have considerable freedom in many areas to depart from existing EU legislation.
The UK government has said that it has no plans to implement the EU Copyright Directive adopted in April 2019. Nor does it seem likely that it would have to follow whatever legislation may result from the European Commission's proposals for an EU Digital Services Act. Conversely, the government has also said that it has no current plans to change the existing intermediary liability provisions of the EU Electronic Commerce Directive, or the Directive's approach to prohibition of general monitoring obligations.
Looking across the Atlantic, there is the prospect of a future trade agreement between the UK and the USA. That has set off alarm bells in some quarters that the US government will want the UK to adopt an intermediary liability shield modeled on S.230 Communications Decency Act.
Domestically, the UK government is developing its Online Harms plans. The proposed legislation would impose a legal duty on user generated content-sharing intermediaries and search engines to prevent or inhibit many varieties of illegal or harmful UGC. Although branded a duty of care, the proposal is more akin to a broadcast-style content regulatory regime than to a duty of care as a tort lawyer would understand it. The regime would most likely be managed and enforced by the current broadcast regulator, Ofcom. As matters stand, the legislation would not define harm, leaving Ofcom to decide (subject to some specific carve-outs) what should be regarded as harmful.
All this is taking place against the background of the techlash. This is not the place to get into the merits and demerits of that debate. The aim of this piece is to take an educational ramble around the UK and EU legal landscape, pausing en route to inspect and illuminate some significant features.
Liability Versus Responsibilities
The tour begins by drawing a distinction between liability and responsibilities.
In the mid-1990s the focus was mostly on liability: the extent to which an intermediary can be held liable for unlawful activities and content of its users. The US and EU landmarks were S.230 CDA 1996 and S.512 DMCA 1998 (USA), and Articles 12 to 14 of the Electronic Commerce Directive 2000 (EU).
Liability presupposes the user doing something unlawful on the intermediary's platform. (Otherwise, there is nothing for the intermediary to be liable for.) The question is then whether the platform, as well as the user, should be made liable for the user's unlawful activity – and if so, in what circumstances. The risk (or otherwise) of potential liability may encourage the intermediary to act in certain ways. Liability regimes incentivise, but do not mandate.
Over time, the policy focus has expanded to take in responsibilities: putting an intermediary under a positive obligation to take action in relation to user content or activity.
A mandatory obligation to prevent users behaving in particular ways is different from being made liable for their unlawful activity. Liability arises from a degree of involvement in the primary unlawful activity of the user.
Imposed responsibility does not necessarily rest on a user's unlawful behavior. The intermediary is placed under an independent, self-standing obligation – one that it alone can breach.
Responsibilities Imposed By Court Orders
Responsibilities first manifested themselves as mandatory obligations imposed on intermediaries by specific court orders, but still predicated on the existence of unlawful third party activities.
In the US this development withered on the vine with SOPA/PIPA in 2012. Not so in the EU, where copyright site blocking injunctions can be (and have often been) granted against internet service providers under Article 8(3) of the InfoSoc Directive. The Intellectual Property Enforcement Directive requires similar injunctions to be available for other IP rights. In the UK it is established that a site blocking injunction can be granted based on registered trade marks, and potentially in respect of other kinds of unlawful activity.
Limits to the actions that court orders can oblige intermediaries to take in respect of third party activities have been explored in numerous cases: amongst them, at EU Court of Justice level, detection and filtering of copyright infringing files in SABAM v Scarlet and SABAM v Netlog; detection and filtering of equivalent defamatory content in Glawischnig-Piesczek v Facebook; and worldwide delisting in Google v CNIL.
Such court orders tend not to be conceptualized in terms of remedying a breach by the intermediary. Rather, they are based on efficiency: the intermediary, as a choke point, should be co-opted as being in the best position to reduce unlawful activity by third parties. In UK law at least, the intermediary has no prior legal duty to assist – only to comply with an injunction if the court sees fit to grant one.
Responsibilities Imposed by Duties Of Care
Most recently the focus on intermediary responsibilities has broadened beyond specific court orders. It now includes the idea of a prior positive obligation, imposed on an intermediary by the general law, to take steps to reduce risks arising from user activities on the platform.
This kind of obligation, frequently labelled a duty of care, is contemplated by the UK Online Harms proposals and may form part of a future EU Digital Services Act.
In the form in which it has been adapted for the online sphere, a duty of care would impose positive obligations on the intermediary to prevent users from harming other users (and perhaps non-users). Putting aside the vexed question of what constitutes harm in the context of online speech, a legal responsibility to prevent activities of third parties is far from the norm. A typical duty of care is owed in respect of someone's own acts, not to prevent acts of third parties.
Although conceptually distinct from liability, an intermediary duty of care can interact and overlap with it. For example, a damages claim framed as breach of a duty of care may in some circumstances be barred by the ECD liability shields. In McFadden the rightsowner sought to hold a Wi-Fi operator liable for damages in respect of copyright infringement by users, founded on an allegation that the operator had breached a duty to secure its network.
The CJEU found that the claim for damages was precluded by the Article 12 conduit shield, even though the claim was framed as breach of a duty rather than as liability for the users' copyright infringement as such.

At the other end of the spectrum, the English courts have held that if a regulatory sanction is sufficiently remote from specific user infringements as not to be in respect of those infringements, the sanction is not precluded by the ECD liability shields. The UK Online Harms proposals suggest that sanctions would be for breach of systemic duties, rather than penalties tied to failure to remove specific items of content.

Beyond Unlawfulness

Although intermediary liability is restricted to unlawfulness on the part of the user, responsibility is not. A self-standing duty of care is concerned with risk of harm. Harm may include unlawfulness, but is not limited to it. The scope of such a duty of care depends critically on what is meant by harm. In English law, comparable offline duties of care are limited to objectively ascertainable physical injury and damage to physical property. The UK Online Harms proposals jettison that limitation in favor of undefined harm. Applied to lawful online speech, that is a subjective concept. As matters stand, Ofcom, as the likely regulator, would in effect decide what does and does not constitute harm.

Article 15 E-Commerce Directive

A preventative duty of care takes us into the territory of proactive monitoring and filtering. Article 15 ECD, which sits alongside the liability scheme enacted in Articles 12 to 14, prohibits Member States from imposing two kinds of obligation on conduits, caches or hosts: a general obligation to monitor information transmitted or stored, and a general obligation actively to seek facts or circumstances indicating illegal activity.

Article 15 does not on its face prohibit an obligation to seek out lawful but harmful activity, unless it constitutes a general obligation to monitor information. But in any event, for an EU Member State the EU Charter of Fundamental Rights would be engaged. The CJEU found the filtering obligations in Scarlet and Netlog to be not only in breach of Article 15, but also contrary to the EU Charter of Fundamental Rights. For a non-EU state such as the UK, the European Convention on Human Rights would be relevant.

So far, the scope of Article 15 has been tested in the context of court orders. The principles established are nevertheless applicable to duties of care imposed by the general law, with the caveat that Recital (48) permits hosts to be made subject to "duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities." What those "certain types" might be is not stated. In any event, the recital does not on the face of it apply to lawful activities deemed to be harmful.

The Future Post-Brexit

Both the UK and the EU are currently heading down the road of imposing responsibilities on intermediaries, while professing to leave the liability provisions of the ECD untouched. That is conceptually possible for some kinds of responsibilities, but difficult to navigate in practice.
Add the prohibition on general monitoring obligations and the task becomes harder, especially if the prohibition stems not just from the ECD (which could be diluted in future legislation) but from the EU Charter of Fundamental Rights and the ECHR.

The French Loi Avia, very much concerned with imposing responsibilities, was recently partially struck down by the French Constitutional Council. Whilst it will no doubt return in a modified form, it is nevertheless a salutary reminder of the relevance of fundamental rights.

As for UK-US trade discussions, Article 19.17 of the US-Mexico-Canada Agreement has set a precedent for the inclusion of intermediary liability. Whether the wording of Article 19.17 really does mandate full S.230 immunity, as some have suggested, is another matter. Damian Collins MP, asking a Parliamentary Question on 2 March 2020, said:
E-Voting App Maker Voatz Asks The Supreme Court To Let It Punish Security Researchers For Exposing Its Flaws
Voatz has decided to weigh in on a Supreme Court case that could turn a lot of normal internet activity into a federal crime. At the center of this CFAA case is a cop who abused his access privileges to run unauthorized searches of law enforcement databases. The end result -- after a visit to the Eleventh Circuit Court of Appeals -- was a CFAA conviction for violating the system's terms of use.

That's why this case is important. If the CFAA is interpreted this broadly, plenty of people become criminals. And it won't just be security researchers risking criminal charges simply by performing security research. It will also be everyone who lies to social media services about their personal info. Lawprof Orin Kerr's brief to the Supreme Court points out what a flat "no unauthorized use" reading would do to him.
America Needs To Stop Pretending The Broadband 'Digital Divide' Isn't The Direct Result Of Corruption
Last week, a tweeted photo of two kids huddled on the ground outside of a Taco Bell -- there just to gain access to a reliable internet connection -- made the rounds on social media. The two found themselves on the wrong side of the "digital divide," forced to sit in the dirt to get online just 45 minutes from the immensely wealthy technology capital of the United States:
Another Florida Appeals Court Says Compelled Passcode Production Violates The Fifth Amendment
Things are getting pretty unsettled in Florida in terms of compelling the production of phone passcodes. Less than half a decade ago, refusing to produce passwords netted people contempt charges. As these cases moved forward through the court system, the legal calculus changed. As it stands now, state appeals courts in two Florida districts have found that forcing people to give up passcodes violates the Fifth Amendment. But there's still some settling left to do, and the First District has asked the state's top court to take a look at the issue.

The latest development comes from Florida's Fifth District, where another state appeals court has reached the same conclusion as the others: passcodes are testimonial, and forcing people to turn them over implicates the Fifth Amendment. (via FourthAmendment.com)

The case deals with some targeted vandalism and alleged stalking. Investigators feel the phone they found at the crime scene belongs to the suspect and contains evidence to support the aggravated stalking charges. (The victim also apparently found a GPS device attached to her car, presumably placed there by the suspect.)

The decision [PDF] recounts the state's bizarre argument at the trial court level -- one that claimed demanding a passcode from the suspect was not an "intrusion."
Sony May Just Be Loosening The Reins As Gaming Brings In A Plurality Of Its Revenue
Any trip down Techdirt's memory lane when it comes to Sony is not going to leave you with a good taste in your mouth. This is a company that has been almost comically protective of all things intellectual property, engaged in all manner of anti-consumer behavior, and is arguably most famous for either using an update to remove features from its gaming console that had generated sales of that console or for installing rootkits on people's computers. When it comes to any positive stories about the company, in fact, they mostly have to do with the immense success Sony had in the most recent Console Wars with its PlayStation 4 device.

Positive results and gaming aren't a coincidental pairing for Sony, it seems. There are a couple of converging stories about Sony, one dealing with its revenue and another with its plans for opening up its gaming divisions a bit, that point to positive developments. To set the stage, let's start with the fact that the video game industry is now the biggest revenue generator for Sony.
Appeals Court Says Address Mistakes On Warrants Are Mostly Harmless, Not Worth Getting Excited About
In a case involving a drug bust utilizing a warrant with erroneous information, the Sixth Circuit Court of Appeals had this to say [PDF] about the use of boilerplate language and typographical errors:
It's Time To Regulate The Internet... But Thoughtfully
The internet policy world is headed for change, and the change that’s coming isn’t just a matter of more regulations but, rather, involves an evolution in how we think about communications technologies. The most successful businesses operating at what we have, up until now, called the internet’s “edge” are going to be treated more and more like infrastructure. What’s ahead is not exactly the “break them up” plan of Senator Warren’s 2019 Presidential campaign, but something a bit different. It’s a positive vision of government intervention to generate an evolution in our communications infrastructure to ensure a level playing field for competition; meaningful choices for end users; and responsibility, transparency, and accountability for the companies that provide economically and socially valuable platforms and services.

We’ve seen evolutions in our communications infrastructure a few times before: first, when the telephone network became infrastructure for the internet protocol stack; again when the internet protocol stack became infrastructure for the World Wide Web; and then again when the Web became infrastructure on which key “edge” services like search and social media were built. Now, these edge services themselves are becoming infrastructure. And as a consequence, they will increasingly be regulated.

Throughout its history, the “edge” of the internet sector has -- for the most part -- enjoyed a light regulatory yoke, particularly in the United States. Many treated the lack of oversight as a matter of design, or even as necessarily inherent, given the differences between the timetables and processes of technology innovation and legislation. From John Perry Barlow’s infamous “Declaration of the Independence of Cyberspace” to Frank Easterbrook’s “Cyberspace and the Law of the Horse” to Larry Lessig’s “Code is law,” an entire generation of thinkers was inculcated in the belief that the internet was too complex to regulate directly (or too critical, too fragile, or, well, too “something”).

We didn’t need regulatory change to catalyze the prior iterations of the internet’s evolution. The phone network was already regulated as a common carrier service, creating ample opportunity for edge innovation. And the IP stack and the Web were built as fully open standards, structurally designed to prevent the emergence of vertical monopolies and gatekeeping behavior. In contrast, from the get-go, today’s “edge” services have been dominated by private sector companies, a formula that has arguably helped contribute to their steady innovation and growth. At the same time, limited government intervention results in limited opportunity to address the diverse harms facing internet users and competing businesses.
Ninth Circuit Says NSA's Bulk Phone Records Collection Was Illegal, Most Likely Unconstitutional
The NSA's bulk phone records collection is dead. It died of exposure. And reform. It was Ed Snowden's first leak back in 2013. A few years later, a reform bill prompted by Snowden's leaks revamped the program, forcing the NSA to tailor its requests for phone records from telcos. The NSA used to collect everything and sort through it at its leisure. But once the program eliminated the "bulk" from the NSA's bulk collection, the NSA couldn't figure out how to obtain records without getting more than it was legally allowed to take.

This recent courtroom win may have come a bit too late to matter much. But it's still a big win. In a case involving material support for terrorists by Somali citizens living in the United States, the Ninth Circuit Court of Appeals has arrived at the conclusion that the NSA's bulk phone records collection is/was illegal.

Here's the short summary from the court [PDF]:
Daily Deal: The Notion App Course
Notion is an all-in-one workspace for organizing your life. You can use it for managing tasks, studying, projects, notes, hobbies, and life goals. The Notion App Course will show you how to become more focused, organized, and productive using the Notion app. It also includes links to templates on life planning and getting things done that you can clone and personalize. It's on sale for $29.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Copia Institute's Comment To The FCC Regarding The Ridiculous NTIA Petition To Reinterpret Section 230
In his post, Mike called the NTIA petition for the FCC to change the enforceable language of Section 230 laughable. Earlier I called it execrable. There is absolutely nothing redeeming about it, or Trump's Executive Order that precipitated it, and it has turned into an enormous waste of time for everyone who cares about preserving speech on the Internet, because it meant we all had to file comments to create the public record that might stop this trainwreck from causing even more damage.

Mike's post discusses his comment. He wrote it from the standpoint of a small businessman and owner of a media website that depends on Section 230 to enable its comment section and help spread its posts around the Internet, and it took on the myth that content moderation is something that should inspire a regulatory reaction.

I also filed one, on behalf of the Copia Institute, consistent with the other advocacy we've done, including on Section 230. It was a challenge to draft; the NTIA petition is 57 pages of ignorance about the purpose and operation of the statute. There was so much to take issue with that it was hard to pick what to focus on. But among the many misstatements, the most egregious was its declaration on page 14 that:
AT&T Is Astroturfing The FCC In Support Of Trump's Dumb Attack On Social Media
We've noted for a long time that telecom giants like Comcast and AT&T have been pushing (quite successfully) for massive deregulation of their own monopolies, while pushing for significant new regulation of the Silicon Valley giants whose ad revenues they've coveted for decades. As such, it wasn't surprising to see AT&T come out with an incredibly dumb blog post this week supporting Trump's legally dubious and hugely problematic executive order targeting social media giants. You know, the plan that not only isn't enforceable by the agency supposedly tasked with enforcing it (the FCC), but that also risks creating a massive new censorship paradigm across the entire internet.

As Mike already noted, AT&T's post was a pile of bad faith nonsense, weirdly conflating net neutrality with the ham-fisted attack on Section 230. AT&T just got done deriding the FCC's relatively modest net neutrality rules as "government interference in the internet run amok." Yet here it is, advocating for a terrible plan that attempts to shovel the FCC into the role of regulating speech on social media, authority it simply doesn't have. For those that tracked the net neutrality fight, the intellectual calisthenics required here by folks like AT&T and its favorite FCC officials have been stunning, even for Trumpland:
Academic Study Says Open Source Has Peaked: But Why?
Open source runs the world. That's true for supercomputers, where Linux powers all of the top 500 machines in the world, for smartphones, where Android has a global market share of around 75%, and for everything in between, as Wired points out:
Animal Crossing Continues To Be An Innovative Playground As Biden Campaign Begins Advertising On It
For nearly half a year now, especially since this damned pandemic really took off, we've been bringing you the occasional story of how Nintendo's Animal Crossing keeps popping up with folks finding innovative ways to use the game as a platform. Protesters advocating for freedom in Hong Kong gathered in the game. Sidelined reality show stars took to the game to ply their trade. Very real people enduring very real layoffs used the game's currency as a method for making very real money. As someone who has never played the game, the picture I'm left with is of a game that is both inherently malleable to what you want to do within it and immensely social in nature.

So perhaps it was only a matter of time before one of the major Presidential candidates got involved.
Content Moderation Case Study: Amazon Alters Publishing Rules To Deter Kindle Unlimited Scammers (April 2016)
Summary: In July 2014, Amazon announced its "Netflix, but for ebooks" service, Kindle Unlimited. Kindle Unlimited allowed readers access to hundreds of thousands of ebooks for a flat rate of $9.99/month. Amazon paid authors from a subscriber fee pool. Authors were paid per page read by readers -- a system that was meant to reward more popular writers with a larger share of the Kindle Unlimited payment pool.

This system was abused by scammers once it became clear Amazon wasn't spying on Kindle users to ensure books were actually being read -- i.e., keeping track of time spent on pages of text by readers or the total amount of time spent reading. Since Amazon had no way to verify whether readers were actually reading the content, scammers deployed a variety of tricks to increase their unearned earnings.

Part of the scam relied on Amazon's willingness to pay authors for partially-read books. If only 100 pages of a 500-page book were read, the author still got credit for the 100 pages read by an Unlimited user. Scammers inflated "pages read" counts by moving the table of contents to the end of the book or offering dozens of different languages in the same ebook, relying on readers skipping hundreds of pages into the ebook to access the most popular translation. Other scammers offered readers chances to win free products and gift cards via hyperlinks that brought readers to the end of the scammers' ebooks -- books that sometimes contained thousands of pages.

The other part of the scam equation was Amazon's hands-off approach to self-publishing. Amazon has opened its platform and appears to do very little to police the content of ebooks, other than requiring authors to follow certain formatting rules. Amazon is neither a publisher nor an editor, which has created a market for algorithmically-generated content as well as a home for writers seeking a distribution outlet for their bigoted and hateful writing.

Once Amazon realized the payout system was being gamed, it altered the way Kindle Unlimited operated. It began removing scammers, notifying authors and customers that it was doing this in response to Unlimited readers' complaints.
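The apparent weakness described above is that "pages read" was inferred from the furthest position a reader reached rather than from verified reading activity. Here is a minimal Python sketch of that payout logic -- a hypothetical reconstruction for illustration, not Amazon's actual code, with made-up page counts and a made-up per-page rate -- showing why a single hyperlink jump to the back of a book could inflate an author's credited page count:

def pages_credited(furthest_page_reached: int, total_pages: int) -> int:
    """Credit the author for every page up to the furthest position the
    reader reached -- without verifying any page was actually read.
    (Hypothetical model of the exploited payout scheme.)"""
    return min(furthest_page_reached, total_pages)

PER_PAGE_RATE = 0.005  # illustrative payout per page, in dollars

# A legitimate 500-page book, of which the reader gets through 100 pages:
legit = pages_credited(100, 500)    # 100 pages credited

# A scam ebook: a link on page 1 (e.g., a table of contents moved to the
# back) jumps the reader to page 3000 of a 3000-page book. One tap, full credit:
scam = pages_credited(3000, 3000)   # 3000 pages credited

print(f"Legitimate read pays: ${legit * PER_PAGE_RATE:.2f}")  # $0.50
print(f"Scam 'read' pays:     ${scam * PER_PAGE_RATE:.2f}")   # $15.00

Even this toy version makes the incentive clear: when payout is keyed to a single maximum-position value, the cheapest "engagement" to manufacture is a jump, not a read.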
Techdirt Podcast Episode 254: Does Amazon Really Have A Data Advantage?
There's a lot of talk about tech companies and antitrust these days, and a great deal of the focus falls on Amazon. But is antitrust law really the right approach, or even capable of achieving the results many people want? This week, we're focusing on one specific complaint that comes up a lot, about Amazon being both a marketplace and a seller in that marketplace and gaining various advantages including, supposedly, from the data it has access to. We're joined by Greg Mercer, founder and CEO of Jungle Scout, to talk about whether Amazon really has a data advantage, and how much it really matters.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Content Moderation Best Practices for Startups
To say content moderation has become a hot topic over the past few years would be an understatement. The conversation has quickly shifted from how to best deal with pesky trolls and spammers -- straight into the world of intensely serious topics like genocide and the destabilization of democracies.

While this discussion often centers around global platforms like Facebook and Twitter, even the smallest of communities can struggle with content moderation. Just a limited number of toxic members can have an outsize effect on a community’s behavioral norms.

That’s why the issue of content moderation needs to be treated as a priority for all digital communities, large and small. As evidenced by its leap from lower-order concern to front-page news, content moderation is deserving of more attention and care than most are giving it today. As I see it, it’s a first-class engineering problem that calls for a first-class solution. In practical terms, that means providing:
My Comment To The FCC Regarding The Ridiculous NTIA Petition To Reinterpret Section 230
Today is the due date for the first round of submissions to the FCC's comment period on the NTIA's laughable petition, which asks the agency to reinterpret Section 230 in response to the President's temper tantrum about Twitter fact-checking him. This is clearly outside of the FCC's regulatory authority, but the agency has caved and pandered to the President by calling for comments anyway.

There are a ton of individuals and organizations commenting on why nearly everything around this is unconstitutional and/or outside the FCC's legal authority. The Copia Institute is filing a comment along those lines written by Cathy Gellis, and she'll have a post about that later. However, I wanted to file a separate comment from my own personal perspective about Section 230 and the nature of running a small media website that relies heavily on its protections. Beyond the various filings from lawyers about this or that specific aspect of the law or Constitutional authority, there appeared to be little discussion of just how illiterate the NTIA petition is concerning how content moderation works. And, tragically, many of the early filers on the docket were people screaming that because some content of theirs had been moderated by a social media company, the FCC must neuter Section 230.

The key part of my comment is to reinforce the idea that content moderation is impossible to do well: you will always have some people who disagree with the results, and there will also be many "mistakes," because that's the nature of content moderation. It is not evidence of bias or censorship. And, indeed, as my comment highlights, changing Section 230 to try to deal with these fake problems is only likely to lead to the suppression of more speech and the shrinking of the open internet.

Also, I talk about the time I wasn't kicked out of a lunch where I sat next to FCC Chairman Ajit Pai.

You can read my entire comment below. If you would like to file a comment on the proceedings and have not yet done so: while the initial round of comments is due today, there is a second round for "responding" to comments made in the first round, which runs through September 17th.
Daily Deal: The Ultimate Artificial Intelligence Scientist Bundle
The Ultimate Artificial Intelligence Scientist Bundle consists of four courses covering Python, TensorFlow, Machine Learning, and Deep Learning. You will learn about complex theories, algorithms, coding libraries, Artificial Neural Networks, and Self-Organizing Maps. You'll also learn about the core principles of programming, data validation, automatic dataset preprocessing, and more. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Court Tosses Surreptitious Video Recordings Holding Together Sketchy 'Human Trafficking' Investigation
In early 2019, law enforcement in Florida wrapped up a supposed "human trafficking" sting centering on Florida spas and massage parlors. By the time prosecutors and cops were done congratulating themselves for helping purge Florida of human trafficking, they appeared to have little more than about 150 bog-standard solicitation and prostitution arrests.

But they did land a big fish. Robert Kraft -- the owner of the New England Patriots -- was one of the spa customers caught up in the sting. That was the biggest news. Evidence of actual trafficking never appeared, leaving law enforcement with a big name, a bunch of low-level arrests, and little else.

What little law enforcement and prosecutors did have is now gone as well. Upholding a lower court's decision on video evidence captured by hidden cameras, a Florida state appeals court says everything captured on the government's secret cameras was illegally obtained. (via FourthAmendment.com)

This conclusion was reached even though investigators obtained warrants for the cameras. Here's the backstory on the video recordings, taken from the decision [PDF]:
Trump Wants To Replace FTC Chair Whom He Can't Replace, Because The FTC Is Reluctant To Go After Trump's Social Media Enemies
A few weeks back we wrote about how FTC chair Joe Simons -- while bizarrely complaining about Section 230 blocking his investigations, despite it never actually doing that -- was actually willing to say that Trump's executive order on social media was nonsense (though not in those words). While the FCC caved and moved forward with its nonsense exploration of Section 230, the FTC has done nothing, because there's nothing for it to actually do.

And apparently our narcissist in chief is upset about that. Politico reports that the White House has been interviewing possible replacements for Simons because it wants someone who will punish Trump's mythical list of enemies among social media companies (even as those companies have bent over backwards to accommodate his nonsense):