Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2026-01-16 08:17
Daily Deal: Ultimate Web Development eBook and Course Bundle
SitePoint's $19 Ultimate Web Development eBook and Course Bundle will show you how to start your journey as a front-end web developer, giving you access to 7 best-selling ebooks and more than 21 hours of instructional video. You will learn about popular languages and frameworks like HTML5, CSS3, JavaScript, and Angular 2. You will have your first websites up and running in no time. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
End Of An Era: Saying Goodbye To John Perry Barlow
I was in a meeting yesterday when the person I was meeting with mentioned that John Perry Barlow had died. While he had been sick for a while, and there had been warnings that the end might be near, it's still somewhat devastating to hear that he is gone. I had the pleasure of interacting with him both in person and online multiple times over the years, and each time was a joy. He was always insightful, thoughtful and deeply empathetic.

I can't remember for sure, but I believe the last time I saw him in person was a few years back at a conference (I don't even recall which conference), where he was on a panel that had no moderator, and literally seconds before the panel was to begin, I was asked to moderate it with zero preparation. Of course, it was easy to get Barlow to talk, and to make it interesting, even without preparation. But that day the Grateful Dead's Bob Weir (for whom Barlow wrote many songs, after the two met as roommates at boarding school) was in the audience -- and while the two were close, they disagreed on issues related to copyright, leading to a public debate between them (even though Weir was not on the panel). It was fascinating to observe the discussion, in part because of the way in which Barlow approached it. Despite disagreeing strongly with Weir, the discussion was respectful, detailed and consistently insightful.

Lots of people are, quite understandably, pointing to Barlow's famous Declaration of the Independence of Cyberspace (which was published 22 years ago today). Barlow later admitted that he dashed most of it off in a bar during the World Economic Forum, without much thought. And that's why I'm going to separately suggest two other things by Barlow to read as well. The first is his Wired piece, The Economy of Ideas, from 1994, the second year of Wired's existence, when Barlow's wisdom was found in every issue. Despite being written almost a quarter of a century ago, The Economy of Ideas is still fresh and relevant today. It is more thoughtful and detailed than his later "Declaration" and, if anything, I would imagine that Barlow was annoyed that the piece is still so relevant today. He'd think we should be way beyond the points he was making in 1994, but we are not.

The other piece, which I've seen a few people pointing to, is his Principles of Adult Behavior, a list of 25 rules to live by -- rules that we should be reminded of constantly. Rules that many of us (and I'm putting myself first on this list) fail to live up to all too frequently. Update: I stupidly assumed that this was a more recent writing by Barlow, but as noted in the comments (thanks!), it's actually from 1977, when Barlow turned 30.

Cindy Cohn, who is now the executive director of EFF, which Barlow co-founded, mentions in her writeup how unfair it is that Barlow (and, specifically, his Declaration) is often held up as a kind of prototype for the "techno-utopian" vision of the world that is so frequently mocked today. Yet, as Cohn points out, that's not at all how Barlow truly viewed the world. He saw the possibilities of that utopia, while recognizing the potential realities of something far less good. The utopianism that Barlow presented to the world was not -- as many assume -- a claim that these things were a sort of manifest destiny, but rather a belief that by presenting such a utopia, we might all strive and push and fight to actually achieve it.
FCC Refuses To Release FOIA Documents Pertaining To Its Stupid Verizon 'Collusion' Joke
You might recall that right before the FCC voted to kill net neutrality at Verizon's behest, the agency thought it would be a hoot to joke about its "collusion" with Verizon at a telecom industry gala. The lame joke was a tone-deaf attempt to mock very legitimate concerns that Pai, a former Verizon regulatory lawyer, is far too close to the industry he's supposed to be regulating. The FCC even went so far as to include a little video featuring Verizon executives, who chortled about their plans to install Pai as a "puppet" leader at the agency. Hilarious.

While the audience of policy wonks and lobbyists giggled, the whole thing was idiotic from stem to stern, especially given the fact that Pai's policies have been nothing short of a Verizon wish list, whether that involves protecting Verizon's monopoly over business data services (BDS) or the efforts to undermine any attempts to hold Verizon accountable for repeated privacy violations. Much like the other lame video Pai circulated at the time to make light of consumer outrage, it only served to highlight how viciously out of touch this FCC is with the public it's supposed to be looking out for.

Gizmodo recently filed a FOIA request to obtain any communications between the FCC and Verizon regarding the creation of the video, arguing the records were well within the public interest given concerns over Pai's cozy relationship with the companies he's supposed to be holding accountable. But Gizmodo says the FCC refused the request under Exemption 5 of the FOIA (deliberative process privilege). While the request turned up around a dozen pages of emails between the FCC and Verizon, the FCC refuses to release them, arguing that doing so could harm the agency's ability to do its job (read: kiss Verizon's ass):
Judge Tells CIA It Can't Hand Classified Info To Journalists And Pretend The Info Hasn't Been Made Public
The CIA is spectacularly terrible at responding to FOIA requests. It's so bad it's highly possible the perceived ineptness is deliberate. The CIA simply does not want to release documents. If it can't find enough FOIA exemptions to throw at the requester, it gets creative.

A FOIA request for emails pertaining to the repeated and extended downtime suffered by the (irony!) CIA's FOIA request portal was met with demands for more specifics from the requester. The CIA wanted things the requester would only know after receiving the emails he requested, like senders, recipients, and email subject lines.

The CIA sat on another records request for six years before sending a letter to the requester telling him the request would be closed if he did not respond. To be fair, the agency had provided him a response of sorts five years earlier: a copy of his own FOIA request, claiming it was the only document the agency could locate containing the phrase "records system."

In yet another example of CIA deviousness, the agency told a requester the documents requested would take 28 years and over $100,000 to compile. Then it went even further. During the resulting FOIA lawsuit, the DOJ claimed the job was simply impossible to undertake. Less than two months after MuckRock's successful lawsuit, the entire database went live at the CIA's website -- more than 27 years ahead of schedule.

This is the CIA's antipathy towards the FOIA process on display. It takes a lawsuit to get it to produce documents. And what we have here is more CIA recalcitrance being undercut by a FOIA lawsuit.

Journalist Adam Johnson sued the agency early last year for its refusal to produce correspondence between the CIA's Office of Public Affairs and prominent journalists. Johnson did receive copies of these emails, but the CIA redacted the emails it had sent to journalists. (The journalists' responses were left unredacted.) Since the emails obviously weren't redacted when they were sent to journalists, Johnson challenged the redactions in court.

The government argued it had a right to disclose classified information to journalists. And it certainly can: the CIA can waive classification if it so desires. But what it can't do is claim it has never released this classified info to the public -- not if it's handing it out to journalists.

Daniel Novak is representing the journalist in his FOIA lawsuit, and he reports the judge is no more impressed by the CIA's arguments than his client is. The decision [PDF] is redacted, but some very nice bench slaps have been left untouched... like this one, which sums up the ridiculousness of the CIA's arguments.
Moosehead Breweries Cuts And Runs From Trademark Suit Against Hop 'N Moose Brewing
For the past few years, we have detailed several trademark actions brought by Moosehead Breweries Limited, the iconic Canadian brewery that makes Moosehead beer, against pretty much every other alcohol-related business that dares to use the word "moose" or any moose imagery. This recent trend has revealed that Moosehead is of the opinion that only it can utilize the famous animal symbol of both Canada and the northern United States. Without any seeming care for whether actual confusion might exist in the marketplace, these actions have instead smacked of pure protectionism over a common word and any and all images of a common animal.

One of those actions was a suit against Hop 'N Moose Brewing, a small microbrewery out of Vermont. The filing in that case was notable in that it actually alleged detailed examples of trade dress infringement, even though the images of the trade dress included in the filing appeared to be fairly distinct. Absent, of course, was any evidence of actual confusion in the marketplace. It appeared for all the world that Moosehead's legal team took past criticism of its trademark protectionism as a critique of the word and image count in its filings and simply decided to up the volume on both ends. Since late last year, despite all of this legal literary work to support the suit, little if anything had been litigated after the initial filing.

And now it seems this whole thing will suddenly go away. Without any real explanation from either party, Moosehead has dropped its suit entirely.
Director Of Thor: Ragnarok Pirated Clips For His Sizzle Reel
With the constant drumbeat about the evils of copyright infringement and internet piracy issuing from those leading the movie industry, you might have been under the impression that everyone within the industry held the same beliefs. Between the cries of lost profits, the constant calls for the censorship of websites, and even the requests to roll back the safe harbor protections that have helped foster what must be considered a far larger audience for the industry, perhaps you pictured the rank and file of the movie business as white-clad, monk-like figures serving as paragons of copyright virtue.

Yet that's often not the case. While many artists, actors, and directors do indeed toe the industry line on matters of piracy, you will occasionally get glimpses of what has to be considered normalcy in how members of the industry engage with copyright issues. We should keep in mind our argument that essentially everyone will infringe on intellectual property at some point, oftentimes without knowing or intending it, because engaging in said behavior just seems to make sense. During a radio interview Taika Waititi did to promote Thor: Ragnarok, which he directed, he admitted to doing it himself.
Why (Allegedly) Defamatory Content On WordPress.com Doesn't Come Down Without A Court Order
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event. Between last week and this week, we're publishing a bunch of these essays, including this one.

WordPress.com is one of the most popular publishing platforms online. We host sites for bloggers, photographers, small businesses, political dissidents, and large companies. With more than 70 million websites hosted on our service, we unsurprisingly receive complaints about all types of content. Our terms of service define the categories of content that we don't allow on WordPress.com.

We try to be as objective as possible in defining the categories of content that we do not allow, as well as in our determinations about what types of content fall into, or do not fall into, each category. For most types of disputed content, we have the competency to make a judgment call about whether it violates our terms of service.

One notable and troublesome exception is content that is allegedly untrue or defamatory. Our terms prohibit defamatory content, but it's very difficult, if not impossible, for us, as a neutral, passive host, to determine the truth or falsity of a piece of content hosted on our service. Our services are geared towards the posting of longer-form content, and we often receive defamation complaints aimed at apparently well-researched, professionally written blog posts or pieces of journalism.

Defamation complaints put us in the awkward position of making a decision about whether the contents of a website are true or false. Moreover, in jurisdictions outside of the United States, these complaints put us on the hook for legal liability and damages if we don't take the content down after receiving an allegation that it is not true.

Making online hosts and other intermediaries like WordPress.com liable for allegedly defamatory content posted by users is often criticized for burdening hosts and stifling innovation. But intermediary liability isn't just bad for online hosts. It's also terrible for online speech. The looming possibility of writing a large check incentivizes hosts like Automattic to do one thing when we first receive a complaint about content: remove it. That decision may legally protect the host, but it doesn't protect users or their online speech.

The Trouble with "Notice and Takedown"

Taken at face value, the notice-and-takedown approach might seem to be a reasonable way to manage intermediary liability. A host isn't liable absent a complaint, and after receiving one, a host can decide what to do about the content.

Internet hosts like Automattic, however, are in no position to judge disputes over the truth of content that we host. Setting aside the marginal number of cases in which it is obvious that content is not defamatory -- say, because it expresses an opinion -- hosts are not at all equipped to determine whether content is (or is not) true. We can't know whether the subject of a blog post sexually assaulted a woman with whom he worked, whether a company employs child laborers, or whether a professor's study on global warming is tainted by her funding sources. A host does not have subpoena power to collect evidence. It does not call witnesses to testify and evaluate their credibility. And a host is not a judge or jury.
This reality is at odds with laws imputing knowledge that content is defamatory (and liability) merely because a host receives a complaint that content is defamatory and doesn't remove it right away.

Nevertheless, the prospect of intermediary liability encourages hosts to make a judgment anyway, by accepting a complaint at face value and removing the disputed content without any vetting by a court. This process, unfortunately, encourages and rewards abuse. Someone who does not like a particular point of view, or who wants to silence legitimate criticism, understands that he or she has decent odds of silencing that speech by lodging a complaint with the website's host, who often removes the content in hopes of avoiding liability. That strategy is much faster than having the allegations tried in a court, and as a bonus, the complainant won't face the tough questions -- Did he assault a co-worker? Did she know that the miners were children? Did he fudge his research?

The potential for abuse is not theoretical. We regularly see dubious complaints about supposedly defamatory material at WordPress.com. Here is a sampling:
New Jersey The Latest State To Protect Net Neutrality By Executive Order
The Trump FCC is currently in the process of trying to eliminate all meaningful oversight of some of the least competitive companies in America. Not only are broadband providers and the Trump administration trying to gut FTC and FCC oversight of companies like Comcast, they're also trying to ban states from protecting net neutrality or broadband consumer privacy, at ISP lobbyists' behest. This is all based on the belief that letting Comcast run amok somehow magically forges telecom Utopia. It's the kind of thinking that created Comcast and the market's problems in the first place.

And while the Trump FCC is trying to ban states from protecting consumers in the wake of federal apathy (you know, states' rights and all that), the individual states don't appear to be listening. Numerous states are pushing new legislation that effectively codifies the FCC's 2015 net neutrality rules at the state level, efforts that will be contested in the courts over the next few years. ISPs have been quick to complain about the threat of multiple, discordant and shitty state laws, ignoring the fact that they created this problem by lobbying to kill reasonable (and popular) federal protections.

Other states, like Montana and New York, have gotten more creative, signing executive orders that ban ISPs from winning state contracts if they violate net neutrality. Montana Governor Steve Bullock went so far as to suggest that other states use his order as a template, something New Jersey appears to have taken him up on. The state this week issued its own executive order (pdf) protecting net neutrality, modifying the state procurement process to prohibit state contracts with ISPs that routinely engage in anti-competitive blocking, throttling, or paid prioritization.

In a press release, state leaders say the new rules will take effect in July:
Single-Pixel Tracker Leads Paranoid Turkish Authorities To Wrongly Accuse Over 10,000 People Of Treason
We've written many articles about the thin-skinned Turkish president, Recep Tayyip Erdoğan, and his massive crackdown on opponents, real or imagined, following the failed coup attempt in 2016. Boing Boing points us to a disturbing report on the Canadian CBC News site revealing how thousands of innocent citizens have ended up in prison because they were falsely linked with the encrypted messaging app ByLock:
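To see why this kind of "evidence" is so weak, it helps to look at what a single-pixel tracker actually is. Below is a minimal, generic sketch in Python (hypothetical server and names, not the actual ByLock infrastructure or the evidence chain used in Turkey): any page or app that embeds a tiny remote image causes every visitor's device to fetch it, so an IP address lands in the server's logs whether or not the person ever knowingly used the service behind it.

```python
# Generic sketch of a 1x1 "tracking pixel" server (hypothetical names).
# Any third-party page embedding
#   <img src="http://tracker.example:8000/pixel.gif">
# makes every visitor's browser fetch the image, so the visitor's IP
# lands in these logs even if they never sought out the site itself.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Smallest valid transparent GIF.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "tracking" is nothing more than this log line: an IP
        # address recorded for anyone whose browser loaded a page
        # embedding the pixel.
        print(f"logged visit: ip={self.client_address[0]} path={self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8000), PixelHandler).serve_forever()
```

The point: an IP address in logs like these shows only that something a person loaded happened to reference the resource, not that they ever installed or deliberately used the app behind it, which is how thousands of people could be falsely "linked" to ByLock.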
Daily Deal: PocketSmith Subscriptions
It's one thing to be putting money aside in a 401(k) account or investing it in the stock market -- but nobody's relationship with their money is the same as anyone else's. PocketSmith recognizes that, which is why it designed a comprehensive set of features to give you absolute control over your money. You can see all your bank, credit card and loan accounts in one place, keep it all automatically updated, and organize your transactions as granularly as you like. Beyond tracking the past and present, however, PocketSmith is also a forecasting tool. You can see how your savings will reward you, with projected daily balances up to 10 years into the future, allowing you to get a better financial picture. The 1-year Premium subscription (10 accounts / 10 years of projections) is on sale for $49.95, and the 1-year Super subscription (30 accounts / 30 years of projections) is on sale for $69.95. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
On The Internet, Everyone Is A Creator
Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »

One theme that we've covered on Techdirt since its earliest days is the power of the internet as an open platform for just about anyone to create and communicate. Simultaneously, one of our greatest fears has been how certain forces -- often those disrupted by the internet -- have pushed over and over again to restrict and contain the internet, and turn it into something more like a broadcast platform controlled by gatekeepers, where only a chosen few can use it to create and share. This is one of the reasons we've been so adamant over the years that in so many policy fights, "Silicon Valley v. Content" is a false narrative. It's almost never true, because the two go hand in hand. The internet has made it so that everyone can be a creator. Internet platforms have made it so that anyone can create almost any kind of content they want; they can promote that content, they can distribute it, they can build a fan base, and they can even make money. That's in huge contrast to the old legacy way of needing a giant gatekeeper -- a record label, a movie studio, or a book publisher -- to let you into the exclusive club.

And yet, those legacy players continue to push to make the internet into more of a broadcast medium -- to restrict that competition, to limit the supply of creators and to push things back through their gates, under their control. For example, just recently, the legacy recording and movie industries have been putting pressure on the Trump administration to undermine the internet and fair use in NAFTA negotiations. And much of their positioning is that the internet is somehow "harming" artists and needs to be put in check.

This is a false narrative. The internet has enabled far more creators and artists than it has hurt. And to help make that point, today we're launching a new site, EveryoneCreates.org, which features stories and quotes from a variety of different creators -- including bestselling authors, famous musicians, filmmakers, photographers and poets -- all discussing how important an open internet has been to building their careers and creating their art. On that same page, you can submit your own stories about how the internet has helped you create, and why it's important that we don't restrict it. Please add your own stories, and share the site with others too!

The myth that this is "internet companies v. creators" needs to be put to rest. Thanks to the internet, everyone creates. And let's keep it that way.

Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »
FCC Report Falsely Claims Killing Net Neutrality Already Helping Broadband Competition
For years the FCC has been caught in a vicious cycle. Under the Communications Act, the FCC is required to issue annual reports on the state of U.S. broadband and competition, taking action if services aren't being deployed on a "reasonable and timely" basis. When under the grip of regulatory capture and revolving-door regulators, these reports tend to be artificially rosy, downplaying or ignoring the lack of competition that should be obvious to anybody familiar with Comcast. These folks' denial of the sector's competition shortcomings often teeters toward the comical and is usually hard to miss.

When the agency has more independently-minded leadership (which admittedly doesn't happen often), the report tends to accurately show how the majority of consumers lack real options and quality broadband. That was the case under former FCC boss Tom Wheeler, whose agency not only raised the definition of broadband to 25 Mbps (which greatly angered the industry), but actually went out of its way to highlight the fact that two-thirds of American homes lack access to FCC-defined speeds of 25 Mbps from more than one ISP (aka a monopoly).

Unsurprisingly, the Trump FCC is now taking things back in the rose-colored-glasses direction. The agency's latest Broadband Deployment Report (pdf) proudly declares that United States broadband is now, quite magically, being deployed on a "reasonable and timely" basis. An accompanying press release (pdf) similarly tries to claim that things are only getting better, thanks in large part to Ajit Pai's historically unpopular attack on net neutrality:
Court Shuts Down Trooper's Attempt To Portray New-ish Minivans With Imperfect Drivers As Justification For A Traffic Stop
Anything you do can be suspicious. Just ask our guardians of public safety. People interacting with law enforcement can't be too nervous. Or too calm. Or stare straight ahead. Or directly at officers. When traveling, travelers need to ensure they're not the first person off the plane. Or the last. Or in the middle. When driving, people can't drive too carefully. Or too carelessly. Traveling on interstate highways is right out, considering those are used by drug traffickers. Traveling along back roads probably just looks like avoiding the more heavily-patrolled interstates, and is thus suspicious.

Having too much trash in your car might get you labeled a drug trafficker -- someone making a long haul between supply and destination cities. Conversely, a car that's too clean looks like a "trap" car -- a vehicle carefully kept in top condition to avoid raising law enforcement's suspicion. Too clean is just as suspicious as too dirty. Air fresheners, a common fixture in vehicles, are also suspicious. Having too many of them is taken as an attempt to cover the odor of drugs. There's no specific number that triggers suspicion. It's all left up to the officer on the scene.

So, avoiding rousing suspicion is impossible. Fortunately, courts can push back against law enforcement assertions about suspicious behavior. Some have pushed back more forcefully than others. Thanks to another court pushback, we have two new items to add to the list of suspicious indicators. From the Texas Appeals Court decision [PDF]:
BrewDog Beats Back Trademark Action From The Elvis Presley Estate
In the middle of summer last year, we discussed a somewhat strange trademark fight between BrewDog, a Scottish brewery that has been featured in our pages for less-than-stellar reasons, and the Elvis Presley Estate. At issue was BrewDog's attempt to trademark the name of one of its beers, a grapefruit IPA called "Elvis Juice." With no other explanation beyond essentially claiming that any use of "Elvis" anywhere will only be associated in the public's mind with the 1950s rock legend, the Estate opposed the trademark application. Initially, the UK Intellectual Property Office sided with the Estate, despite the owners of BrewDog both pointing out that they were simply using a common first name and actually taking the legal course of changing their own first names to Elvis to prove the point. Not to mention that the trade dress for the beer has absolutely nothing to do with Elvis Presley. We wondered, and hoped, at the time that BrewDog would appeal the decision.

Well, it did, and it has won, which means Elvis Juice is free to exist, and the order that BrewDog pay the Elvis Estate's costs for its opposition has been vacated.
Classified Cabinet Docs Leak Down Under Via An Actual Cabinet Sale... Just As Aussies Try To Outlaw Leaking
Back in December, we reported on an effort underway in Australia to criminalize both whistleblowers and journalists who publish classified documents, with penalties of up to 20 years in prison. Twenty years, by the way, is also the amount of time that Cabinet documents are supposed to be kept classified in Australia. But just recently, Australia's ABC News suddenly started breaking a bunch of stories that appeared to come from access to Cabinet documents that were still supposed to be classified. This included stories about ending welfare benefits for anyone under 30 years old, as well as delaying background checks on refugees. Some explosive stuff.

On Wednesday, ABC finally revealed where all this material came from. It wasn't an Australian Ed Snowden. It was... government incompetence. Apparently, someone bought an old filing cabinet from a store that sells second-hand government office furniture. The cabinet had no key, so he drilled the lock and... found a ton of Cabinet documents in an actual cabinet.

So... if that law were to go through in Australia, would that mean the government employee who didn't check the filing cabinet would get 20 years in jail? Or the store that sold it? Or the guy who drilled it? Or do all of them get 20 years? Why don't we just support whistleblowers and the press for reporting important news that the public should know about?
Techdirt Podcast Episode 153: An Interview With Rep. Zoe Lofgren
When it comes to many of the legislative issues of interest to us here at Techdirt, we've always been able to count on at least one voice of reason amidst the congressional chaos: Representative Zoe Lofgren from California. In addition to playing a critical role in the fight against SOPA, she continues to be a voice of reason against bad copyright policy, expansive government surveillance, and the broken CFAA, among many other things. This week, she joins Mike on the podcast for a wide-ranging discussion about these topics and more.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Moderation Is The Commodity
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event, which we are publishing here. This one is excerpted from Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, forthcoming from Yale University Press in May 2018.

Content moderation is such a complex and laborious undertaking that, all things considered, it's amazing that it works at all, and as well as it does. Moderation is hard. This should be obvious, but it is easily forgotten. It is resource-intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes. And we are partly to blame for having put platforms in this untenable situation, by asking way too much of them. We sometimes decry the intrusion of platform moderation, and sometimes decry its absence. Users probably should not expect platforms to be hands-off and expect them to solve problems perfectly and expect them to get with the times and expect them to be impartial and automatic.

Even so, as a society we have once again handed over to private companies the power to set and enforce the boundaries of appropriate public speech for us. That is an enormous cultural power, held by a few deeply invested stakeholders, and it is being exercised behind closed doors, making it difficult for anyone else to inspect or challenge. Platforms frequently, and conspicuously, fail to live up to our expectations -- in fact, given the enormity of the undertaking, most platforms' own definition of success includes failing users on a regular basis.

The companies that have profited most from our commitment to platforms have done so by selling back to us the promises of the web and participatory culture. But as those promises have begun to sour, and the reality of their impact on public life has become more obvious and more complicated, these companies are now grappling with how best to be stewards of public culture, a responsibility that was not evident to them at the start.

It is time for the discussion about content moderation to shift, away from a focus on the harms users face and the missteps platforms sometimes make in response, toward a more expansive examination of the responsibilities of platforms. For more than a decade, social media platforms have presented themselves as mere conduits, obscuring and disavowing the content moderation they do. Their instinct has been to dodge, dissemble, or deny every time it becomes clear that, in fact, they produce specific kinds of public discourse. The tools matter, and our public culture is in important ways a product of their design and oversight.
While we cannot hold platforms responsible for the fact that some people want to post pornography, or mislead, or be hateful to others, we are now painfully aware of the ways in which platforms invite, facilitate, amplify, and exacerbate those tendencies: weaponized and coordinated harassment; misrepresentation and propaganda buoyed by its algorithmically-calculated popularity; polarization as a side effect of personalization; bots speaking as humans, humans speaking as bots; public participation emphatically figured as individual self-promotion; the tactical gaming of platforms in order to simulate genuine cultural participation and value. In all of these ways, and others, platforms invoke and amplify particular forms of discourse, and they moderate away others, all in the name of being impartial conduits of open participation. The controversies around content moderation over the last half decade have helped spur this slow recognition that platforms now constitute powerful infrastructure for knowledge, participation, and public expression.

~~~

All this means that our thinking about platforms must change. It is not just that all platforms moderate, or that they have to moderate, or that they tend to disavow it while doing so. It is that moderation, far from being occasional or ancillary, is in fact an essential, constant, and definitional part of what platforms do. I mean this literally: moderation is the essence of platforms; it is the commodity they offer.

First, moderation is a surprisingly large part of what they do, in a practical, day-to-day sense, and in terms of the time, resources, and number of employees they devote to it. Thousands of people, from software engineers to corporate lawyers to temporary clickworkers scattered across the globe, all work to remove content, suspend users, craft the rules, and respond to complaints. Social media platforms have built a complex apparatus, with innovative workflows and problematic labor conditions, just to manage this -- nearly all of it invisible to users. Moreover, moderation shapes how platforms conceive of their users -- and not just the ones who break the rules or seek their help. By shifting some of the labor of moderation back to us, through flagging, platforms deputize users as amateur editors and police. From that moment, platform managers must in part think of, address, and manage users as such. This adds another layer to how users are conceived of, along with seeing them as customers, producers, free labor, and commodity. And it would not be this way if moderation were handled differently.

But in an even more fundamental way, content moderation is precisely what platforms offer. Anyone could make a website on which any user could post anything he pleased, without rules or guidelines. Such a website would, in all likelihood, quickly become a cesspool of hate and porn, and then be abandoned. But it would not be difficult to build, requiring little in the way of skill or financial backing. To produce and sustain an appealing platform requires moderation of some form. Content moderation is an elemental part of what makes social media platforms different, what distinguishes them from the open web.
It is hiding inside every promise social media platforms make to their users, from the earliest invitations to "join a thriving community" or "broadcast yourself," to Mark Zuckerberg's promise to make Facebook "the social infrastructure to give people the power to build a global community that works for all of us."

Content moderation is part of how platforms shape user participation into a deliverable experience. Platforms moderate (removal, filtering, suspension), they recommend (news feeds, trending lists, personalized suggestions), and they curate (featured content, front-page offerings). Platforms use these three levers together to, actively and dynamically, tune the participation of users in order to produce the "right" feed for each user, the "right" social exchanges, the "right" kind of community. ("Right" here may mean ethical, legal, and healthy; but it also means whatever will promote engagement, increase ad revenue, and facilitate data collection.)

Too often, social media platforms discuss content moderation as a problem to be solved, and solved privately and reactively. In this "customer service" mindset, platform managers understand their responsibility primarily as protecting users from the offense or harm they are experiencing. But now platforms find they must also answer to users who find themselves implicated in and troubled by a system that facilitates the reprehensible -- even if they never see it. Whether I ever saw, clicked on, or 'liked' a fake news item posted by Russian operatives, I am still worried that others have; I am troubled by the very fact of it, and concerned for the sanctity of the political process as a result. Protecting users is no longer enough: the offense and harm in question is not just to individuals, but to the public itself, and to the institutions on which it depends. This, according to John Dewey, is the very nature of a public: "The public consists of all those who are affected by the indirect consequences of transactions to such an extent that it is deemed necessary to have those consequences systematically cared for." What makes something of concern to the public is the potential need for its inhibition.

So, despite the safe harbor provided by U.S. law and the indemnity enshrined in their terms of service contracts as private actors, social media platforms now inhabit a new position of responsibility -- not only to individual users, but to the public they powerfully affect. When an intermediary grows this large, this entwined with the institutions of public discourse, this crucial, it has an implicit contract with the public that, whether platform management likes it or not, may be quite different from the contract it required users to click through. The primary and secondary effects these platforms have on essential aspects of public life, as they become apparent, now lie at their doorstep.

~~~

If content moderation is the commodity, if it is the essence of what platforms do, then it makes no sense for us to treat it as a bandage to be applied or a mess to be swept up. Rethinking content moderation might begin with this recognition: content moderation is part of how platforms tune the public discourse they purport to host. Platforms could be held responsible, at least partially so, for how they tend to that public discourse, and to what ends. The easy version of such an obligation would be to require platforms to moderate more, or more quickly, or more aggressively, or more thoughtfully, or to some accepted minimum standard.
But I believe the answer is something more. Their implicit contract with the public requires that platforms share this responsibility with the public -- not just the work of moderating, but the judgment as well. Social media platforms must be custodians, not in the sense of quietly sweeping up the mess, but in the sense of being responsible guardians of their own collective and public care.

Tarleton Gillespie is a Principal Researcher at Microsoft Research and an Adjunct Associate Professor in the Department of Communication at Cornell University.
Hacker Lauri Love Wins Extradition Appeal; Won't Be Shipped Off To The US
We've been writing about the saga of Lauri Love for almost four years now. If you don't recall, he's the British student who was accused of hacking into various US government systems, and who has been fighting a battle against being extradited to the US for all these years. For those of you old-timers, the situation was quite similar to the story of Gary McKinnon, another UK citizen accused of hacking into US government computers, who fought extradition for years. In McKinnon's case, he lost his court appeals, but the extradition was eventually blocked by the UK's Home Secretary... Theresa May.

In the Lauri Love case, the situation went somewhat differently. A court said Love could be extradited, and current Home Secretary Amber Rudd was happy to go along with it. But, somewhat surprisingly, an appeals court has overruled the lower court and said Love should not be extradited:
Daily Deal: WhiteSmoke Premium
Even the best writers make errors. WhiteSmoke checks your work for grammar, spelling, punctuation, and style errors -- so you never send off a flawed work email again. Whether you're writing on mobile or desktop, this easy-to-use software is compatible with all browsers, includes a translator for over 50 languages, and lets you perfect your writing virtually anywhere you do it. A 1-year subscription is on sale for $19.99, or pay once for unlimited access for $69.99. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Canadian Privacy Commissioner Report Says Existing Law Already Gives Canadians A Right To Be Forgotten
The Privacy Commissioner of Canada is proposing something dangerous. Given the Canadian Supreme Court's ruling in the Equustek case -- which basically said Canada's laws are now everybody's laws -- a recent report issued by the Commissioner that reads a right to be forgotten into existing Canadian law should be viewed with some concern. Michael Geist has more details.
Trump's FCC Pats Itself On The Back For A Historically Stupid Year
If you've been playing along at home, you know Trump's FCC hasn't been particularly kind to consumers, competition, or the health of the internet. It has, however, been a massive boon to major ISPs terrified of disruption and competition, especially those looking to forge new media monopolies where they dominate both the conduit -- and the content -- coming into the home.

Under Pai, the FCC has gutted broadband programs for the poor, protected the cable industry's monopoly over the cable box from competition, made it easier for prison phone monopolies to rip off inmate families, dismantled generations-old media consolidation rules simply to aid Sinclair Broadcasting's merger ambitions, killed meaningful broadband privacy protections, tried to weaken the standard definition of broadband (to help hide competition gaps), and weakened rules preventing business broadband and backhaul monopolies from abusing smaller competitors, hospitals, or schools.

And that's before you even get to Pai's attack on net neutrality, potentially one of the least popular tech policy decisions in the history of the modern internet. That entire calamity is a universe unto itself, with the FCC currently under investigation for turning a blind eye to identity theft and fraud during the open comment period, as well as for bizarrely making up a DDoS attack in a ham-fisted attempt to downplay the public's disdain for Pai's agenda. It will take many years and numerous lawsuits for the problems with Pai's rushed repeal of the rules to fully materialize.

With Pai's tenure seen as a shitshow in the wake of the net neutrality repeal, the FCC recently attempted an image reclamation effort. That came in the form of a press release (pdf) lauding what the FCC calls a "year of action and accomplishment" in terms of "protecting consumers," "promoting investment," and "bridging the digital divide." You just know the FCC under Pai is doing a good job because, uh, graphics:

Amusingly, the lion's share of the agency's listed "accomplishments" were noncontroversial projects simply continued from the last FCC under Tom Wheeler. That includes efforts to open additional spectrum for wireless use, attempts to speed up cell tower placement, and ongoing efforts to reduce robocalls (the impact of which isn't yet apparent). Many of the listed efforts are just the FCC doing its job, ranging from conducting an investigation into the recent botched Hawaii ballistic missile alert, to "approving new wireless charging tech" that nobody thought should be blocked anyway.

Elsewhere, the agency's accomplishment list engages in willful omission. For example, while the FCC pats itself on the back for creating a "broadband deployment advisory council," it ignores the fact that said council is plagued by allegations of cronyism and dysfunction in the wake of recent resignations. The FCC similarly pats itself on the back for the agency's Puerto Rico hurricane response, despite the fact that locals there say the federal government and the FCC failed spectacularly in responding to the storm.

But it's the agency's claims of consumer protection that continue to deliver the best unintentional comedy. As you might expect, Pai's FCC continues to claim that killing net neutrality rules was a good thing because the rules devastated sector investment, a proven lie the agency simply can't stop repeating:
Missouri Governor Sued Over His Office's Use Of Self-Destructing Communications
Missouri Governor Eric Greitens and his staff are the targets of a recently-filed public records lawsuit [PDF]. Two St. Louis County attorneys accuse the governor of dodging public records laws through his use of Confide, an app that deletes text messages once they're read and prevents users from saving, forwarding, printing, or taking screenshots of those messages.

The governor's use of the app flies in the face of the presumption of openness. The attorneys are hoping the court will shut down the use of Confide to discuss official state business. The governor has argued an injunction would constitute prior restraint.
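For illustration, here's a toy sketch of the read-once pattern at the heart of such apps (a generic stand-in, not Confide's actual design; all names are invented): reading a message is what destroys it, which is exactly why a later records request has nothing left to capture.

```python
# Toy "disappearing message" store -- a generic sketch, not Confide's
# actual implementation. The records-law point: once read() runs,
# no copy remains for a later public-records request to produce.
import uuid

class EphemeralStore:
    def __init__(self):
        self._messages = {}  # message id -> text

    def send(self, text: str) -> str:
        msg_id = str(uuid.uuid4())
        self._messages[msg_id] = text
        return msg_id

    def read(self, msg_id: str):
        # pop() both returns and deletes: reading is destruction.
        # A second read (or a subpoena) gets None -- nothing remains.
        return self._messages.pop(msg_id, None)

store = EphemeralStore()
mid = store.send("state business discussed off the record")
print(store.read(mid))  # first read succeeds
print(store.read(mid))  # None: the message no longer exists
```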
Two Years Later, Bell's Brewery Finally Fails To Bully A Tiny Brewery Out Of Its Legitimate Trademark
Nearly three years ago, Bell's Brewery, whose products I used to buy greedily, decided to oppose a trademark application from Innovation Brewing, a tiny operation out of North Carolina. The reasons for the opposition are truly difficult to comprehend. First, Bell's stated that it uses the slogan "Bottling innovation since 1985" on some merchandise. This was only barely true. The slogan does appear on some bumper stickers that Bell's sells, and that's pretty much it. It appears nowhere on any of the brewery's beer labels or packaging. Also, Bell's never registered the slogan as a trademark. Bell's also says it uses the slogan "Inspired brewing," and it argued that Innovation's name could create confusion in the marketplace because it's somehow similar to that slogan.

This is a good lesson in why trademark bullying of this nature is a pox on any industry made up largely of small players, because it's only in the past few weeks that the Trademark Trial and Appeal Board in Virginia has ruled, essentially, that Bell's is full of crap.
Tarnishing The History Of Martin Luther King Jr.: Copyright Enforcement Edition
It is no secret that the estate of Martin Luther King Jr. has a long and unfortunate history of trying to lock up or profit from the use of his stirring words and speeches. We've talked about this issue going back nearly a decade, and it pops up over and over again. By now you've probably heard that the car brand Dodge (owned by Chrysler) used a recording of a Martin Luther King Jr. speech in a controversial Super Bowl ad on Sunday. It kicked up quite a lot of controversy -- even though his speeches have been used to sell other things in the past, including both cars and mobile phones.

King's own heirs have been at war with each other and with close friends in the past few years, suing each other as they each try to claim ownership over rights that they don't want others to have. Following the backlash around the Super Bowl ad, the King Center tried to distance itself from the ad, saying that it has nothing to do with approving such licensing deals:
Putting Pinners First: How Pinterest Is Building Partnerships For Compassionate Content Moderation
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event. Last week we published five of those essays, and this week we're continuing to publish more of them, including this one.

The way platforms develop content moderation rules can seem mysterious or arbitrary. At first glance, the result of this seemingly inscrutable process is varying guidelines across different platforms, with only a vague hint of an industry standard -- what might be banned on one platform seems to be allowed on another. While each platform may have nuances in the way it creates meaningful content moderation rules, these teams generally seek to align with the platform's (and company's) purpose, and use policies and guidelines to support an overarching mission. That different platforms deliver unique value propositions to users accounts for the variations in content moderation approaches.

At Pinterest, our purpose is clear: we help people discover and do what they love by showing them ideas that are relevant, interesting, and personal. For people to feel confident and encouraged to explore new possibilities, or try new things on Pinterest, it's important that the platform continue to prioritize an environment of safety and security. To accomplish that, a team of content policy professionals, skilled in collaborating across different technical and non-technical functions at the company, decides where we draw the lines on what we consider acceptable boundaries for content and behavior. Drawing upon the feedback of Pinterest users, and staying up to date on the prevailing discourse about online content moderation, this team of dedicated content generalists brings diverse perspectives to bear upon the guidelines and processes that keep divisive, disturbing, or unsafe content off Pinterest.

We know how impactful Pinterest can be in helping people make decisions in their daily lives, like what to eat or what to wear, because we hear directly from the Pinterest community. We've also heard how people use Pinterest to find resources to process illness or trauma they may have experienced. Sometimes, the content that people share during these difficult moments can be polarizing or triggering to others, and we have to strike the right balance between letting people rely on Pinterest as a tool for navigating these difficult issues and living up to our goal of removing divisive, disturbing, or unsafe content. As a team, we have to consider the broad range of use cases for content on Pinterest. For example, important historical yet graphic images of war can be collected in the context of learning about world events, or to glorify violence. Our team takes different contextual signals into account during the review process in order to make meaningful content moderation choices that ensure a positive experience for our community. If we wish to have the impact we hope to have in people's lives, we must also take responsibility for their entire experience.

To be responsible for the online environment that our community experiences, and to be aware of how that experience connects in a concrete way to their lives offline, means we must cultivate the humility to realize our team's limitations.
We can't claim to be experts in fields like grief counseling, eating disorder treatment, or suicide prevention -- areas that many groups and individuals have dedicated their careers to supporting -- so it's crucial that we partner with experts for the guidance, specialized skills, and knowledge that will enable us to better serve our community with respect, sensitivity, and compassion.

A couple of years ago, we began reexamining our approach to one particularly difficult issue -- eating disorders -- to understand the way our image-heavy platform might contribute to perpetuating unhealthy stereotypes about the ideal body. We had already developed strict rules about content promoting self-harm, but wanted to ensure we were being thoughtful about content offering "thinspiration" or unhealthy diets from all over the internet. To help us navigate this complicated issue, we sought out the expertise of the National Eating Disorders Association (NEDA) to audit our approach and understand all of the ways people might engage with the platform in this way.

Prior to reaching out to NEDA, we put together a list of search queries and descriptive keyword terms that we believed strongly signaled a worrying interest in self-harm behaviors. We limit the search results we show when people seek out content using these queries, and we also use these terms as a guide for Pinterest's operational teams to decide whether any given piece of self-harm-related content should be removed or hidden from public areas of the service. The subject matter experts at NEDA generously agreed to review our list to see if our bar for problematic terms was consistent with their expert knowledge, and they provided us with the feedback we needed to ensure we were aligned. We were relieved to hear that our list was fairly comprehensive, and that our struggle with grey-area queries and terms was not unique. Since beginning that partnership, NEDA has developed a rich Pinterest profile to inspire people by sharing stories of recovery, content about body positivity, and tips for self-care and illness management. By maintaining a dialogue with NEDA, the Pinterest team has continued to consider and operationalize innovative features to facilitate possible early intervention on the platform. For example, we provide people seeking eating disorder content with an advisory that links to specialized resources on NEDA's website, and we supported their campaign for National Eating Disorders Awareness Week. Through another partnership and technical integration with Koko, a third-party service that provides platforms with automated and peer-to-peer chat support for people in crisis, we're also able to provide people who may be engaging in self-harm behaviors with direct, in-the-moment crisis prevention.

Maintaining a safe and secure environment in which people can feel confident to try new things requires a multifaceted approach and multifaceted perspectives. Our team is well-equipped to grapple with broad online safety and content moderation issues, but we have to recognize when we might lack in-house expertise in more complex areas that require additional knowledge and sensitivity. We have much more work to do, but these types of partnerships help us adapt and grow as we continue to support people using Pinterest to discover and do the things they love.

Adelin Cai runs the Policy Team at Pinterest.
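As a rough illustration of the flagged-query advisory mechanism the essay above describes, here is a minimal sketch. The term list, advisory text, and URL are invented placeholders, not Pinterest's or NEDA's actual terms or copy; the core of such a feature is a lookup against a maintained set of flagged queries that limits results and attaches a pointer to specialized resources.

```python
# Hypothetical sketch of a flagged-query advisory, in the spirit of
# the approach described above; terms, message, and URL are invented
# stand-ins, not Pinterest's or NEDA's actual list or copy.
FLAGGED_QUERIES = {"thinspiration", "thinspo"}  # placeholder terms

def search(query, run_search):
    """Return (results, advisory) for a user query.

    run_search is whatever function normally produces results; for
    flagged queries we limit the results and attach an advisory that
    links out to specialized support resources.
    """
    normalized = query.strip().lower()
    if normalized in FLAGGED_QUERIES:
        advisory = ("If you or someone you know is struggling, help is "
                    "available: https://support.example/resources")
        return run_search(normalized)[:5], advisory  # limited results
    return run_search(normalized), None

# Example with a stand-in search backend:
results, advisory = search("thinspo", lambda q: [f"pin-{i}" for i in range(50)])
print(len(results), advisory)
```

The same term list can do double duty, as the essay notes: it drives the user-facing advisory at search time and guides human reviewers deciding whether individual pieces of content should be removed or hidden.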
Study Suggests Shutting Down Filesharing Sites Would Hurt Music Industry, New Artists
The evolution of the music industry's response to the fact that copyright infringement exists on the internet has been both plodding and frustrating. The industry -- which has gone through stages including a focus on high-profile and punitive lawsuits against individual "pirates," its own flavors of copyright trolling, and misguided attempts to "educate" the masses as to why their natural inclinations are the worst behavior ever -- has since settled into a mantra that site-blocking censorship of the internet is the only real way to keep the music industry profitable. All of this stems from the industry's myopic view of piracy: that it is always bad for every artist any time a music file is downloaded for free rather than purchased from iTunes or wherever. We have argued for years that this view is plainly wrong and far too simplistic, and that there is actually plenty of evidence that, for a large portion of the music industry, piracy may actually be a good thing.

Well, there has been an update to a study first publicized as a work in progress several years ago, conducted out of Queen's University and published in the journal Information Economics and Policy. Based on that study, it looks like attempts to shut down filesharing sites would not just be ineffectual, but disastrous for both the music industry as a whole and especially for new and smaller-ticket artists. The most popular artists, on the other hand, tend to be more hurt by piracy than helped. That isn't to be ignored, but we must keep in mind that the purpose of copyright law is to get more art created for the benefit of the public, and it seems obvious that the public benefits most from a larger, successful music ecosystem as opposed to simply getting more albums from the artists with the largest audiences.

The methodology in the study isn't small peanuts, either. It considered 250,000 albums across five million downloads and looked to match the pirating of those works with the effect that piracy had on the market for that music.
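The study's actual econometrics are more involved than anything that fits here, but the basic shape of this kind of analysis, relating per-album downloads to sales while letting the effect differ for top-tier artists, can be sketched on synthetic data (purely illustrative; invented numbers, not the paper's data or specification):

```python
# Purely illustrative sketch of matching downloads to sales, with
# synthetic data and a simplified model -- not the study's actual
# specification. It regresses album sales on unauthorized downloads,
# letting the effect differ for top-tier artists via an interaction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
downloads = rng.poisson(200, n).astype(float)
top_artist = rng.binomial(1, 0.1, n).astype(float)
# Assumed world for the fake data: piracy mildly helps small artists
# (discovery) and hurts big ones (substitution).
sales = (1000 + 0.5 * downloads - 1.5 * downloads * top_artist
         + 2000 * top_artist + rng.normal(0, 100, n))

X = sm.add_constant(np.column_stack([downloads, top_artist,
                                     downloads * top_artist]))
model = sm.OLS(sales, X).fit()
print(model.params)  # const, downloads, top_artist, interaction
```

In a setup like this, a positive coefficient on downloads alongside a negative interaction term would mirror the study's headline finding: piracy helps smaller artists while hurting the most popular ones.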
'Catalog Of Missing Devices' Compiles The Useful Tech Products DRM Is Preventing Us From Owning
What has DRM taken from the public? Well, mainly it's the concept of ownership. Once an item is purchased, it should be up to the customer to use it how they want to. DRM alters the terms of the deal, limiting customers' options and, quite often, routing them towards proprietary, expensive add-ons and repairs.

But the question "What would we have without DRM?" is a bit more slippery. The answers are speculative fiction. This isn't to say the answers are unimportant. It's just that it's tough to nail down conspicuous absences. The nature of DRM is that you don't notice it until it prevents you from doing something you want to do.

DRM -- and its enabler, the anti-circumvention clause of the DMCA -- ties customers to printer companies' ink. It ties Keurig coffee fans to Keurig-brand pods. It prevents farmers from repairing their machinery and prevents drivers from tinkering with their cars. It prevents the creation of backups of digital and physical media. It can even keep your cats locked out of their pricey restroom.

To better show how DRM is stifling innovation, Boing Boing's Cory Doctorow and the EFF have teamed up to produce a catalog of "missing devices": useful tech that could exist, but only without DRM.
Daily Deal: Takieso Walnut Qi Charger
Qi wireless charging is all the rage now that the new iPhones are finally compatible with it. Handcrafted from North American walnut wood, this minimalist charger is easily portable and capable of charging your smartphone fast. Your device is automatically disconnected when it is fully charged, and a smart light indicates power on, charging, and fully charged states. The Takieso Walnut Qi Charger is on sale for $34.90.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Public School Board Member Threatens Boss Of Woman Who Spoke Out Against School Book Banning
The VC Star has a slightly bizarre article about a school board trustee of the Conejo Valley Unified School District (in Southern California) named Mike Dunn, who apparently was upset about a speech given by a mother at a board meeting. That mother -- Jessica Weihe -- also blogs on the site AnonymousMommy.com (though as far as I can tell, she was not "anonymous" in that people in the community appeared to know who she was). Weihe gave a perhaps slightly rambling speech at a recent board meeting. The details appear to be somewhat specific to some district policies on handling "mature" books, but suffice it to say that it appears Dunn was arguing against certain books being on the curriculum because he felt their content was inappropriate. Among the books stirring up controversy was Sherman Alexie's quite well known book The Absolutely True Diary of a Part-Time Indian. Weihe's speech mocked Dunn for having tried to get it off the curriculum, and accused him of not having read the book and of overreacting to why it might be a problem. Here's a snippet from what she said:
Devin Nunes Releases Memo That Doesn't Show The Surveillance Abuses He Hypocritically 'Cares' About
House Intelligence Committee chairman Devin Nunes released his supposed bombshell Friday. The Nunes memo was supposed to contain info showing the FBI had engaged in a questionable, politically-motivated investigation of Trump staff. How this news was supposed to be shocking was anyone's guess. Anyone who has followed the FBI's activities since the days of J. Edgar Hoover already knows the FBI engages in questionable, politically-motivated investigations. The only new twist is the FISA court's involvement and the use of secretive surveillance powers to collect domestic communications.

The FBI responded by noting the memo [PDF] contained "material omissions of fact." What's contained in the memo likely provides rhetorical ammo to those who believe Trump and his advisors did nothing wrong during the run-up to the election. But it will only provide limited support. What's in the memo are accusations that the FBI sought (and obtained) FISA warrants to surveil one-time Trump advisor Carter Page. The FBI -- according to the memo -- used the dubious Christopher Steele dossier to buttress its allegations. It apparently continued to do so even after it knew the Steele dossier had been paid for by the Democratic National Committee.

The memo notes this interception was not performed under Title VII, which covers the recently-renewed Section 702 collection powers. This surveillance was performed under Title I -- a more "traditional" FISA process in which the government seeks probable cause-based warrants from the FISA court, much like law enforcement officers seek warrants from magistrate judges.

The memo suggests the FBI should have dropped the investigation -- or at least given the FISA court a heads-up -- once it became apparent the Steele dossier was politically compromised. But the FBI continued to ask for renewals, and these requests were approved by law enforcement officials Trump and most of the Republican party no longer care for. The list includes James Comey (fired), Andrew McCabe (resigned), Sally Yates (fired), and Rod Rosenstein (who Trump would apparently like to fire).

The memo also points out that Christopher Steele was "terminated" (as a source) by the FBI for disclosing his relationship with the agency to the press. Steele also apparently stated he was very interested in preventing Trump from winning the national election. There's also mention of a conflict of interest: a deputy attorney general who worked with those pursuing an investigation of Carter Page was married to a woman who worked for Fusion GPS, the research group paid by the DNC to dig up dirt on Trump.

This all seems very damning at first blush. The Nunes memo is the party's attempt to derail the FBI's ongoing investigation of the Trump campaign and its involvement with Russian meddling in the presidential election. But there's a lot missing from the memo. The facts are cherry-picked to present a very one-sided view of the situation.

The rebuttal letter [PDF] from Democratic legislators is similarly one-sided. But adding both together, you can almost assemble a complete picture of the FBI's actions. The rebuttal points out Christopher Steele had no idea who was funding his research beyond Fusion GPS. It also points out the dirt-digging mission was originally commissioned by the Washington Free Beacon, a right-leaning DC press entity.

It also points out something about the paperwork needed to request a FISA warrant. To secure a renewal, the FBI would have to show it had obtained evidence of value with the previous warrant.
If it can't, it's unlikely the renewal request would be approved by FBI directors and/or US attorneys general. The multiple renewals suggest the FBI had actually obtained enough evidence of Carter Page's illicit dealings with the Russians to sustain an ongoing investigation.

Beyond that, there's the fact that Devin Nunes -- despite spending days threatening to release this "damning" memo -- never bothered to view the original documents underlying his assertions of FBI bias. In an interview with Fox News after the memo's release, Nunes admitted he had not read the FBI's warrant applications. So, the assertions are being made with very limited info. Nunes apparently heard the Steele dossier was involved and that was all he needed to compile a list of reasons to fire current Trump nemesis Robert Mueller... disguised as a complaint about improper surveillance.

It's this complaint about abuse of surveillance powers that really chafes. Nunes throttled attempts at Section 702 reform last month and now wants to express his concerns that the FBI and FISA court may not be protecting Americans quite as well as they should. Marcy Wheeler has a long, righteously angry piece at Huffington Post detailing the rank hypocrisy of Nunes' self-serving memo.
Push Resumes For An EU Google Tax, With The Bulgarian Government Leading The Way
When an idea fails, legislators resurrect it. The problem must not be with the idea, they reason. It must be with the implementation. So it goes in Europe, where the Bulgarian government is trying to push an idea that has demonstrably failed elsewhere on the continent.
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, we've been running a series of posts dealing with discussion moderation, which garnered our top comments on both sides. For insightful, the first place winner is an anonymous commenter taking the opportunity to give Techdirt a tip of the hat:
This Week In Techdirt History: January 28th - February 3rd
Five Years Ago

This week in 2013, something that's now the norm was fresh and surprising: Netflix released the entire season of its new show House of Cards at once. Something less pleasant was born the same week, with the W3C's first official mention of adding DRM to HTML5. We also saw Alan Cooper sue John Steele and Prenda Law, leading to a bit of a scramble by everyone's favorite law firm. Meanwhile, this was the week that the DMCA exemption for phone unlocking was eliminated, and the legal battle over Barbie and Bratz (the subject of a recent episode of our podcast) finally came to an end.

Ten Years Ago

There was lots of copyright back-and-forth this week in 2008, with U2's manager jumping on the "make the internet pay us!" bandwagon, a fresh flare-up over the copyright status of jokes, an EU court telling ISPs they don't have to hand over downloader names, Swiss officials pushing back against the aggressive tactics of anti-piracy groups, and a judge telling the RIAA (which had recently struggled to explain exactly why copyright damages need to be higher) that it should be fined for bundling downloading lawsuits. Meanwhile, as had been expected, Swedish prosecutors caved to US pressure and took action against The Pirate Bay.

Fifteen Years Ago

This week in 2003, Kazaa pre-empted the music industry's heated race to kill it by filing a lawsuit against record labels for misusing their copyrights. Declan McCullagh was musing about the scary possibility of the DOJ going after file sharers as felons, Business Week was pushing the ol' "don't litigate, educate" line on piracy (which is half right), and record stores were trying to save their future by teaming up with digital distributors. Telemarketers were suing the FTC in an attempt to block its proposed do-not-call list, an internet cafe in the UK was found guilty of piracy, and the format war for the future of disc-bound music was raging despite nobody caring.
Hong Kong's Top Cop Wants To Make It Illegal To Insult Police Officers
The Blue Lives Matter movement has traveled overseas. Here in the US, we've seen various attempts to criminalize sassing cops, although none of those appear to be working quite as well as officers -- already protected by a raft of extra rights -- would like. Meanwhile, we had Spain lining itself up for police statesmanship by making it a criminal offense to disrespect police officers.

Over in Hong Kong, the police chief -- while still debating whether or not he should offer an apology for his officers' beating of bystanders during a 2014 pro-democracy protest -- has thrown his weight behind criminalizing insults directed at officers.
Come Witness The Commentators That Help The NFL Fool The Public About Its 'Super Bowl' Trademark Rights
The Super Bowl is here, and this Sunday many of us will bear witness to the spectacle that is million-dollar advertising spots mashed together over four hours with a little bit of football thrown in for intermissions. As we've discussed before, this orgy of revenue for the NFL is, in some part, propagated by the NFL's never-ending lies about just how much protection the trademark it holds on the term "Super Bowl" provides. While the league obviously does have some rights due to the trademark, it often enjoys playing make-believe that its rights extend to essentially complete control over the phrase on all mediums and by all persons for all commercial or private interests. This, of course, is not true, and yet a good percentage of the public believes these lies.

Why? The NFL, pantheon of sports power though it may be, is not so strong as to be able to single-handedly confuse millions of people into thinking they can't talk about a real life event whenever they want. No, the NFL has been helped along in this by members of the media who repeat these lies, often in very subtle ways. Ron Coleman of the Likelihood Of Confusion site has a nice write-up publicly shaming a number of these media members, including Lexology's Mitchell Stabbe.
International Inconsistencies In Copyright: Why It's Hard To Know What's Really Available To The Public
Today, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that are being discussed at this event. We've published a bunch of essays this week from the conference, and will continue with more next week.

Have you ever wondered why it can be hard to find out what some old paintings look like? Why there seem to be so few pictures of artistic works available from many countries even though they're filled with public sculptures and murals? Or why prices for books and movies can be so wildly different in different countries? The answer is that copyright law is different all over the world, and these differences can make figuring out what to do with these works so difficult or risky that most websites are not willing to have them around at all. This essay talks about a few of these works and why they add a major challenge to content moderation online.

To begin, Wikipedia and the Wikimedia Foundation that hosts it have a mission to host freely available educational content, which means that one of the questions that comes up for us quite often when we receive content moderation requests is whether something is truly free or not. This can come up in a bunch of different ways, and I'd like to talk about a few of them, and why they make it quite difficult to figure out what's really available to the public and what's not.

The first one is old pictures and manuscripts. It's generally accepted that if a work was published before 1923, then it's old enough that the author's rights have expired and the ability to freely copy, share, and remix the work shouldn't be limited by the law anymore. But that raises a couple of questions. First, how do you know when something was published, especially back then? There's a whole swath of old pictures and writings that were prepared before 1923 but may have never been published at all until later, which then requires figuring out a different timing scheme or figuring out when the work was published: a sometimes very difficult affair due to records lost during the World Wars and various upheavals around the world over the last century. For just one example, a dispute about an old passport photo recently came down to whether it was taken in Egypt or Syria during a time when those national borders were very fluid. If it had been taken in Egypt, it would have been given U.S. copyright and protected because it was after 1923, but if it had been taken in Syria at the time, it would not have been protected, because that country wasn't extended recognition for copyrights at the time.

A second example is works from countries with broad moral rights. All the works on Wikimedia projects that were made recently are dedicated by their authors to the public domain or licensed under free culture licenses like Creative Commons. However, these sorts of promises only work in some countries. There are international copyright treaties that cover a certain agreed-upon set of protections for every country, but many countries add additional rights on top of the treaties, such as what are called moral rights. Moral rights in many countries give the creator the power to rescind a license, and they cannot give up that power no matter how hard they try.
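To make the "old pictures" problem above concrete, here is a toy sketch of the kind of first-pass logic a platform might apply, using the simplified U.S. rules this essay describes (the pre-1923 cutoff and the country-recognition wrinkle from the passport photo example). The function and its inputs are hypothetical illustrations, not Wikimedia's actual process; real determinations turn on facts that are often unknowable.

```python
# Toy illustration of why "is this work free to host?" resists clean answers.
# The rules below are the simplified U.S. heuristics described in this 2018
# essay; a real determination depends on publication history, country of
# origin, and treaty status -- facts that are frequently lost or disputed.

def likely_us_public_domain(published_year, us_recognized_country):
    """Crude first-pass check; not legal advice."""
    if published_year is None:
        # Prepared-but-unpublished works follow a different timing scheme
        # entirely; a human has to research the publication history.
        return "unknown -- publication history required"
    if published_year < 1923:
        return "likely public domain in the U.S."
    if not us_recognized_country:
        # The Egypt-vs-Syria passport photo dispute above: if the U.S. did
        # not recognize copyrights from the country of origin at the time,
        # the work was never protected here.
        return "likely unprotected -- no U.S. copyright relations"
    return "likely still protected"

# The same photo flips outcome based on a single disputed fact:
print(likely_us_public_domain(1930, us_recognized_country=True))   # protected
print(likely_us_public_domain(1930, us_recognized_country=False))  # unprotected
```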
With moral rights, it ends up looking something like this: "I promise that you can use my work forever as long as you give me attribution, and anyone else can reuse it too, and I want this to be irrevocable so that the public can benefit without having to come back to me." And then a couple of years later, it's "oh, sorry, I've decided that I changed my mind, just forget my earlier promise." In some places that works, and because of that possibility, people can't always be sure that the creative works being offered to them are reliable.

A third problem is pictures of artwork. This one applies, though a bit differently, to both new and old works. With new photos of old works, it's a question of creativity. Copyrights are designed to reward people for their original creativity: you don't get a new "life of the author plus 70 years" of protection for making a photocopy. But in some places, they again go past the international rights agreed upon in the copyright treaties and add extra protections. In this case, many countries offer a couple of decades' worth of protection for taking a straight-on 2-D photograph of an old work of art. The Wikimedia Foundation is currently in a lawsuit about this with the Reiss Engelhorn Museum in Germany, where the museum argues that photographs on its website are copyrighted even though the only thing shown in the photo is a public domain painting, such as a portrait of Richard Wagner.

The other variation of the problem with photos of art is photographs of more recent works out in public. Did you know that in many places, if you're walking in a park and you take a snapshot with a statue in it, you're actually violating someone's copyright? This varies from country to country: some places allow you to photograph artistic buildings but not sculptures or mosaics, other places let you take photographs of anything out in public, and others prohibit photographs of anything artistic even if it's displayed in public. This issue, called freedom of panorama, is one that many Wikimedians are concerned about, and it is currently being debated in the European Parliament, but in the meantime it can lead to very confused expectations about what sorts of things can be photographed, as the answer varies depending on where you are.

The difficulty around so many of these types of works is that they put the public at risk. The works on Wikipedia, and works in the public domain or that are freely licensed more generally, are supposed to be free for everyone to use. Copyright is built on a balance that rewards authors and artists for their creativity by letting them have a monopoly on who uses their works and how they're used. But the system has become so strong that even when the monopoly has expired and the creator is long dead, or when the creator wants to give their work away for free, it's extremely difficult for the public to understand what is usable and to use it safely and freely as intended. The public always has to be worried that old records might not be quite accurate, or that creators in many places will simply change their minds no matter how many promises and assurances they provide that they want to make something available for the public good.

These kinds of difficulties are one of the reasons why the Wikimedia Foundation made the decision to defer to the volunteer editors. The Wikimedia movement consists of volunteers from all over the world, and they get to decide on the rules for each different language of Wikipedia.
This often helps to avoid conflicts, such as many languages spoken primarily in Europe choosing not to host images that might be allowed under U.S. fair use law, whereas the English-language Wikipedia does allow fair use images. It's difficult for a small company to know all the rules in hundreds of different countries, but individual volunteers from different places can often catch issues and resolve them even where the legal requirements are murky. As just one example, this has actually led Wikimedia volunteers who deal with photographs to have one of the most detailed policies for photographs of people of any website (and better than many law textbooks). In turn, volunteers handling so many of the content issues means that the Foundation is able to dedicate time from our lawyers to help clarify situations that do present a conflict, such as the Reiss Engelhorn case or the freedom of panorama issues already mentioned.

That said, even with efforts from many dedicated people around the world, issues like these international conflicts leave some amount of confusion and conflict. These issues often don't receive as much attention because they're not as large as, say, problems with pirated movies, but they present a more pernicious threat. As companies shy away from dealing with works that might be difficult to research, or where it's uncertain how the law applies, the public domain slowly shrinks over time and we are all poorer for it.

Jacob Rogers is Legal Counsel for the Wikimedia Foundation
Israeli Music Fans Sue Two New Zealanders For Convincing Lorde To Cancel Her Israeli Concert
Let's start this post off this way: the whole "BDS" movement and questions about Israel are controversial and people have very, very strong opinions. This post is not about that, and I have no interest in discussing anyone's views on Israel or the BDS movement. This post is about free speech, so if you want to whine or complain about Israel or the BDS movement, find somewhere else to do it. This is not the post for you. This post should be worth discussing on the points in the post itself, and not as part of the larger debate about Israel.

Back in December, the very popular New Zealand singer Lorde announced that she was cancelling a concert in Israel after receiving requests to do so from some of her fans who support boycotting Israel.
Bar Complaint Filed Against Lawyers Who Participated In Bogus Lawsuits Targeting Fake Defendants
The reputation management tactic of filing bogus defamation lawsuits may be slowly coming to an end, but there will be a whole lot of reputational damage to be spread among those involved by the time all is said and done.

Richart Ruddie, proprietor of Profile Defenders, filed several lawsuits in multiple states fraudulently seeking court orders for URL delistings. The lawsuits featured fake plaintiffs, fake defendants, and fake admissions of guilt from the fake defendants. Some judges issued judgments without a second thought. Others had second thoughts, but they were identical to their first one. And some found enough evidence of fraud to pass everything on to the US Attorney's office.

But Ruddie couldn't do all of this himself. He needed lawyers. And now those lawyers are facing a bar complaint for assisting Ruddie (and possibly others) in fraudulent behavior. Eugene Volokh has more details at the relocated (and paywall-free!) Volokh Conspiracy.
Daily Deal: Paww WaveSound 2.1 Low Latency Bluetooth 4.2 Over Ear Headphones
The $70 WaveSound 2.1 headphones give you the freedom to listen to your music when, where, and how you want. The built-in Bluetooth 4.2 is 250% faster and has 10x more bandwidth than Bluetooth 4.0, allowing you to connect to your devices confidently and quickly, making your audio experiences seamless. And if wired listening is more your style, you can do that, too. With 16 hours of playtime, you'll be set for most of the day.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Appeals Court Makes A Mess Of Copyright Law Concerning ISPs And Safe Harbors
We've been following the BMG v. Cox lawsuit from the very beginning, through all its very odd twists and turns, including a district court judge, Liam O'Grady, who made it quite clear that he didn't much care about the internet, and didn't see why it was a problem if people lost their internet access completely based on merely a few allegations of copyright infringement. The 4th Circuit appeals court has now overturned the lower court ruling and sent the case back to the district court for a do-over. While the initial decision was awful (as we discuss below), this new ruling makes a huge mess out of copyright law and will have serious, dangerous, and long-lasting consequences for the internet as a whole.

If you don't recall, the case involved BMG suing Cox Communications, though much of the case really hinged on the actions of another company, Rightscorp, which has been trying (and mostly failing) to build a business model around a form of mild copyright trolling. Rather than the aggressive "sue 'em and settle" strategy employed by others, Rightscorp would send DMCA takedowns to ISPs, with a settlement offer, and hope that the ISPs would pass those notices on to subscribers accused of infringing.

Cox Communications -- a decently large broadband provider -- made it quite clear to Rightscorp that it did not intend to be a part of its business model, and refused to pass on the settlement letters. Rightscorp started flooding Cox with notices... to the point that Cox decided to effectively just trash all inbound messages from Rightscorp as spam. After all this happened, Rightscorp signed BMG as a client, and then sued Cox, claiming the ISP had violated the DMCA by not kicking users off. What came out during the trial was that Cox basically had a "thirteen strike" policy (some of the earlier strikes involved stopping internet access until the user read something and clicked something -- or requiring the user to call in to Cox).

What is rarely noted, of course, is that Cox was basically one of the only ISPs to actually have any termination policy for people who used their connections for copyright infringement. Most ISPs (and most copyright lawyers not working for legacy industry interests) believed that the DMCA's requirement for a "repeat infringer policy" was not directed at access providers, but at content hosts, where the issues are much clearer. However, BMG claimed here that Cox violated the DMCA's requirement for a repeat infringer policy -- and the court agreed. Cox was partly undone by some pretty bad behavior behind the scenes that seemed to tar it as a "bad actor" and obscure the underlying copyright issues. Even more ridiculous was that Judge O'Grady later argued that Cox should pay the other side's legal fees, because even bringing up the idea that it was protected by safe harbors was "objectively unreasonable." That, itself, was crazy, since tons of copyright experts actually think Cox was correct.

On appeal, Cox raised two key issues. The main one was that O'Grady was incorrect and that the DMCA safe harbors covered Cox. The second pertained to the specific jury instructions given to the jurors in the case. The new ruling unfortunately upholds the finding that Cox is not covered by the DMCA's safe harbors, but does say that the instructions given to the jury were incorrect. Of course, it then proceeds to make a huge muddle of what copyright law says in the process.
But we'll get to that.

The Impact on Safe Harbors

Let's start with the safe harbors part of the ruling, which is what most people are focusing on. As the court notes, Cox (correctly, in my view) pointed out that even if it was subject to a repeat infringer policy, that policy should cover actual infringers, not just those accused of infringing. After all, it's not like there aren't tons upon tons of examples of false copyright infringement accusations making the rounds, and that's doubly true when it comes to trolling operations. If the rule is that people can lose all access to the internet based solely on unproven accusations of infringement, that seems like a huge problem. But, here, the court says that it's the correct way to read the statute:
Verizon Folds To Government Pressure To Blacklist Huawei Without A Shred Of Public Evidence
Earlier this month, AT&T cancelled a smartphone sales agreement with Huawei just moments before it was to be unveiled at CES. Why? Several members of the Senate and House Intelligence Committees had crafted an unpublished memo claiming that Huawei was spying for the Chinese government, and pressured both the FCC and carriers to blacklist the company. AT&T, a stalwart partner in the United States' own surveillance apparatus, was quick to comply, in part because it's attempting to get regulators to sign off on its $86 billion acquisition of media juggernaut Time Warner.

But Verizon has also now scrapped its own plans to sell the company's smartphones, based on those same ambiguous concerns:
Ohio Appeals Court Says Speed Trap Town Must Pay Back $3 Million In Unconstitutional Speed Camera Tickets
Drivers sent tickets by New Miami, Ohio speed cameras will be getting a refund. The state appeals court has upheld the ruling handed down by the lower court last spring. At stake is $3 million in fines, illegally obtained by the town.
Atari Gets The Settlement It Was Surely Fishing For Over An Homage To 'Breakout' In KitKat Commercial
As readers of this site will know, once-venerated gaming giant Atari long ago reduced itself to an intellectual property troll, mostly seeking to siphon money away from companies that actually produce things. The fall of one of gaming's historic players is both disappointing and sad, given just how much love and nostalgia there is for its classic games. It was just that nostalgia that likely led Nestle to craft an advertisement in Europe encouraging buyers of candy to "breakout" KitKats, an ad that included imagery of the candy inserted into a simulation of a game of Breakout. For this, Atari sued over both trademark and copyright infringement, claiming for the latter that the video reproduction of a mock game that kind of looks like Breakout infringed its copyright.

As we discussed in that original post, both claims are patently absurd. Nestle and Atari are not competitors, and anyone with a working frontal lobe will understand that the ad was a mere homage to a classic game made decades ago. If the products aren't competing, and if there is no real potential for public confusion, there is no trademark infringement. As for the copyright claim, the expression in the homage was markedly different from Atari's original game, and there's that little fact that Nestle didn't actually make a game to begin with. They mocked up a video. Nothing in there is copyright infringement.

It was enough that I'm certain some of our readers wondered why Atari would do something like this to begin with. The answer comes with the recent news that a settlement has been reached in the lawsuit -- and it was almost certainly that settlement Atari was fishing for all along.
California's Net Neutrality Law Takes Another Step Forward
In the wake of the FCC's repeal of federal net neutrality rules, countless states have rushed to create their own protections. Numerous states, from Rhode Island to Washington State, are considering new net neutrality legislation, while other states (like Wyoming and New York) are modifying state procurement policies to block net-neutrality-violating ISPs from securing state contracts. These states are proceeding with these efforts despite an FCC attempt to "pre-empt" (read: ban) states from stepping in and protecting consumers, something directly lobbied for by both Verizon and Comcast.

One of the two California net neutrality bills, SB-460, was passed 21-12 by the state Senate and will now head to the state Assembly:
Implementing Transparency About Content Moderation
On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays, including this one.

When people express free speech-based concerns about content removal by platforms, one type of suggestion they generally offer is: increase transparency. Tell us (on a website, or in a report, or with an informative "tombstone" left at the URL where the content used to be) details about what content was removed. This could happen in lots of different ways, voluntarily or not, by law or industry standard or social norms. The content may come down, but at least we'll have a record and some insight into what happened, at whose request, and why.

In light of public discussions about platform transparency, especially in the past year, this post offers a few practical thoughts about transparency by online UGC platforms. First, some of the challenges platforms face in figuring out how to be transparent with users and the public about their content moderation processes. Second, the industry practice of transparency reports and what might be done to make them as useful as possible.

Content Moderation Processes & Decisions

So, why not be radically transparent and say everything? Especially if you're providing a service used by a substantial chunk of the public and have nothing to hide. Just post all takedown requests in their entirety and all correspondence with people asking you to modify or remove content. The best place to start answering this is by mentioning some of the incentives a platform faces here and the legitimate reasons it might say less than everything (leaving aside self-interested reasons like avoiding outside scrutiny and saving embarrassment over shortcomings such as arguably inconsistent application of moderation rules or a deficient process for creating them).

First, transparency is sometimes in tension with the privacy of not just users of a service, but any person who winds up the subject of UGC. Just as the public, users, regulators, and academics are asking platforms to increase transparency, the same groups have made equally clear that platforms should take people's privacy rights seriously. The legal and public relations risks of sharing information in a way that abridges someone's privacy are often uncertain and potentially large. This does not mean they cannot be outweighed by transparency values, but I think in order to weigh them properly, this tension has to be acknowledged and thought through. In particular, however anonymized a given data set is, the risks of de-anonymization increase with time as better technologies come to exist. Today's anonymous data set could easily be tomorrow's repository of personally identifiable information, and platforms are acting reasonably when choosing to safeguard these future and contingent rights for people by sometimes erring on the side of opacity around anything that touches user information.

Second, in some cases, publicizing detailed information about a particular moderation decision risks maintaining or intensifying the harm that moderation was intended to stop or lessen.
If a piece of content is removed because it violates someone's privacy, then publicizing information about that takedown or redaction risks continuing the harm if the record is not carefully worded to exclude the private information. Or, in cases of harassment, it may provide information to the harasser or the public (or the harasser's followers, who might choose to join in) for that harassment to continue. In some cases, the information can be described at a sufficiently high level of generality to avoid harm (e.g., "a private person's home address was published and removed" or "pictures of a journalist's children were posted and removed"). In other cases, it may be hard or impossible (e.g., "an executive at small company X was accused of embezzling by an anonymous user"). Of course, generalizing at too high a level may frustrate those seeking greater transparency as not much better than not releasing the information at all.

Finally, in some cases publicizing the details of a moderation team's script or playbook can make the platform's rules easier to break or hack by bad faith actors. I don't think these are sufficient reasons to perpetuate existing confidentiality norms. But, if platform companies are being asked or ordered to increase the amount of public information about content moderation and plan to do so, they may as well try to proceed in a way that will account for these issues.

Transparency Reports

Short of the granular information discussed above, many UGC platforms already issue regular transparency reports. Increasing expectations or commitments about what should be included in transparency reports could wind up an important way to move confidentiality norms while also ensuring that the information released is structured and meaningful.

With some variation, I've found that the majority of UGC platform transparency reports cover information across two axes. The two main types of requests are requests to remove or alter content, and requests for information. And then, within each of those categories, whether a given request comes from a private person or a government actor. A greater push for transparency might mean adding categories to these reports with more detail about the content of requests and the procedural steps taken along the way, rather than just the usually binary output of "action taken" or "no action taken" that one finds in these reports -- such as the law or platform rule that is the basis for removal, or more detail about what relevant information was taken into account (such as "this post was especially newsworthy because it said ..." or "this person has been connected with hate speech on [other platform]"). As pressure to filter or proactively remove platform content increases from legislators in places like Europe and from Hollywood, we may want to add a category for removals that happened based on a content platform's own proactive efforts, rather than a complaint.

Nevertheless, transparency reports as they are currently done raise questions about how to meaningfully interpret them and what can be done to improve their usefulness. A key question I think we need to address moving forward: are the various platform companies' transparency reports apples-to-apples in their categories?
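As one hedged illustration of what standardized coding could look like, here is a minimal sketch of a report-entry schema along the two axes described above (request type crossed with requester), extended with a basis field, a richer action field, and a flag for proactive removals. Every field and category name is an assumption for illustration, not any platform's actual schema.

```python
# Hypothetical schema for coding moderation requests in a transparency
# report. Field and category names are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum

class RequestType(Enum):
    REMOVE_OR_ALTER = "remove_or_alter"   # takedown/alteration requests
    INFORMATION = "information"           # requests for user information

class Requester(Enum):
    PRIVATE = "private"                   # private person or company
    GOVERNMENT = "government"             # government actor

@dataclass
class CodedRequest:
    request_type: RequestType
    requester: Requester
    basis: str          # law or rule invoked, e.g. "DMCA", "TOS: harassment"
    action_taken: str   # richer than a binary "action"/"no action"
    proactive: bool     # initiated by the platform itself, not a complaint

# Example: a mixed demand letter coded as a DMCA-based removal request,
# honored only in part.
entry = CodedRequest(
    request_type=RequestType.REMOVE_OR_ALTER,
    requester=Requester.PRIVATE,
    basis="DMCA",
    action_taken="removed 1 of 2 reported posts",
    proactive=False,
)
print(entry)
```

A shared structure like this is exactly what would make cross-platform comparison possible; the hard part, as the next paragraphs discuss, is getting companies to code messy real-world requests into it consistently.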
Being able to someday answer yes would involve greater consistency in terms across the industry (e.g., using similar terms to mean similar things, like "hate speech" or "doxxing," irrespective of their potentially differing policies about those types of content).

Relatedly, is there a consistent framework for classifying and coding the requests received by each company? Doing more to articulate and standardize coding, though maybe unexciting, will be crucial infrastructure for providing meaningful classes and denominators for what types of actions people are asking platform companies to take and on what grounds. Questions here include: is there relative consistency in how they each code a particular request, or the type of action taken in response? For example, a demand email with some elements of a DMCA notice, a threat of suit based on trademark infringement, an allegation of violation of rules/TOS based on harassment, and an allegation that the poster has acted in breach of a private confidentiality agreement? What if a user modifies their content of their own volition based on a DMCA or other request? What if a DMCA notice is received for one copy of a work posted by a user account, but in investigating, a content moderator finds 10 more works that they believe should be taken down based on their subjective judgment of the existence of possible red flag knowledge?

Another question is how to ensure the universe of reporting entities is complete. Are we missing some types of companies, and as a result lacking information on what is out there? The first type that comes to mind is nominally traditional online publishers, like the New York Times or Buzzfeed, who also host substantial amounts of UGC, even if it is not their main line of business. Although these companies focus on their identity as publishers, they are also platforms for their own and others' content. (Section 3 of the Times' Terms of Service spells out its UGC policy, and Buzzfeed's Community Brand Guidelines explain things such as the fact that a post with "an overt political or commercial agenda" will likely be deleted.)

Should the Times publish a report on which comments they remove, how many, and why? Should they provide (voluntarily, by virtue of industry best practices, or by legal obligation) the same level of transparency major platforms already provide? If not, why not? (Another interesting question: based on what we've learned about the benefits of transparency into the processes by which online content is published or removed, should publisher/platforms perhaps be encouraged to also provide greater transparency into non-UGC content that is removed, altered, or never published by virtue of what has traditionally been considered editorial purview, such as a controversial story that is spiked at the last minute due to a legal threat, or factual allegations removed from a story for the same reason? And over time, we can expect that more companies may exist that cannot be strictly classified as publisher or platform, but which should nevertheless be expected to be transparent about their content practices.) Without thinking through these questions, we may lack a full data set of online expression and lose our ability to aggregate useful information about practices across types of content environments before we've started.

Alex Feerst is the Head of Legal at Medium
Court Dismisses -- For A Second Time -- Lawsuit Seeking To Hold Facebook Responsible For Acts Of Terrorism
Back in May of last year, a New York federal court tossed two lawsuits from plaintiffs attempting to hold social media companies responsible for terrorist attacks. Cohen v. Facebook and Force v. Facebook were both booted for failing to state a claim, with the court pointing out the obvious: the fact that terrorists use social media to recruit and communicate does not somehow turn social media platforms into material support for terrorism.

Both lawsuits applied novel legal theories to internet communications in hopes of dodging the obvious problems posed by Section 230 immunity. None of those were entertained by the New York court, resulting in dismissals without prejudice for both cases.

Rather than kick their case up the ladder to the Appeals Court, the Force plaintiffs tried to get a second swing in for free. The plaintiffs filed two motions -- one asking the judge to reconsider the dismissal ruling and the other for permission to file a second amended complaint.

As Eric Goldman points out on his blog, the judge's decision to address both of these filings at once makes for difficult reading. The end result is a denial of both motions, but the trip there is bumpy and somewhat incoherent.

Once the court moves past the plaintiffs' attempt to skirt Section 230 by re-imagining the lawsuit as an extraterritorial claim, it gets directly to the matter at hand: the application of Section 230 immunity to the lawsuit's claims. The plaintiffs performed a hasty re-imagining of their arguments in hopes of dodging the inevitable immunity defense, but the judge has no time for bogus arguments raised hastily in the face of dismissal.

From the decision [PDF]:
Virginia Politicians Look To Tax Speech In The Form Of Porn In The Name Of Stemming Human Trafficking
Every once in a while, you'll come across stories about one government or another looking to censor or discourage pornography online, typically through outright censorship or some sort of taxation. While most of these stories come from countries that have religious reasoning behind censorship of speech, more secular countries in Europe have also entertained the idea of a tax or license for viewing naughty things online. Occasionally, a state or local government here in America will try something similar before those efforts run face-first into the First Amendment. It should be noted, however, that any and all implementations of this type of censorship or taxation of speech have failed spectacularly, with a truly obscene amount of collateral damage as a result. Not that any of that keeps some politicians from trying, it seems.

The latest evidence of that unfortunate persistence comes from the great state of Virginia, where the General Assembly will be entertaining legislation to make the state the toll booth operator of internet porn. The bill (which you can see here) was introduced by Virginia House member David LaRock (and there's a Senate version introduced by State Senator Richard Black).
Daily Deal: Amazon Web Services Certification Training Mega Bundle
With 8 courses (50+ hours), the Amazon Web Services Certification Training Mega Bundle is your one-stop resource to learn all about cloud computing. The courses cover S3, Route 53, EC2, VPC, Lambda, and more. You will learn how cloud computing is redefining the rules of IT architecture and how to design, plan, and scale AWS Cloud implementations with the best practices recommended by Amazon. The AWS bundle is on sale for $69.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Theresa May Again Demands Tech Companies Do More To Right The World's Social Media Wrongs
In the face of "extremist" content and other internet nasties, British PM Theresa May keeps doing something. That something is telling social media companies to do something. Move fast and break speech. Nerd harder. Do whatever isn't working well already, but with more people and processing power.May has been shifting her anti-speech, anti-social media tirades towards the Orwellian in recent months. Her speeches and platform stances have tried to make direct government control of internet communications sound like a gift to the unwashed masses. May's desire to bend US social media companies to the UK's laws has been presented as nothing more than as a "balancing" of freedom of speech against some imagined right to go through life without being overly troubled by social media posts.Then there's the terrorism. Terrorists use social media platforms to connect with like-minded people. May would like this to stop. She's not sure how this should be accomplished but she's completely certain smart people at tech companies could bring an end to world terrorism with a couple of well-placed filters. So sure of this is May that she wants "extremist" content classified, located, and removed within two hours of its posting.May's crusade against logic and reality continues with her comments at the Davos Conference. Her planned speech/presentation contains more of her predictable demand that everyone who isn't a UK government agency needs to start doing things better and faster.Although she is expected to praise the potential of technology to "transform lives", she will also call on social media companies to do much more to stop allowing content that promotes terror, extremism and child abuse.
Apple, Verizon Continue to Lobby Against The Right To Repair Your Own Devices
A few years back, frustration at John Deere's draconian tractor DRM resulted in a grassroots tech movement. John Deere's decision to implement a lockdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM and the company's EULA prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair, or toying around with pirated firmware just to ensure the products they owned actually worked.

The John Deere fiasco resulted in the push for a new "right to repair" law in Nebraska. This push then quickly spread to multiple other states, driven in part by consumer repair monopolization efforts by other companies, including Apple, Sony and Microsoft. Lobbyists for these companies quickly got to work trying to claim that allowing consumers to repair products they own (or take them to third-party repair shops) would endanger public safety. Apple went so far as to argue that if Nebraska passed such a law, it would become a dangerous "mecca for hackers" and other rabble-rousers.

In the wake of Apple's recent iPhone battery PR kerfuffle (in which it admitted it throttled the performance of older iPhones to protect device integrity in the face of dwindling battery performance), longer-than-normal repair waits have resulted in renewed interest in such laws. A new bill that would make it easier for consumers to repair their own electronics or utilize third-party repair shops is quickly winding its way through the Washington state legislature. That bill would not only protect consumers' right to repair, but would also prevent the use of batteries that are difficult or impossible to replace: