|
by Leigh Beadon on (#3FTM5)
A couple of weeks ago, Mike was in Washington, DC for the State Of The Net conference, where he participated in a panel called Internet Speech: Truth, Trust, Transparency & Tribalism. For this week's podcast, we've got the audio from that conversation, with all sorts of interesting ideas about how people are dealing with fake news, trolls, propaganda and more. Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
|
Techdirt
| Link | https://www.techdirt.com/ |
| Feed | https://www.techdirt.com/techdirt_rss.xml |
| Updated | 2025-11-21 09:15 |
|
by Karl Bode on (#3FTDF)
The rise of cryptocurrency mining software like Coinhive has been a decidedly double-edged sword. While many websites have begun exploring cryptocurrency mining as a way to generate some additional revenue, several have run into problems if they fail to warn visitors that their CPU cycles are being co-opted in such a fashion. That has resulted in numerous websites like The Pirate Bay being forced to back away from the software after poor implementation (and zero transparency) resulted in frustrated users who say the software gobbled upwards of 85% of their available CPU processing power without their knowledge or consent. But websites that don't inform users this mining is happening are just one part of an emerging problem. Hackers have also taken to using malware to embed the mining software into websites whose owners aren't aware that their sites have been hijacked to make somebody else an extra buck. Politifact was one of several websites that recently had to admit its website was compromised with cryptocurrency-mining malware without its knowledge. Showtime was also forced to acknowledge (barely) that websites on two different Showtime domains had been compromised and infected with Coinhive-embedded malware. While Bloomberg this week proclaimed that governments should really get behind this whole cryptocurrency mining thing, the reality is that numerous governments already have -- just not in the way they might have intended. Security researcher Scott Helme this week discovered that more than 4,000 U.S. and UK government websites -- including the US court system website -- have been infected with cryptocurrency mining malware, a number that's sure to only balloon. As Helme notes, attackers don't even need to attack each website individually, as they've found a way to compromise shared resources like Text Help, whose modified script files were then loaded by thousands of websites at a pop:
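The usual mitigation for this kind of third-party script tampering is Subresource Integrity (SRI), which researchers like Helme have pointed to: the embedding page pins the script to a cryptographic hash, so a modified copy (say, one with a miner appended) fails the browser's check and never runs. Below is a minimal sketch of how a site operator might generate that hash; the script URL is a placeholder, not the actual compromised file.

```python
# Minimal sketch: compute a Subresource Integrity (SRI) value for a third-party
# script so a tampered copy fails to load. The URL is a hypothetical placeholder.
import base64
import hashlib
import urllib.request

SCRIPT_URL = "https://example.com/vendor/plugin.js"  # hypothetical third-party script

def sri_hash(url: str) -> str:
    """Fetch a script and return the value for its integrity= attribute."""
    body = urllib.request.urlopen(url).read()
    digest = hashlib.sha384(body).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

if __name__ == "__main__":
    value = sri_hash(SCRIPT_URL)
    # The embedding page then pins the script to this exact content:
    print(f'<script src="{SCRIPT_URL}" integrity="{value}" crossorigin="anonymous"></script>')
```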
|
|
by Mike Masnick on (#3FT8S)
Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »Last week we announced our new site EveryoneCreates.org, featuring stories from many different creators of music, books, movies and more about how important the internet and fair use have been to their creations. As we noted, the reason for the site is that the legacy copyright gatekeepers at the MPAA and the RIAA have been using the Trump-requested NAFTA renegotiations to try to undermine both fair use and internet safe harbors by positing a totally false narrative that the internet has somehow "harmed" content creators.Yet, as we know, and as the stories from various artists show, nothing is further from the truth. For most artists and content creators, the internet has been a huge boon. It has helped them create new art, share it and distribute it to other people, build a fan base and connect with them, and make money selling either their work or related products and services. As we've discussed before, in the past, for most artists, if you did not find a giant gatekeeper to take you on, you were completely out of the market. There was very little "long tail" to be found in most creative industries, because you either were "chosen" by a gatekeeper or you went home and did something else. But the internet has changed that. It has allowed people to go directly to their audiences, or to partner with platforms that help anyone create, distribute, promote and monetize. Indeed, the internet has undoubtedly helped everyone reading this to create art -- whether for profit or just for fun. And if that's the case with you, please share your story.But it is worth taking a step back and asking an even larger question: how the hell did we get here? How did we get to the point that the MPAA and the RIAA are using NAFTA negotiations to try to undermine the internet. Rest assured: there's a long, long history at play here, and it's important to learn about it. The idea that you can or should regulate the internet or intellectual property in trade agreements should seem strange to most people -- especially as most trade agreements these days are about increasing free trade by removing barriers to trade, and copyright by its very nature is mercantile-style trade protectionism that places artificial limits and costs on trade that might otherwise be cheaper.An excellent history on this topic comes from the aptly named 2002 book Information Feudalism: Who Owns the Knowledge Economy by Peter Drahos and John Braithweaite. It tells the story of how a concerted effort by legacy copyright maximalist organizations laid the groundwork for making sure that copyrights and patents were always included in trade agreements, by getting them in as a key part of the World Trade Organization and by the creation of TRIPS -- Trade-Related Aspects of Intellectual Property Rights. The book details how the legacy industries turned "intellectual property" from a question of benefiting the public to a solely commercial arena of corporate ownership and trade.Once that was in place, these same industries wasted little time in exploiting the reframing of issues around copyright and patents. Famously, the DMCA itself was created in this manner. The record labels and movie studios had a friend in the Clinton White House in Bruce Lehman, who wrote a white paper in 1995 requesting draconian changes to copyright law targeting the internet. However, he found little support for it in Congress. 
Five years ago, Lehman himself admitted that when Congress refused to act he did "an end-run around Congress" by going to Geneva and pushing for a trade agreement via the World Intellectual Property Organization (WIPO) which required DMCA-like copyright rules.With that treaty in hand, Lehman and his Hollywood friends came back to Congress, insisting that our "international obligations" now required Congress to create and pass the DMCA, or we'd suddenly face all sorts of trade and diplomatic problems for failing to live up to those "international obligations" that they themselves had put into the trade agreement. Indeed, ever since then, nearly every international trade agreement has included some crazy provisions related to copyright and patents and other IP rights -- all designed to effectively launder these laws through the highly opaque international trade negotiation process, and then insist that legislatures in various countries simply must ratchet up their laws to meet those obligations.Given all that, there's at least some irony in the fact that these same groups that forced the DMCA on Congress through an international trade agreement back in the mid-1990s are now trying to use a different trade agreement 20 years later to force changes to that very same law (and others). Once again, the process is opaque. And once again, the industry is well connected and represented on a variety of the "Industry Trade Advisory Committees" (ITACs), giving them much greater access to the details of the negotiations while the public is kept in the dark.But the history here is clear. Moving copyright into trade agreements was a purposeful move, pushed for by legacy industries so they could promote their favored protectionist laws around the globe, in part by moving them away from being designed for the public's benefit and towards a world in which information and knowledge was considered to be privatized, owned, and locked up by default. It ignored the fact that, often, the public can benefit the most when information is open and widely shared. And, decades later, we're still dealing with the fallout from these bad decisions.And that's why it's so important for policy makers to understand that it's complete hogwash to argue that the RIAA and MPAA are "representing artists" in trying to undermine the internet this way. Most artists recognize that the internet and various platforms are a key part of their ability to create, distribute, share, and support their artwork these days -- and they are not being represented at the NAFTA negotiating table.Share your story at EveryoneCreates.org to let policymakers know how important an open internet and fair use is to your own creativity.
|
|
by Daily Deal on (#3FT8T)
Although it can be confusing and overwhelming, it's absolutely essential that you have at least a basic knowledge of finance. Whether you're pursuing a career in the finance industry or you just need a solid refresher on important concepts, the eduCBA Finance and Investments Bundle can help you out. With access to 700+ courses, you'll develop an understanding of investment banking, financial modeling, project finance, private equity, accounting, and more. This bundle is on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Tim Cushing on (#3FT1S)
In recent months, both Deputy Attorney General Rod Rosenstein and FBI Director Christopher Wray have been calling for holes in encryption that law enforcement can drive a warrant through. Neither has any idea how this can be accomplished, but both are reasonably sure tech companies can figure it out for them. And if some sort of key escrow makes encryption less secure than it is now, so be it. Whatever minimal gains in access law enforcement obtains will apparently offset the damage done by key leaks or criminal exploitation of a deliberately-weakened system. Cryptography expert Riana Pfefferkorn has released a white paper [PDF] examining the feasibility of the vague requests made by Rosenstein and Wray. Their preferred term is "responsible encryption" -- a term that allows them to step around landmines like "encryption backdoors" or "we're making encryption worse for everyone!" Her paper shows "responsible encryption" is anything but. And, even if implemented, it will result in far less access (and far more nefarious exploitation) than Rosenstein and Wray think. The first thing the paper does is try to pin down exactly what it is these two officials want -- easier said than done, because neither official has the technical chops to concisely describe their preferred solutions. Nor do they have any technical experts on board to help guide them to their envisioned solution. (The latter is easily explained by the fact that no expert on cryptography has ever promoted the idea that encryption can remain secure after drilling holes in it at the request of law enforcement.) If you're going to respond to a terrible idea like "responsible encryption," you have to start somewhere. Pfefferkorn starts with an attempt to wrangle vague law enforcement official statements into a usable framework for a reality-based argument.
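To make the "key escrow" idea concrete: proposals of this kind generally boil down to wrapping each message key not only for the intended recipient but also for an escrow holder. The toy sketch below (using the third-party cryptography package; the names and structure are illustrative assumptions, not any official's actual scheme) shows why that extra wrapped copy becomes a single point of failure -- whoever holds, or steals, the escrow key can read everything.

```python
# Toy illustration of key escrow, not a real protocol: every message key is wrapped
# for the recipient AND for an escrow holder, so compromise of the one escrow key
# exposes all traffic. Requires the third-party 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

escrow_key = AESGCM.generate_key(bit_length=256)      # held by provider/government
recipient_key = AESGCM.generate_key(bit_length=256)   # held by the actual recipient

def send(plaintext: bytes):
    message_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(message_key).encrypt(nonce, plaintext, None)
    # Wrap the message key twice -- the second copy is the "responsible encryption" hole.
    wrapped_for_recipient = AESGCM(recipient_key).encrypt(nonce, message_key, None)
    wrapped_for_escrow = AESGCM(escrow_key).encrypt(nonce, message_key, None)
    return nonce, ciphertext, wrapped_for_recipient, wrapped_for_escrow

# Anyone who obtains escrow_key can recover the message key and decrypt the traffic:
nonce, ct, _, wrapped_escrow = send(b"meet at noon")
stolen_message_key = AESGCM(escrow_key).decrypt(nonce, wrapped_escrow, None)
print(AESGCM(stolen_message_key).decrypt(nonce, ct, None))
```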
|
|
by Karl Bode on (#3FSH9)
Given Verizon's long-standing animosity to net neutrality (and openness and healthy competition in general), the company's acquisition of Tumblr created some understandable tension. Tumblr has been on the front lines of net neutrality support since around 2014 or so, with CEO David Karp stating in 2015 that the service wouldn't exist without net neutrality:
|
|
by Tim Cushing on (#3FS4E)
Digital cameras can store a wealth of personal information and yet they're treated as unworthy of extra protection -- both by courts and the camera makers themselves. The encryption that comes baked in on cellphones hasn't even been offered as an option on cameras, despite camera owners being just as interested in protecting their private data as cellphone users are. The Freedom of the Press Foundation sent a letter to major camera manufacturers in December 2016, letting them know filmmakers and journalists would appreciate a little assistance keeping their data out of governments' hands.
|
|
by Timothy Geigner on (#3FRF6)
It's been a minute since we've had to cover some trademark nonsense in the beer industry. In fact, several recent stories have actually represented what might be mistaken for a clapback on aggressive trademark protectionism in the alcohol space. But, like all great things, it just couldn't last. The specific tomfoolery that has brought reality crashing down on us once again comes out of Iowa, where Confluence Brewing has filed a trademark suit against Confluence On 3rd, which is an apartment complex that does not serve or make beer.
|
|
by Cathy Gellis on (#3FR45)
With the event at Santa Clara earlier this month, and the companion essays published here, we've been talking a lot lately about how platforms moderate content. It can be a challenging task for a platform to figure out how to balance dealing with the sometimes troubling content it can find itself intermediating on the one hand and free speech concerns on the other. But at least, thanks to Section 230, platforms have been free to do the best they could to manage these competing interests. However you may think they make these decisions now, they would not come out any better without that statutory protection insulating them from legal consequence if they did not opt to remove absolutely everything that could tempt trouble. If they had to contend with the specter of liability in making these decisions it would inevitably cause platforms to play a much more censoring role at the expense of legitimate user speech.Fearing such a result is why the Copia Institute filed an amicus brief at the Ninth Circuit last year in Fields v. Twitter, one of the many "how dare you let terrorists use the Internet" cases that keep getting filed against Internet platforms. While it's problematic that they keep getting filed, they have fortunately not tended to get very far. I say "fortunately," because although it is terrible what has happened to the victims of these attacks, if platforms could be liable for what terrorists do it would end up chilling platforms' ability to intermediate any non-terrorist speech. Thus we, along with the EFF and the Internet Association (representing many of the bigger Internet platforms), had all filed briefs urging the Ninth Circuit to find, as the lower courts have tended to, that Section 230 insulates platforms from these types of lawsuits.A few weeks ago the Ninth Circuit issued its decision. The good news is that this decision affirms that the end has been reached in this particular case and hopefully will deter future ones. However the court did not base its reasoning on the existence of Section 230. While somewhat disappointing because we saw this case as an important opportunity to buttress Section 230's critical statutory protection, by not speaking to it at all it also didn't undermine it, and the fact the court ruled this way isn't actually bad. By focusing instead on the language of the Anti-Terrorism Act itself (this is the statute barring the material support of terrorists), it was still able to lessen the specter of legal liability that would otherwise chill platforms and force them to censor more speech.In fact, it may even be better that the court ruled this way. The result is not fundamentally different than what a decision based on Section 230 would have led to: like with the ATA, which the court found would have required some direct furtherance by the platform of the terrorist act, so would Section 230 have required the platform's direct interaction with the creation of user content furthering the act in order for the platform to potentially be liable for its consequences. But the more work Section 230 does to protect platforms legally, the more annoyed people seem to get at it politically. 
So by not being relevant to the adjudication of these sorts of tragic cases it won't throw more fuel on the political fire seeking to undermine the important speech-protective work Section 230 does, and then it hopefully will remain safely on the books for the next time we need it.[Side note: the Ninth Circuit originally issued the decision on January 31, but then on 2/2 released an updated version correcting a minor typographical error. The version linked here is the latest and greatest.]
|
|
by Timothy Geigner on (#3FQVH)
We should all know by now that Facebook's record on handling copyright takedown requests is... not great. Like far too many internet platforms these days, the site typically puts its thumbs heavily on the scales such that the everyday user gets far less preference than large purported rights holders. I say "purported" because, of course, many bogus takedown requests get issued all the time. It's one of the reasons that relying on these platforms, when they have shown no willingness to have any sort of spine on copyright matters, is such a mistake. But few cases are as egregious as that of Leo Saldanha, a well-known environmental activist in India. When I tell you that Saldanha had a Facebook post taken down over a copyright notice, you must certainly be thinking that it had something to do with environmental activism. Nope! Actually, Saldanha wrote an all-text mini-review of an Indian film, Padmaavat, which was taken down after the distributor for the film claimed the post infringed on its copyrights. Here is the entirety of his post that was taken down.
|
|
by Karl Bode on (#3FQMP)
By now it has been pretty well established that the security and privacy of most "internet of things" devices is decidedly half-assed. Companies are so eager to cash in on the IOT craze that nobody wants to take responsibility for their decision to forget basic security and privacy standards. As a result, we've now got millions of new attack vectors being introduced daily, including easily-hacked "smart" kettles, door locks, refrigerators, power outlets, Barbie dolls, and more. Security experts have warned the check for this dysfunction is coming due, and it could be disastrous. Smart televisions have long been part of this conversation, where security standards and privacy have also taken a back seat to blind gee whizzery. Numerous set vendors have already been caught hoovering up private conversations or transmitting private user data unencrypted to the cloud. One study last year surmised that around 90% of smart televisions can be hacked remotely, something intelligence agencies, private contractors and other hackers are clearly eager to take full advantage of. Consumer Reports this week released a study suggesting that things aren't really improving. The outfit, which is working to expand inclusion of privacy and security in product reviews, studied numerous streaming devices and smart TVs from numerous vendors. What they found is more of the same: companies that don't clearly disclose what consumer data is being collected and sold, aren't adequately encrypting the data they collect, and still don't seem to care that their devices are filled with security holes leaving their customers open to attack. Consumer Reports was quick to highlight Roku's many smart TVs and streaming devices, and the company's failure to address an unsecured API vulnerability that could allow an attacker access to smart televisions operating on your home network. This is one of several problems that has been bouncing around since at least 2015, notes the report:
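For context, the unsecured interface at issue appears to be Roku's documented External Control Protocol, an HTTP API on port 8060 of the device that, as shipped, accepts commands without any authentication. A minimal sketch (the IP address is a placeholder) of what "anyone on your home network can drive the TV" looks like in practice:

```python
# Minimal sketch of why an unauthenticated local API is a problem: Roku devices
# expose an HTTP control interface on port 8060, and if it accepts commands with
# no credentials, anything on the same network can drive the set.
import urllib.request

ROKU_IP = "192.168.1.50"  # hypothetical address of a TV on the local network
BASE = f"http://{ROKU_IP}:8060"

def keypress(key: str) -> int:
    """Send a remote-control keypress (e.g. 'Home', 'VolumeUp') with no authentication."""
    req = urllib.request.Request(f"{BASE}/keypress/{key}", data=b"", method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # 200 means the set accepted a command from an unauthenticated caller

if __name__ == "__main__":
    print(keypress("Home"))
```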
|
|
by Tim Cushing on (#3FQGD)
Eric Goldman has come across an amazing pro se lawsuit [PDF] being brought by Nicholas C. Georgalis, an aggrieved social media user who believes he's owed an open platform in perpetuity, no matter what awful things he dumps onto service providers' pages. Oh, and he wants Section 230 immunity declared unconstitutional.Georgalis -- who sidelines as a "professional training professionals" when not filing stupid lawsuits -- is suing Facebook for periodically placing him in social media purgatory after removing posts of his. The lawsuit is heady stuff. And by "heady stuff," I mean we're going to be dealing with a lot of arguments about "sovereign rights" and "common law" and other related asshattery.Here's the opening. And it only gets better/worse from there:
|
|
by Daily Deal on (#3FQDT)
Give your IT career a boost with the Complete 2018 CompTIA Certification Training Bundle. 14 courses cover the most common hardware and software technologies in business, and the skills necessary to support complex IT infrastructures. The courses are designed to help you prepare to sit the various CompTIA certification exams. The bundle is on sale for $59. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Mike Masnick on (#3FQ84)
Back in December, right before the Waymo/Uber trial was supposed to begin (before it got delayed due to an unexpected bombshell about withholding evidence that... never actually came up at the trial), I had a discussion with another reporter about the case, in which we each expressed our surprise that a settlement hadn't been worked out before going to trial. It seemed as though part of the case was really about the two companies disliking each other, rather than there being a really strong legal case. A year ago, when the case was filed, I expressed disappointment at seeing Google filing this kind of lawsuit. My concern was mainly over the patent part of the case (which was dropped pretty early on), and the fact that Google, historically, had shied away from suing competitors over patents, tending to mostly use them defensively. But I had concerns about the "trade secrets" parts of the case as well. While there does seem to be fairly clear evidence that Anthony Levandowski -- the ex-Google employee at the heart of the discussion -- did some sketchy things in the process of leaving Google, starting Otto, and quickly selling Otto to Uber, the case still felt a lot like a backdoor attempt to hold back employee mobility. As we've discussed for many years, a huge part of the reason for the success of Silicon Valley in dominating the innovation world has to do with the ease of employee mobility. Repeated studies have shown that the fact that employees can switch jobs easily, or start their own companies easily, is a key factor in driving innovation forward. It's the sharing and interplay of ideas that allows the entire industry to tackle big problems. Individual firms may compete around those big breakthroughs, but it's the combined knowledge, ideas, and perspective sharing that results in the big breakthroughs. And even though that's widely known, tech companies have an unfortunate history of trying to stop employees from going to competitors. While non-competes have been ruled out in California, a few years back there was a big scandal over tech companies having illegal handshake agreements not to poach employees from one another. It was a good thing to see the companies fined for such practices. However, the latest move is to use "trade secrets" claims as a way to effectively get the same thing done. The mere threat of lawsuits can stop companies from hiring employees, and can limit an employee's ability to find a new job somewhere else. That should concern us all. However, in this lawsuit, everything was turned a bit upside down. Part of it was that there did appear to be some outrageous behavior by Levandowski. Part of it was that, frankly, there are few companies out there disliked as much as Uber. It does seem that if it were almost any other company on the planet, many more people would have been rooting against Google as the big incumbent suing a smaller competitor. But, in this case, many, many people seemed to be rooting for Google out of a general dislike of Uber itself. My own fear was that this general idea of "Uber = bad" combined with "Levandowski doing sketchy things" could lead to a bad ruling which would then be used to limit employee mobility in much more sympathetic settings. Thankfully, that seems unlikely to happen. As Sarah Jeong (whose coverage of this case was absolutely worth following) noted, despite all the rhetoric, it wasn't at all clear that Waymo proved its case.
Lots of people wanted Google/Waymo to win for emotional reasons, but the legal evidence wasn't clearly there. And now the case is over. As the trial was set to continue Friday morning, it was announced that the two parties had reached a settlement, in which Uber basically hands over a small chunk of equity to Waymo (less than Waymo first tried to get, but still significant). As Jeong notes in another article, both sides had ample reasons to settle -- but the best reason of all to settle is so that they can focus on competing in the market rather than in the courtroom, and on not setting a bad and dangerous precedent concerning employee mobility in an industry where that's vital.
|
|
by Karl Bode on (#3FPPW)
You might recall that just a few years ago, HBO had to be dragged kicking and screaming into the modern era. For years the company refused to offer a standalone streaming TV service, worried that it would jeopardize the company's cozy promotional relationship with existing cable providers (who often all but give away the channel in promotions). As recently as 2013, Time Warner CEO Jeff Bewkes was claiming that such an offering would make "no economic sense." Why? Bewkes was worried that offering a standalone option would upset cable partners. At the time, those partners were already offering an HBO streaming app named HBO Go, but only if you signed up for traditional TV. This was part of the industry's walled garden "TV Everywhere" initiative, a misguided attempt at stopping cord cutters by only giving them innovative streaming services -- if they signed up for bloated, traditional television bundles. Bewkes was clearly worried at the time that being too damn innovative would upset industry executives and skew the company's balance sheets:
|
|
by Tim Cushing on (#3FPAT)
For years, Manhattan DA Cy Vance has been warning us about the coming criminal apocalypse spurred on by cellphone encryption. "Evil geniuses" Apple introduced default encryption in a move likely meant to satiate lawmakers hollering about phone theft and do-nothing tech companies. In return, DA Cy Vance (and consecutive FBI directors) turned on Apple, calling device encryption a criminal's best friend.Vance still makes annual pitches for law enforcement-friendly encryption -- something that means either backdoors or encryption so weak it can be cracked immediately. Both ideas would also be criminal-friendly, but Vance is fine with sacrificing personal security for law enforcement access. Frequently, these pitches are accompanied with piles of uncracked cellphones -- a gesture meant to wow journalists but ultimately indicative of nothing more than how much the NYPD can store in its evidence room. (How many are linked to active investigations? How many investigations continued to convictions without cellphone evidence? Were contempt charges ever considered to motivate cellphone owners into unlocking phones? So many questions. Absolutely zero answers.)Will Vance be changing his pitch in the near future? Will he want weakened encryption safeguarding the NYPD's new tools? I guess we'll wait and see. (h/t Robyn Greene)
|
|
by Leigh Beadon on (#3FN4J)
This week, our first place winner on the insightful side comes in response to the FCC's refusal to release certain records to a FOIA request. David noted that their reason — "to prevent harm to the agency" — was a big problem:
|
|
by Leigh Beadon on (#3FK92)
Five Years Ago
This week in 2013, the EU was taking a worryingly restrictive approach to trying to fix copyright licensing, France's Hadopi was trying to get the national library to use more DRM, and Japan was planning to seed P2P networks with fake files containing copyright warnings. The UK, on the other hand, rejected plans to create a new IP Czar, though a new copyright research center seeking to restore some balance to the overall debate was facing heavy opposition right out the gate. This was also the week that we wrote about the curious privacy claims about tweets from an investigative journalist named Teri Buhl, which quickly prompted a largely confused response and, soon afterwards, threats of a lawsuit.
Ten Years Ago
This week in 2008, the recording industry was continuing its attempts to sue Baidu and floating fun ideas like building copyright filters into antivirus software, while we were taking a look at the morass of legacy royalty agreements holding back the industry's attempts at innovation. A Danish court told an ISP it had to block the Pirate Bay, leading the ISP to ask for clarification while it considered fighting back. And Microsoft was doing some scaremongering in Canada in pursuit of stronger copyright laws.
Fifteen Years Ago
This week in 2003, Germany's patent office was seeking a copyright levy on all PCs, while the EU was mercifully pushing back on attempts to treat more infringement as criminal. One record label executive was telling the industry it had to embrace file sharing or die, but the company line was still the language of moral panic. Speaking of which, in an interview in the Harvard Political Review, Jack Valenti was asked about his infamous "Boston strangler" warning about VCRs -- and proceeded to tell a bunch of lies to claim his warning was in fact apt.
|
|
by Mike Masnick on (#3FHYP)
It is something of an unfortunate Techdirt tradition that every time the Olympics rolls around, we are alerted to some more nonsense by the organizations that put on the event -- mainly the International Olympic Committee (IOC) -- going out of their way to be completely censorial in the most obnoxious ways possible. And, even worse, watching as various governments and organizations bend to the IOC's will on no legal basis at all. In the past, this has included the IOC's ridiculous insistence on extra trademark rights that are not based on any actual laws. But, in the age of social media it's gotten even worse. The Olympics and Twitter have a very questionable relationship as the company Twitter has been all too willing to censor content on behalf of the Olympics, while the Olympic committees, such as the USOC, continue to believe merely mentioning the Olympics is magically trademark infringement.So, it's only fitting that my first alert to the news that the Olympics are happening again was hearing how Washington Post reporter Ann Fifield, who covers North Korea for the paper, had her video of the unified Korean team taken off Twitter based on a bogus complaint by the IOC:
|
|
by Gus Rossi on (#3FHMD)
Online platforms have enabled an explosion of creativity -- but the laws that make this possible are under attack in NAFTA negotiations. We recently launched EveryoneCreates.org to share the stories of artists and creators who have been empowered by the internet. This guest post from Public Knowledge's Gus Rossi explores what's at stake. In the past few weeks, we at Public Knowledge have been talking with decision-makers on Capitol Hill about NAFTA. We wanted to educate ourselves on the negotiation process for this vital trade agreement, and fairly counsel lawmakers interested in its effects on consumer protection. And we discovered a thing or two in this process. It won't surprise anyone that we don't always agree with lobbyists for the big entertainment companies when it comes to creating a balanced copyright system for internet users. But some of the ideas these groups are advancing are wildly misleading, brutally dishonest, and even dangerous to democracy. We wanted to share the two wildest ideas the entertainment industries are proposing in the new NAFTA, so you can help us set the record straight before it's too late:
1) Safe harbors enable child pornography and human trafficking.
Outside specialized circles, common wisdom is that "safe harbors" are free get-out-of-jail cards that internet intermediaries like Facebook can use to avoid all responsibility for anything that internet users say or do in their services. Leveraging this fallacy, entertainment industry lobbyists are arguing that safe harbors facilitate child pornography and human trafficking. Therefore, the argument follows, NAFTA should not promote safe harbors. This is highly misleading. Safe harbors are simply legal provisions that exempt internet intermediaries such as YouTube or Twitter, and broadband providers such as Comcast or AT&T, from liability for the infringing actions of their users under certain specific circumstances. Without safe harbors, internet intermediaries would be obligated to censor and control everything their users do on their platforms, as they would be directly liable for it. Everything from social media, to internet search engines, to comments sections in newspapers, would be highly restricted without some limitations on intermediary liability. The Digital Millennium Copyright Act (DMCA) and Section 230 of the Communications Decency Act (CDA 230) establish the two most important limitations for online intermediaries in US law. According to the DMCA, internet access providers (such as Comcast, AT&T, and Verizon) are not liable for the alleged copyright infringement of users on their networks, so long as they maintain a policy of terminating repeat infringers. Content hosts (such as blogs, image-hosting sites, or social media platforms), on the other hand, have to remove material if the copyright holder sends a takedown notice of infringement. CDA 230 says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them directly responsible for what others say and do. The relevant safe harbor for the interests of the entertainment industries is the DMCA, not CDA 230. CDA 230 specifically excludes copyright from its umbrella. And the DMCA is exclusively about copyright.
It is incredibly dishonest and shallow for these lobbyists to use the specter of child abuse to drum up support for their position on copyright in NAFTA. No one should try to obfuscate a complicated policy discussion by accusing their opponents of promoting child sex trafficking.
2) Exceptions and limitations to copyright are unnecessary in trade agreements.
According to none other than the World Intellectual Property Organization, exceptions and limitations to copyright -- such as fair use -- exist "[i]n order to maintain an appropriate balance between the interests of rights holders and users of protected works, [allowing] cases in which protected works may be used without the authorization of the rights holder and with or without payment of compensation." Without exceptions and limitations, everything from using a news clip for political parody, to sharing a link to a news article in social media, to discussing or commenting on just about any work of art or scholarship -- all could constitute copyright infringement. Yet the entertainment industries are arguing that exceptions and limitations are outdated and unnecessary in trade agreements. They say that copyright holders should be protected from piracy and unlawful use of their works, claiming that any exceptions and limitations are a barrier to the protection of American artists. This is also wildly inaccurate. American artists and creators remix, reuse, and draw inspiration from copyrighted works every single day. If our trade partners don't adopt exceptions and limitations to copyright, then these creators could be subject to liability when exporting their work to foreign countries. Exceptions and limitations to copyright are necessary both in the US and elsewhere. Our copyright system simply wouldn't work without them, especially in the digital age.
Conclusion: We need to set the record straight.
Given its political and economic importance, NAFTA could be the standard for future American-sponsored free trade agreements. But NAFTA could have dramatic and tangible domestic consequences if it undermines safe harbors and exceptions and limitations to copyright. In the next policy debate around copyright infringement or intermediary liability, the entertainment industries will point to NAFTA as an example of the US Government's stated policy and where the world is moving. Furthermore, these lobbyists will have already convinced many on Capitol Hill that safe harbors enable child abuse and that fair use is unnecessary. The entertainment industries know how to walk through the corridors of power day after day -- they've been doing so for well over a century. It's not too late to fight back, set the record straight, and defend a balanced approach to copyright and consumer protections in NAFTA. You can start by contacting your representative. But the clock is ticking. Join Public Knowledge in the fight to keep the internet open for everyone. Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »
|
|
by Colin Sullivan on (#3FHC4)
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event. This is the last in the series for now. We hope you enjoyed them. Patreon occupies a unique space among platforms with user generated content, which allows us to take a less automated approach to content moderation. As a membership platform that makes it easy for creators to get paid directly by their fans, we both host user generated content and act as a payment provider for creators. As a result, Patreon must have a higher bar for moderation and removal. It is not just individual content that is at risk, but potentially the creator's entire source of income. Our goal is to have a moderation and removal process that feels like working with an enterprise SaaS provider that powers your business, instead of a distrustful content hosting platform. This is a challenge on a platform with no vetting before a creator is able to set up a page and with a large number of active creators. To achieve this goal, we treat our creators like partners and work through a series of escalating steps as part of our moderation and removal process.
Patreon's Moderation and Removal Process
We want to give creators on Patreon a kinder moderation experience, so our first step is to send them a personalized email to let them know that their content has violated our guidelines. This initial contact is primarily to establish a line of communication, educate the creator on guidelines that they may not have known about, and give them a sense of agency in dealing with our guidelines. The vast majority of the time this process results in a mutually beneficial outcome, as the creator wants to continue receiving their funding and we want to continue working with them. We sometimes even use this approach before a creator has violated our guidelines if we see them posting content or exhibiting behaviors that are likely to result in a violation. This early outreach helps to educate creators before it becomes a problem. When specific content poses an extreme risk, or when previous conversations fail to achieve the desired outcome, we then proceed to suspension. Our suspension state removes the page from public view and pauses all payment processing. It still allows the creator to log in to their page to make changes. The purpose of this feature is to give creators agency, because they can choose how to edit their pages to become compliant. We've heard from creators about how other moderation and removal processes are impersonal and inflexible. We want them to have the opposite experience when working with our team at Patreon. Creators are typically understanding of the requirement to change or remove specific content, but want to have control over how it is done and be part of the process. By disabling public access to the page we remove the risk the content poses to Patreon, and then allow the creators to control the moderation and removal process. We can be clear with creators what steps they need to take for the suspension to be lifted, but allow the creator to retain their agency. Sometimes we are forced to remove a page, cutting off funding of a creator. Typically this is reserved for the most egregious content risks or when we see repeated re-offense. Even in these situations, we provide a path forward for the creator by allowing them to create a new page.
We give the creator a list of their patrons' emails and offer them the opportunity to start fresh. This gives creators the opportunity to create a page within our guidelines, but resets their page and their relationship with patrons. Permanent bans for individuals are the final possible step of this process, and the only bans we have issued so far have been for extreme situations where the creator's past behavior is a permanent risk, such as creators convicted of serious crimes.
How Will it Work at Scale?
Admittedly, Patreon has some unique advantages as a platform that allow us to spend much more time on our moderation and removal process than most platforms can on a per-user basis. The first is that the value to the platform of each new user on a content hosting platform run by ads is lower compared to the value of each new Patreon creator with subscription payments. In fact the controversy of any individual creator is often a function of the amount of income they are making. If a creator isn't making much money on Patreon they represent a lower risk. It is often only when that creator's income becomes more significant that concerned individuals will report it and then we investigate to see whether it complies with our guidelines. The second is that Patreon isn't a discovery platform. Discovery platforms solve the problem of zero to fan, of introducing a creator's work to the world and getting fans as a result. Patreon solves the problem of fan to patron, of getting those fans engaged and willing to support a creator with direct-to-creator contributions, rather than generating user ad impressions that send a creator pennies from an ad-revenue share. This lack of focus on discovery means two things. First, we don't promote people landing on creator pages they don't already know about, massively de-risking the possibility that someone who is offended by any particular piece of content will be exposed to it. This means everyone landing on a Patreon page has generally already self-selected to want to go there. Second, much of the actual content on Patreon lives behind a paywall, dramatically reducing the possibility of the content going viral, and again reinforcing the self-selective nature of the people viewing that content on Patreon. These advantages mean we can continue to build and improve our moderation and removal process in a way that will scale without losing our human touch. We will always prioritize making sure creators can trust Patreon to run their creative business and have agency in the moderation and removal process.
Colin Sullivan is Head of Legal for Patreon
|
|
by Zach Graves on (#3FH5J)
Most people don't understand the nuances of artificial intelligence (AI), but at some level they comprehend that it'll be big, transformative and cause disruptions across multiple sectors. And even if AI proliferation won't lead to a robot uprising, Americans are worried about how AI and automation will affect their livelihoods. Recognizing this anxiety, our policymakers have increasingly turned their attention to the subject. In the 115th Congress, there have already been more mentions of "artificial intelligence" in proposed legislation and in the Congressional Record than ever before. While not everyone agrees on how we should approach AI regulation, one approach that has gained considerable interest is augmenting the federal government's expertise and capacity to tackle the issue. In particular, Sen. Brian Schatz has called for a new commission on AI; and Sen. Maria Cantwell has introduced legislation setting up a new committee within the Department of Commerce to study and report on the policy implications of AI. This latter bill, the "FUTURE of Artificial Intelligence Act" (S.2217/H.4625), sets forth a bipartisan proposal that seems to be gaining some traction. While the bill's sponsors should be commended for taking a moderate approach in the face of growing populist anxiety, it's not clear that the proposed advisory committee would be particularly effective at all it sets out to do. One problem with the bill is how it sets the definition of AI as a regulatory subject. For most of us, it's hard to articulate precisely what we mean when we talk about AI. The term "AI" can describe a sophisticated program like Apple's Siri, but it can also refer to Microsoft's Clippy, or pretty much any kind of computer software. It turns out that AI is a difficult thing to define, even for experts. Some even argue that it's a meaningless buzzword. While this is a fine debate to have in the academy, prematurely enshrining a definition in a statute -- as this bill does -- is likely to be the basis for future policy (indeed, another recent bill offers a totally different definition). Down the road, this could lead to confusion and misapplication of AI regulations. This provision also seems unnecessary, since the committee is empowered to change the definition for its own use. The committee's stated goals are also overly ambitious. In the course of a year and a half, it would set out to "study and assess" over a dozen different technical issues, from economic investment, to worker displacement, to privacy, to government use and adoption of AI (although, notably, not defense or cyber issues). These are all important issues. However, the expertise required to adequately deal with these subjects is likely beyond the capabilities of 19 voting members of the committee, which includes only five academics. While the committee could theoretically choose to focus on a narrower set of topics in its final report, this structure is fundamentally not geared towards producing the sort of deep analysis that would advance the debate. Instead of trying to address every AI-related policy issue with one entity, a better approach might be to build separate, specialized advisory committees based in different agencies. For instance, the Department of Justice might have a committee on using AI for risk assessment, the General Services Administration might have a committee on using AI to streamline government services and IT infrastructure, and the Department of Labor might have a committee on worker displacement caused by AI and automation or on using AI in employment decisions.
While this approach risks some duplicative work, it would also be much more likely to produce deep, focused analysis relevant to specific areas of oversight. Of course, even the best public advisory committees have limitations, including politicization, resource constraints and compliance with the Federal Advisory Committee Act. However, not all advisory bodies have to be within (or funded by) government. Outside research groups, policy forums and advisory committees exist within the private sector and can operate beyond the limitations of government bureaucracy while still effectively informing policymakers. Particularly for those issues not directly tied to government use of AI, academic centers, philanthropies and other groups could step in to fill this gap without any need for new public expenditures or enabling legislation. If Sen. Cantwell's advisory committee-focused proposal lacks robustness, Sen. Schatz's call for creating a new "independent federal commission" with a mission to "ensure that AI is adopted in the best interests of the public" could go beyond the bounds of political possibility. To his credit, Sen. Schatz identifies real challenges with government use of AI, such as those posed by criminal justice applications, and in coordinating between different agencies. These are real issues that warrant thoughtful solutions. Nonetheless, the creation of a new agency for AI is likely to run into a great deal of pushback from industry groups and the political right (like similar proposals in the past), making it a difficult proposal to move forward. Beyond creating a new commission or advisory committees, the challenge of federal expertise in AI could also be substantially addressed by reviving Congress' Office of Technology Assessment (which I discuss in a recent paper with Kevin Kosar). Reviving OTA has a number of advantages: OTA ran effectively for years and still exists in statute, it isn't a regulatory body, it is structurally bipartisan and it would have the capacity to produce deep-dive analysis in a technology-neutral manner. Indeed, there's good reason to strengthen the First Branch first, since Congress is ultimately responsible for making the legal frameworks governing AI as well as overseeing government usage. Lawmakers are right to characterize AI as a big deal. Indeed, there are trillions of dollars in potential economic benefits at stake. While the instincts to build expertise and understanding first make for a commendable approach, policymakers will need to do it the right way -- across multiple facets of government -- to successfully shape the future of AI without hindering its transformative potential.
|
|
by Tim Cushing on (#3FH0G)
Another communications platform has published National Security Letters it has received from the FBI. Twilio -- a San Francisco-based cloud communications platform -- has published two NSLs freed from the confines of their accompanying gag orders.
|
|
by Daily Deal on (#3FH0H)
Sid Meier's Civilization needs little introduction, but the newest entry to the saga offers entirely new ways to engage with your world. The turn-based strategy franchise has sold over 35 million units worldwide since its creation, creating an enormous community of players attempting to build an empire to stand the test of time. Advance your civilization from the Stone Age to the Information Age by waging war, conducting diplomacy, advancing your culture, and going head to head with history's greatest leaders. There are five ways to achieve victory in Civilization VI. Which will you choose? Get started for $29.99.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Tim Cushing on (#3FGVG)
Officials at ICE are pitching a dangerous idea to an administration likely to give it some consideration. It wants a seat at the grown-up table where it can partake of unminimized intel directly.
|
|
by Karl Bode on (#3FG8H)
We've noted repeatedly how ESPN has personified the cable and broadcast industry's tone deafness to cord cutting and TV market evolution. The company not only spent years downplaying the trend as something only poor people do, it sued companies that attempted to offer consumers greater flexibility in how video content was consumed. ESPN execs clearly believed cord cutting was little more than a fad that would simply stop once Millennials started procreating, and ignored surveys showing how 56% of consumers would ditch ESPN in a heartbeat if it meant saving the $8 per month subscribers pay for the channel.As the data began to indicate the cord cutting trend was very real, insiders say ESPN was busy doubling down on bloated sports licensing deals and SportsCenter set redesigns. By the time ESPN had lost 10 million viewers in just a few years, the company was busy pretending they saw cord cutting coming all the while. ESPN subsequently decided the only solution was to fire hundreds of longstanding sports journalists and support personnel, but not the executives like John Skipper (since resigned) whose myopia made ESPN's problems that much worse.Fast forward to this week, when Disney CEO Bob Iger suggested that Disney and ESPN had finally seen the error of their ways, and would be launching a $5 per month streaming service sometime this year. Apparently, Iger and other ESPN/Disney brass have finally realized that paying some of the least-liked companies in America $130 per month for endless channels of crap has somehow lost its luster in the streaming video era:
|
|
by Tim Cushing on (#3FFWM)
Last spring, Mike Masnick covered a completely fake court order that was served to Google to make some unflattering information disappear. The court order targeted some posts by a critic of a local politician.Ken Haas, a member of the New Britain (CT) city commission got into an online argument with a few people. When things didn't go his way, Haas played a dubious trump card:
|
|
by Glyn Moody on (#3FF70)
Techdirt has been exploring the important questions raised by so-called "fake news" for some time. A new player in the field of news aggregation brings with it some novel issues. It's called TopBuzz, and it comes from the Chinese company Toutiao, whose rapid rise is placing it alongside the country's more familiar "BAT" Internet giants -- Baidu, Alibaba and Tencent. It's currently expanding its portfolio in the West: recently it bought the popular social video app Musical.ly for about $800 million:
|
|
by Timothy Geigner on (#3FEVV)
You will hopefully recall a post we did several years ago dealing with Blizzard's decision to shut down a fan-run "vanilla" World of Warcraft server that stripped the game's expansions out and let players play the game as it was originally released in 2004. As is so often the case in these kinds of disputes, we can at once stipulate that Blizzard was within its right to do this while still calling out whether it was the best decision it could make on the matter. The simple fact is that there were other avenues down which the company could travel other than threatening the fan-server into oblivion, such as working out a cheap licensing arrangement to make it official. The whole situation became all the more odd when you consider that Blizzard itself does not offer a competing experience with the fan-server, essentially ignoring what is clearly a desire within the fanbase for that kind of experience that Blizzard could monetize if it wanted. Instead, the fan-server shut itself down under the threat of a trademark lawsuit and Blizzard went on its merry way ignoring these customer desires.Fast forward to today, some two years later, and it's all happening again. Another fan-operated vanilla server, this one called Light's Hope, is under attack from Blizzard for all the same reasons.
|
|
by Sarah Roberts on (#3FEMR)
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event. Between last week and this week, we're publishing a bunch of these essays, including this one. After a difficult few weeks of media attention and criticism surrounding the discovery of a spate of disturbing, exploitative videos either aimed at or featuring young children, YouTube's CEO Susan Wojcicki released a blog post on December 4, 2017 that described the platform's recent efforts to combat the problem. Central to the proposed plan to remedy the issue of unwanted user-generated content, or UGC, on YouTube was Wojcicki's announcement of the large-scale hiring of additional human content moderation workers to complement those already working for Google, bringing the total number of such reviewers in Google's employ to over 10,000. Wojcicki also went on to refer to the platform's development of its automated moderation mechanisms, artificial intelligence and machine learning as key to its plans to combat undesirable content, as it has in other cases when disturbing material was found on the platform in large amounts. Importantly, however, and as indicated by Wojcicki, the work of those thousands of human content moderators would go directly to building the artificial intelligence required to automate the moderation processes in the first place. In recent months, other major social media and UGC-reliant platforms have also offered up plans to hire new content moderators by the thousands in the wake of criticism around undesirable content proliferating within their products. These recent public responses that invoke large-scale content moderator hires suggest that social media firms who trade in UGC still need human workers to review content, now and into the foreseeable future, whether that activity is to the end of deleting objectionable material, training AI to do it, or some combination thereof. As most anyone familiar with the production cycle of social media content would likely stipulate, these commercial content moderators -- paid, professional workers who screen at scale following internal norms and policies -- perform critical functions of brand protection and shield platform and user alike from harm. CCM workers are critical to social media's operations and yet, until very recently, have often gone unknown to the vast majority of the world's social media users. The specifics of the policies and procedures that they follow have also been mostly inaccessible (beyond simplified public-facing community guidelines) and treated as trade secrets. There are several reasons for this, not the least of which being the very real concern on the part of platforms that having detailed information about what constitutes allowable, versus disallowed, content would give people intent on gaming the platform using its own rules ample ammunition to do so. There may be other reasons, however, including the fact that the existence of the problem of disturbing content circulating online is one that most mainstream platforms do not care to openly discuss due to the distasteful nature of the issue and the subsequent questions that could be asked about if, how, and by whom such material is addressed. CCM work is both rote and complex.
It requires engagement with repetitive tasks that workers can come to perform in a routinized way, but often under challenging productivity metrics that require a high degree of both speed and accuracy in decision-making. The job calls on sophistication that cannot currently be fully matched by machines, and so a complex array of human cognitive functions (e.g., linguistic and cultural competencies; quick recognition and application of appropriate policies; recognition of symbols or latent meanings) is needed. A further challenge to CCM workers and the platforms that engage them is the fact that the job, by its very nature, exposes workers to potentially disturbing imagery, videos and text that mainstream platforms wish to shield their users from and remove from circulation, the latter of which may even be a legal requirement in many cases.

In other words, in order to make their platforms safe for users and advertisers, platforms must expose their CCM workers to the very content they consider unsuitable for anyone else. It is a paradox that illustrates a primary motivation behind the automation of moderation practices: under the status quo, CCM workers put their own sensitivity, skills and psyches on the line to catch, view, and delete material that may include images of pornography, violence against children, violence against animals, child sexual exploitation, gratuitously graphic or vulgar material, hate speech or imagery and so on. It is work that can lead to emotional difficulty for those on the job, even long after some have moved on.

To this end, industry has responded in a variety of ways. Some workplaces have offered on-site counseling services to employees. The availability of such counseling is important, particularly when CCM workers are employed as contractors who may lack health insurance plans or might find mental health services cost-prohibitive. Challenges are present, however, when cultural barriers or concerns over privacy impede workers from taking full advantage of such services.

When it comes to CCM worker wellness, firms have been largely self-guiding. Several major industry leaders have come together to form the self-funded "Technology Coalition," whose major project relates to fighting child sexual exploitation online. In addition to this key work, they have produced the "Employee Resilience Guidebook," now in a second version, intended to support workers who are exposed to child sexual exploitation material. It includes information on mandatory reporting and legal obligations (mostly US-focused) around encountering such material, but also provides important information about how to support employees who can reasonably be expected to contend emotionally with the impact of their exposure. Key to the recommendations is beginning the process of building a resilient employee at the point of hiring. It also draws heavily from information from the National Center for Missing and Exploited Children (NCMEC), whose expertise in this area is built upon years of working with and supporting law enforcement personnel and their own staff.

The Employee Resilience Guidebook is a start toward the formation of industry-wide best practices, but in its current implementation it focuses narrowly on the specifics of child sexual exploitation material and is not intended to serve the broader needs of a generalist CCM worker and the range of material for which he or she may need support. 
Unlike members of law enforcement, who can call on their own professional identities and social capital for much-needed support, moderators often lack this layer of social structure and indeed are often unable to discuss the nature of their work due to non-disclosure agreements (NDAs) and stigma around the work they do. The relative geographic diffusion and industrial stratification of CCM work can also make it difficult for workers to find community with each other, outside of their immediate local teams, and no contracting or subcontracting firm is represented in the current makeup of the Technology Coalition, yet many CCM workers are employed through these channels. Finally, I know of no long-term publicly-available study that has established psychological baselines for or tracked CCM worker wellness over their period of employment or beyond.

The Technology Coalition's guidebook references information from a 2006 study on secondary trauma done on NCMEC staff. To be sure, the study contains important results that can be instructive in this case, but it cannot substitute for a psychological study done specifically on the CCM worker population in the current moment. Ethical and access concerns make such an endeavor complicated. Yet, as the ongoing 2016 Washington state lawsuit filed on behalf of two Microsoft content moderators suggests, there may be liability for firms and platforms that do not take sufficient measures to shield their CCM workers from damaging content whenever possible and to offer them adequate psychological support when it is not.

Although the factors described above currently serve as challenges to effectively serving the needs of CCM workers in the arena of their health and well-being, they are also clear indicators of where opportunities lie for industry and potential partners (e.g., academics; health care professionals; labor advocates) to improve upon the efficacy, well-being and work-life conditions of those who undertake the difficult and critical role of cleaning up the internet on behalf of us all. Opening the dialog about both challenges and opportunities is a key step toward this goal.
|
|
by Mike Godwin on (#3FEDX)
Earlier today we posted Mike Masnick's post about the passing of John Perry Barlow, but Mike Godwin, who was EFF's first lawyer among other things, sent over his memories of Barlow as well, which are well worth reading.

It's the nature of having known John Perry Barlow, and having been his friend, that you can't write about what it means to have lost him Wednesday morning (he died in his sleep at the too-young age of 70) without writing about how he changed your life. So, I ask your forgiveness in advance if I say too much about myself here on the way to saying more about John.

I can and will testify that I had a life before I met John Perry Barlow. At the beginning of 1990 I was finishing up law school in Texas (only one more semester and then the bar exam!) and was beginning to think about my professional future (how about being a prosecutor in Houston?) and my personal future (should my long-term girlfriend and I get married?).

That was the glide path I was on before Grateful Dead lyricist John Perry Barlow, together with software entrepreneur Mitch Kapor and pioneering Sun Microsystems programmer John Gilmore, decided to start what would shortly be known as the Electronic Frontier Foundation (EFF). EFF disrupted all my inertial, half-formed plans and changed my life forever. (I didn't, for example, become a prosecutor.) And John Perry Barlow was the red-hot beating heart of EFF.

I'd been feeling tremors in the Force before EFF even had a name, though. For reasons I can't quite explain, I'd found ways to persuade people, including my university, to give me access to internet-capable accounts and services so that I could see the rest of the digital world as it was then represented in Usenet. I'd been a BBS hobbyist in the 1980s, but I thought I'd exhausted the BBS scene in Austin and wanted to know more of the larger digital world. Thanks to Usenet, over the Christmas break before my last semester of law school I'd become friends online with Clifford Stoll, whose book "The Cuckoo's Egg" detailed how he had detected and helped thwart a foreign plot to hack into U.S. academic and research computers. Cliff had included his email address in the book and, as we so often did in those days, I just fired off a note to him and got to know him.

At about the same time, at my girlfriend's urging, we spent a couple of days in San Francisco at Macworld Expo, where I first met Mitch Kapor, who wore a Hawaiian shirt and demo'd what became for years my favorite Mac application, On Location. Other things were happening as well, and my computer-hobbyist nature—never too far in the background during my law-student years—kept me attuned to what seemed to be happening in the larger world which, as I would have framed it back then, seemed to reflect a convergence of my interests in constitutional law and cyberspace.

Just a month or two later, I came across the March 1990 issue of Harper's Magazine, and there on the cover was this colloquy edited by Jack Hitt and Paul Tough titled "Is Computer Hacking a Crime?" (Harper's theoretically makes a download of that old article available, but the links don't work. You can find a transcribed version here). I wasn't a subscriber, but I knew I had to read this. 
And there was Barlow – whose name I didn't recognize – along with luminaries like Stewart Brand (former Merry Prankster, later the founder of The Whole Earth Catalog and The Whole Earth Review), Richard Stallman (founder and chief visionary of the Free Software movement that gave birth to the Linux operating system) and my new friend Cliff Stoll. They all had lots of opinions about computer hacking, but the participant whose words spoke most clearly to me was Barlow:
|
|
by Tim Cushing on (#3FE5N)
The Nunes Memo, capitalized to give it far more gravitas than it actually possesses, was released late last week to mixed reviews. Nunes had built it up to be a mind-blowing damnation of a politically corrupt Federal Bureau of Investigation, more interested in destroying Trump than performing its appointed duties. The memo showed the FBI had relied on questionable evidence from the Steele dossier while securing FISA warrants to surveil former Trump adviser Carter Page. This memo was composed by the House intelligence oversight head -- one who had rarely expressed concern about domestic surveillance prior to investigations of Trump officials.

The memo showed the basis for the warrants may have been thin, but it didn't show it was nonexistent. In fact, the underlying warrants actually did inform the FISA court about the political background of Christopher Steele and his dossier. Nunes didn't know this because Nunes hadn't actually read the warrants. When he was finally apprised of this contradiction, he claimed the FBI's disclosure didn't count because it was contained in a footnote.

The memo's release has had some serious side effects, however. But it will be Congressional oversight taking the damage, rather than the FBI. The memo's release showed the dumping of sensitive, classified info could be motivated by political whims, rather than as the result of a thoughtful, deliberative process. It showed oversight committee members were willing to jeopardize law enforcement sources and methods to score political points -- ironically the same claim Nunes was making about the FBI's motivations.

The damage will also be felt -- indirectly -- by the American public. Intelligence oversight is supposed to protect Americans from surveillance abuses. With this move, Nunes has destroyed its credibility, as Julian Sanchez points out.
|
|
by Daily Deal on (#3FE3C)
SitePoint's $19 Ultimate Web Development eBook and Course Bundle will show you how to start your journey as a front-end web developer, giving you access to 7 best-selling ebooks and more than 21 hours of instructional video. You will learn about popular languages and frameworks like HTML5, CSS3, JavaScript, and Angular 2. You will have your first websites up and running in no time.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Mike Masnick on (#3FE03)
I was in a meeting yesterday when the person I was meeting with mentioned that John Perry Barlow had died. While he had been sick for a while, and there had been warnings that the end might be near, it's still somewhat devastating to hear that he is gone. I had the pleasure of interacting with him both in person and online multiple times over the years, and each time was a joy. He was always insightful, thoughtful and deeply empathetic.

I can't remember for sure, but I believe the last time I saw him in person was a few years back at a conference (I don't even recall what conference), where he was on a panel that had no moderator, and literally seconds before the panel was to begin, I was asked to moderate the panel with zero preparation. Of course, it was easy to get Barlow to talk, and to make it interesting, even without preparation. But that day the Grateful Dead's Bob Weir (for whom Barlow wrote many songs after the two met as roommates at boarding school) was in the audience -- and while the two were close, they disagreed on issues related to copyright, leading to a public debate between the two (even though Weir was not on the panel). It was fascinating to observe the discussion, in part because of the way in which Barlow approached it. Despite disagreeing strongly with Weir, the discussion was respectful, detailed and consistently insightful.

Lots of people are, quite understandably, pointing to Barlow's famous Declaration of the Independence of Cyberspace (which was published 22 years ago today). Barlow later admitted that he dashed most of that off in a bar during the World Economic Forum, without much thought. And that's why I'm going to separately suggest two other things by Barlow to read as well. The first was his Wired piece, The Economy of Ideas, from 1994, the second year of Wired's existence, when Barlow's wisdom was found in every issue. Despite being written almost a quarter of a century ago, The Economy of Ideas is still fresh and relevant today. It is more thoughtful and detailed than his later "Declaration" and, if anything, I would imagine that Barlow was annoyed that the piece is still so relevant today. He'd think we should be way beyond the points he was making in 1994, but we are not.

The other piece is a more recent one I've seen a few people pointing to: his Principles of Adult Behavior, which is a list of 25 rules to live by -- rules that we should be reminded of constantly. Rules that many of us (and I'm putting myself first on this list) fail to live up to all too frequently. Update: I stupidly assumed that was a more recent writing by Barlow, but as noted in the comments (thanks!) it's actually from 1977, when Barlow turned 30.

Cindy Cohn, who is now the executive director of EFF, which Barlow co-founded, mentions in her writeup how unfair it is that Barlow (and, specifically, his Declaration) are often held up as the kind of prototype for the "techno-utopian" vision of the world that has become so frequently mocked today. Yet, as Cohn points out, that's not at all how Barlow truly viewed the world. He saw the possibilities of that utopia, while recognizing the potential realities of something far less good. The utopianism that Barlow presented to the world was not -- as many assume -- a claim that these things were a sort of manifest destiny, but rather a belief that, by presenting such a utopia, we might all strive and push and fight to actually achieve it.
|
|
by Karl Bode on (#3FE04)
You might recall that right before the FCC voted to kill net neutrality at Verizon's behest, the agency thought it would be a hoot to joke about the agency's "collusion" with Verizon at a telecom industry gala. The lame joke was a tone-deaf attempt to mock very legitimate concerns that Pai, a former Verizon regulatory lawyer, is far too close to the industry he's supposed to be regulating. The FCC even went so far as to include a little video featuring Verizon executives, who chortled about their plans to install Pai as a "puppet" leader at the agency. Hilarious.While the audience of policy wonks and lobbyists giggled, the whole thing was tone deaf and idiotic from stem to stern. Especially given the fact that Pai's policies have been nothing short of a Verizon wish list, whether that involves protecting Verizon's monopoly over business data services (BDS), or the efforts to undermine any attempts to hold Verizon accountable for repeated privacy violations. Much like the other lame video Pai circulated at the time to make light of consumer outrage, it only served to highlight how viciously out of touch this FCC is with the public it's supposed to be looking out for.Gizmodo recently filed a FOIA request to obtain any communications between the FCC and Verizon regarding the creation of the video, arguing the records were well within the public interest given concerns over Pai's cozy relationship with the companies he's supposed to be holding accountable. But Gizmodo says the FCC refused the request under Exemption 5 of the FOIA (Deliberative Process Privilege). While the request revealed around a dozen pages of e-mails between the FCC and Verizon, the FCC refuses to release them, arguing they could harm the ability of the agency to do its job (read: kiss Verizon's ass):
|
|
by Tim Cushing on (#3FE05)
The CIA is spectacularly terrible at responding to FOIA requests. It's so bad it's highly possible the perceived ineptness is deliberate. The CIA simply does not want to release documents. If it can't find enough FOIA exemptions to throw at the requester, it gets creative.

A FOIA request for emails pertaining to the repeated and extended downtime suffered by the (irony!) CIA's FOIA request portal was met with demands for more specifics from the requester. The CIA wanted things the requester would only know after receiving the emails he requested, like senders, recipients, and email subject lines.

The CIA sat on another records request for six years before sending a letter to the requester telling him the request would be closed if he did not respond. To be fair, the agency had provided him a response of sorts five years earlier: a copy of his own FOIA request, claiming it was the only document the agency could locate containing the phrase "records system."

In yet another example of CIA deviousness, the agency told a requester the documents requested would take 28 years and over $100,000 to compile. Then it went even further. During the resulting FOIA lawsuit, the DOJ claimed the job was simply too impossible to undertake. Less than two months after MuckRock's successful lawsuit, the entire database went live at the CIA's website -- more than 27 years ahead of schedule.

This is the CIA's antipathy towards the FOIA process on display. It takes a lawsuit to get it to produce documents. And what we have here is more CIA recalcitrance being undercut by a FOIA lawsuit.

Journalist Adam Johnson sued the agency early last year for its refusal to produce correspondence between the CIA's Office of Public Affairs and prominent journalists. Johnson did receive copies of these emails, but the CIA redacted the emails it had sent to journalists. (The journalists' responses were left unredacted.) Since the emails obviously weren't redacted when they were sent to journalists, Johnson challenged the redactions in court.

The government argued it had a right to disclose classified information to journalists. And it certainly can. The CIA can waive classification if it so desires. But what it can't do is claim it has never released this classified info to the public -- not if it's handing it out to journalists.

Daniel Novak is representing the journalist in his FOIA lawsuit. And he reports the judge is no more impressed by the CIA's arguments than his client is. The decision [PDF] is redacted but some very nice bench slaps have been left untouched... like this one, which sums up the ridiculousness of the CIA's arguments.
|
|
by Timothy Geigner on (#3FE06)
For the past few years, we have detailed several trademark actions brought by Moosehead Breweries Limited, the iconic Canadian brewery that makes Moosehead beer, against pretty much every other alcohol-related business that dares to use the word "moose" or any moose images. This recent trend has revealed that Moosehead is of the opinion that only it can utilize the famous animal symbol of both Canada and the northern United States. Without any seeming care for whether actual confusion might exist in the marketplace, these actions by Moosehead have instead smacked of pure protectionism over a common word and any and all images of a common animal.

One of those actions included a suit against Hop 'N Moose Brewing, a small microbrewery out of Vermont. The filing in that case was notable in that it actually alleged detailed examples of trade dress infractions, while the images of the trade dress included in the filing appeared to be fairly distinct. Absent, of course, was any evidence of actual confusion in the marketplace. It appeared for all the world that Moosehead's legal team took past criticism of its trademark protectionism as a critique of the word and image count in its filings, and simply decided to up the volume on both. Despite all of this legal literary work to support the suit, little if anything has been litigated since the initial filing late last year.

And now it seems this whole thing will suddenly go away. Without any real explanation from either party, Moosehead has dropped its suit entirely.
|
|
by Timothy Geigner on (#3FE07)
With the constant drumbeat about the evils of copyright infringement and internet piracy coming from those leading the movie industry, you might have been under the impression everyone within the industry held the same beliefs. Between the cries of lost profits, the constant calls for the censorship of websites, and even the requests to roll back safe harbor protections that have helped foster what must be considered a far larger audience for the industry, perhaps you pictured the rank and file of the movie business as white-clad monk-like figures that served as paragons of copyright virtue.

Yet that's often not the case. While many artists, actors, and directors do indeed toe the industry line on matters of piracy, you will occasionally get glimpses of what has to be considered normalcy in how people engage with copyright issues among members of the industry. We should keep in mind our argument that essentially everyone will infringe on intellectual property at some point, oftentimes without knowing or intending it, because engaging in said behavior just seems to make sense. During a radio interview Taika Waititi did to promote Thor: Ragnarok, which he directed, he admitted to doing it himself.
|
|
by Paul Sieminski and Holly Hogan on (#3FE08)
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event. Between last week and this week, we're publishing a bunch of these essays, including this one.

WordPress.com is one of the most popular publishing platforms online. We host sites for bloggers, photographers, small businesses, political dissidents, and large companies. With more than 70 million websites hosted on our service, we unsurprisingly receive complaints about all types of content. Our terms of service define the categories of content that we don't allow on wordpress.com.

We try to be as objective as possible in defining the categories of content that we do not allow, as well as in our determinations about what types of content fall into, or do not fall into, each category. For most types of disputed content, we have the competency to make a judgment call about whether it violates our terms of service.

One notable and troublesome exception is content that is allegedly untrue or defamatory. Our terms prohibit defamatory content, but it's very difficult if not impossible for us, as a neutral, passive host, to determine the truth or falsity of a piece of content hosted on our service. Our services are geared towards the posting of longer form content and we often receive defamation complaints aimed at apparently well-researched, professionally written blog posts or pieces of journalism.

Defamation complaints put us in the awkward position of making a decision about whether the contents of a website are true or false. Moreover, in jurisdictions outside of the United States, these complaints put us on the hook for legal liability and damages if we don't take the content down after receiving an allegation that it is not true.

Making online hosts and other intermediaries like WordPress.com liable for the allegedly defamatory content posted by users is often criticized for burdening hosts and stifling innovation. But intermediary liability isn't just bad for online hosts. It's also terrible for online speech. The looming possibility of writing a large check incentivizes hosts like Automattic to do one thing when we first receive a complaint about content: Remove it. That decision may legally protect the host, but it doesn't protect users or their online speech.

The Trouble with "Notice and Takedown"

Taken at face value, the notice-and-takedown approach might seem to be a reasonable way to manage intermediary liability. A host isn't liable absent a complaint, and after receiving one, a host can decide what to do about the content.

Internet hosts like Automattic, however, are in no position to judge disputes over the truth of content that we host. Setting aside the marginal number of cases in which it is obvious that content is not defamatory—say, because it expresses an opinion—hosts are not at all equipped to determine whether content is (or is not) true. We can't know whether the subject of a blog post sexually assaulted a woman with whom he worked, if a company employs child laborers, or if a professor's study on global warming is tainted by her funding sources. A host does not have subpoena power to collect evidence. It does not call witnesses to testify and evaluate their credibility. And a host is not a judge or jury. 
This reality is at odds with laws imputing knowledge that content is defamatory (and liability) merely because a host receives a complaint that content is defamatory and doesn't remove it right away.

Nevertheless, the prospect of intermediary liability encourages hosts to make a judgment anyway, by accepting a complaint at face value and removing the disputed content without any vetting by a court. This process, unfortunately, encourages and rewards abuse. Someone who does not like a particular point of view, or who wants to silence legitimate criticism, understands that he or she has decent odds of silencing that speech by lodging a complaint with the website's host, who often removes the content in hopes of avoiding liability. That strategy is much faster than having the allegations tried in a court, and as a bonus, the complainant won't face the tough questions—Did he assault a co-worker? Did she know that the miners were children? Did he falsify his research?

The potential for abuse is not theoretical. We regularly see dubious complaints about supposedly defamatory material at WordPress.com. Here is a sampling:
|
|
by Karl Bode on (#3FBCT)
The Trump FCC is currently in the process of trying to eliminate all meaningful oversight of some of the least competitive companies in America. Not only are broadband providers and the Trump administration trying to gut FTC and FCC oversight of companies like Comcast, they're also trying to ban states from protecting net neutrality or broadband consumer privacy at ISP lobbyist behest. This is all based on the belief that letting Comcast run amok somehow magically forges telecom Utopia. It's the kind of thinking that created Comcast and the market's problems in the first place.And while the Trump FCC is trying to ban states from protecting consumers in the wake of federal apathy (you know, states rights and all that), the individual states don't appear to be listening. Numerous states are pushing new legislation that effectively codifies the FCC's 2015 net neutrality rules on the state level, efforts that will be contested in the courts over the next few years. ISPs have been quick to complain about the threat of multiple, discordant and shitty state laws, ignoring the fact that they created this problem by lobbying to kill reasonable (and popular) federal protections.Other states, like Montana and New York have gotten more creative, signing executive orders that ban ISPs from winning state contracts if they violate net neutrality. Montana Governor Steve Bullock went so far as to suggest that other states use his order as a template, something New Jersey appears to have taken him up on. The state this week issued its own executive order (pdf) protecting net neutrality, modifying the state procurement process to prohibit state contracts with ISPs that routinely engage in anti-competitive blocking, throttling, or paid prioritization.In a press release, state leaders say the new rules will take effect in July:
|
|
by Glyn Moody on (#3FB71)
We've written many articles about the thin-skinned Turkish president, Recep Tayyip Erdoğan, and his massive crackdown on opponents, real or imagined, following the failed coup attempt in 2016. Boing Boing points us to a disturbing report on the Canadian CBC News site revealing how thousands of innocent citizens have ended up in prison because they were falsely linked with the encrypted messaging app Bylock:
|
|
by Daily Deal on (#3FB4A)
It's one thing to be putting money aside in a 401(k) account or investing it in the stock market — but nobody's relationship with their money is the same as anyone else's relationship with theirs. PocketSmith recognizes that, which is why it designed a comprehensive set of features to give you absolute control over your money. You can see all your bank, credit card and loan accounts in one place, keep it all automatically updated, and organize your transactions as granularly as you like. Beyond tracking the past and present, however, PocketSmith is also a forecasting tool. It can show you how your savings will reward you by revealing your projected daily balances up to 10 years in the future, giving you a better financial picture. The 1 year Premium (10 accounts / 10 years projection) subscription is on sale for $49.95 and the 1 year Super (30 accounts / 30 years projection) subscription is on sale for $69.95.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Mike Masnick on (#3FAVA)
Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »One theme that we've covered on Techdirt since its earliest days is the power of the internet as an open platform for just about anyone to create and communicate. Simultaneously, one of our greatest fears has been how certain forces -- often those disrupted by the internet -- have pushed over and over again to restrict and contain the internet, and turn it into something more like a broadcast platform controlled by gatekeepers, where only the chosen few can use it to create and share. This is one of the reasons we've been so adamant over the years that in so many policy fights, "Silicon Valley v. Content" is a false narrative. It's almost never true -- because the two go hand in hand. The internet has made it so that everyone can be a creator. Internet platforms have made it so that anyone can create almost any kind of content they want, they can promote that content, they can distribute it, they can build a fan base, and they can even make money. That's in huge contrast to the old legacy way of needing a giant gatekeeper -- a record label, a movie studio, or a book publisher -- to let you into the exclusive club.And yet, those legacy players continue to push to make the internet into more of a broadcast medium -- to restrict that competition, to limit the supply of creators and to push things back through their gates under their control. For example, just recently, the legacy recording and movie industries have been putting pressure on the Trump administration to undermine the internet and fair use in NAFTA negotiations. And, much of their positioning is that the internet is somehow "harming" artists, and needs to be put into check.This is a false narrative. The internet has enabled so many more creators and artists than it has hurt. And to help make that point, today we're launching a new site, EveryoneCreates.org which features stories and quotes from a variety of different creators -- including bestselling authors, famous musicians, filmmakers, photographers and poets -- all discussing how important an open internet has been to building their careers and creating their art. On that same page, you can submit your own stories about how the internet has helped you create, and why it's important that we don't restrict it. Please add your own stories, and share the site with others too!The myth that this is "internet companies v. creators" needs to be put to rest. Thanks to the internet, everyone creates. And let's keep it that way.Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »
|
|
by Karl Bode on (#3FADP)
For years the FCC has been caught in a vicious cycle. Under the Communications Act, the FCC is required to issue annual reports on the state of U.S. broadband and competition, taking action if services aren't being deployed on a "reasonable and timely" basis. When under the grip of regulatory capture and revolving door regulators, these reports tend to be artificially rosy, downplaying or ignoring the lack of competition that should be obvious to anybody familiar with Comcast. These folks' denial of the sector's competition shortcomings often teeters toward the comical and is usually hard to miss.

When the agency has more independently-minded leadership (which admittedly doesn't happen often), the report tends to accurately show how the majority of consumers lack real options and quality broadband. That was the case under former FCC boss Tom Wheeler, whose agency not only raised the definition of broadband to 25 Mbps (which greatly angered the industry), but actually went out of its way to highlight the fact that two-thirds of American homes lack access to FCC-defined speeds of 25 Mbps from more than one ISP (aka a monopoly).

Unsurprisingly, the Trump FCC is now taking things back in the rose-colored glasses direction. The agency's latest Broadband Deployment Report (pdf) proudly declares that United States broadband is now, quite magically, being deployed on a "reasonable and timely" basis. An accompanying press release (pdf) similarly tries to claim that things are only getting better, thanks in large part to Ajit Pai's historically-unpopular attack on net neutrality:
|
|
by Tim Cushing on (#3FA2G)
Anything you do can be suspicious. Just ask our guardians of public safety. People interacting with law enforcement can't be too nervous. Or too calm. Or stare straight ahead. Or directly at officers. When traveling, travelers need to ensure they're not the first person off the plane. Or the last. Or in the middle. When driving, people can't drive too carefully. Or too carelessly. Traveling on interstate highways is right out, considering those are used by drug traffickers. Traveling along back roads probably just looks like avoiding more heavily-patrolled interstates, thus suspicious.Having too much trash in your car might get you labelled a drug trafficker -- someone making a long haul between supply and destination cities. Conversely, a car that's too clean looks like a "trap" car -- a vehicle carefully kept in top condition to avoid raising law enforcement's suspicion. Too clean is just as suspicious as too dirty. Air fresheners, a common fixture in vehicles, are also suspicious. Having too many of them is taken as an attempt to cover the odor of drugs. There's no specific number that triggers suspicion. It's all left up to the officer on the scene.So, avoiding rousing suspicion is impossible. Fortunately, courts can push back against law enforcement assertions about suspicious behavior. Some have pushed back more forcibly than others. Thanks to another court pushback, we have two new items to add to the list of suspicious indicators. From the Texas Appeals Court decision [PDF]:
|
|
by Timothy Geigner on (#3F9BS)
In the middle of summer last year, we discussed a somewhat strange trademark fight between BrewDog, a Scottish brewery that has been featured in our pages for less than stellar reasons, and the Elvis Presley Estate. At issue was BrewDog's attempt to trademark the name of one of its beers, a grapefruit IPA called "Elvis Juice." With no other explanation beyond essentially claiming that any use of Elvis anywhere will only be associated in the public's mind with the 1950s rock legend, the Estate opposed the trademark application. Initially, the UK Intellectual Property Office sided with the Estate, despite the owners of BrewDog both pointing out that they were simply using a common first name and that they were actually taking the legal course of changing their first names to Elvis to prove their point. Not to mention that the trade dress for the beer has absolutely nothing to do with Elvis Presley. We wondered, and hoped, at the time that BrewDog would appeal the decision.

Well, it did, and it has won, which means Elvis Juice is free to exist and the order that BrewDog pay the Elvis Estate's costs for its opposition has been vacated.
|
|
by Mike Masnick on (#3F8ZN)
Back in December, we reported on an effort underway in Australia to criminalize both whistleblowers and journalists who publish classified documents, with penalties of up to 20 years in prison. Twenty years, by the way, is also the amount of time that Cabinet documents are supposed to be kept classified in Australia. But just recently Australia's ABC News suddenly started breaking a bunch of news that appeared to come from access to Cabinet documents that were still supposed to be classified. This included stories around ending welfare benefits for anyone under 30 years old as well as delaying background checks on refugees. Some explosive stuff.

On Wednesday, ABC finally revealed where all this stuff came from. It wasn't an Australian Ed Snowden. It was... government incompetence. Apparently, someone bought an old filing cabinet from a store that sells second-hand government office furniture. The cabinet had no key, so he drilled the lock and... found a ton of Cabinet documents in an actual cabinet.

So... if that law were to go through in Australia... would that mean the government employee who didn't check the filing cabinet would get 20 years in jail? Or the store that sold it? Or the guy that drilled it? Or do all of them get 20 years? Why don't we just support whistleblowers and the press for reporting on important news that the public should know about?
|
|
by Leigh Beadon on (#3F8Q7)
When it comes to many of the legislative issues of interest to us here at Techdirt, we've always been able to count on at least one voice of reason amidst the congressional chaos: Representative Zoe Lofgren from California. In addition to playing a critical role in the fight against SOPA, she continues to be a voice of reason against bad copyright policy, expansive government surveillance, and the broken CFAA, among many other things. This week, she joins Mike on the podcast for a wide-ranging discussion about these topics and more.Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
|
|
by Tarleton Gillespie on (#3F8GK)
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event, which we are publishing here. This one is excerpted from Custodians of the internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. forthcoming, Yale University Press, May 2018.Content moderation is such a complex and laborious undertaking that, all things considered, it's amazing that it works at all, and as well as it does. Moderation is hard. This should be obvious, but it is easily forgotten. It is resource intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes. And we are partly to blame for having put platforms in this untenable situation, by asking way too much of them. We sometimes decry the intrusion of platform moderation, and sometimes decry its absence. Users probably should not expect platforms to be hands-off and expect them to solve problems perfectly and expect them to get with the times and expect them to be impartial and automatic.Even so, as a society we have once again handed over to private companies the power to set and enforce the boundaries of appropriate public speech for us. That is an enormous cultural power, held by a few deeply invested stakeholders, and it is being done behind closed doors, making it difficult for anyone else to inspect or challenge. Platforms frequently, and conspicuously, fail to live up to our expectations—in fact, given the enormity of the undertaking, most platforms' own definition of success includes failing users on a regular basis.The companies that have profited most from our commitment to platforms have done so by selling back to us the promises of the web and participatory culture. But as those promises have begun to sour, and the reality of their impact on public life has become more obvious and more complicated, these companies are now grappling with how best to be stewards of public culture, a responsibility that was not evident to them at the start.It is time for the discussion about content moderation to shift, away from a focus on the harms users face and the missteps platforms sometimes make in response, to a more expansive examination of the responsibilities of platforms. For more than a decade, social media platforms have presented themselves as mere conduits, obscuring and disavowing the content moderation they do. Their instinct has been to dodge, dissemble, or deny every time it becomes clear that, in fact, they produce specific kinds of public discourse. The tools matter, and our public culture is in important ways a product of their design and oversight. 
While we cannot hold platforms responsible for the fact that some people want to post pornography, or mislead, or be hateful to others, we are now painfully aware of the ways in which platforms invite, facilitate, amplify, and exacerbate those tendencies: weaponized and coordinated harassment; misrepresentation and propaganda buoyed by its algorithmically-calculated popularity; polarization as a side effect of personalization; bots speaking as humans, humans speaking as bots; public participation emphatically figured as individual self-promotion; the tactical gaming of platforms in order to simulate genuine cultural participation and value. In all of these ways, and others, platforms invoke and amplify particular forms of discourse, and they moderate away others, all in the name of being impartial conduits of open participation. The controversies around content moderation over the last half decade have helped spur this slow recognition, that platforms now constitute powerful infrastructure for knowledge, participation, and public expression.~~~All this means that our thinking about platforms must change. It is not just that all platforms moderate, or that they have to moderate, or that they tend to disavow it while doing so. It is that moderation, far from being occasional or ancillary, is in fact an essential, constant, and definitional part of what platforms do. I mean this literally: moderation is the essence of platforms, it is the commodity they offer.First, moderation is a surprisingly large part of what they do, in a practical, day-to-day sense, and in terms of the time, resources, and number of employees they devote to it. Thousands of people, from software engineers to corporate lawyers to temporary clickworkers scattered across the globe, all work to remove content, suspend users, craft the rules, and respond to complaints. Social media platforms have built a complex apparatus, with innovative workflows and problematic labor conditions, just to manage this—nearly all of it invisible to users. Moreover, moderation shapes how platforms conceive of their users—and not just the ones who break the rules or seek their help. By shifting some of the labor of moderation back to us, through flagging, platforms deputize users as amateur editors and police. From that moment, platform managers must in part think of, address, and manage users as such. This adds another layer to how users are conceived of, along with seeing them as customers, producers, free labor, and commodity. And it would not be this way if moderation were handled differently.But in an even more fundamental way, content moderation is precisely what platforms offer. Anyone could make a website on which any user could post anything he pleased, without rules or guidelines. Such a website would, in all likelihood, quickly become a cesspool of hate and porn, and then be abandoned. But it would not be difficult to build, requiring little in the way of skill or financial backing. To produce and sustain an appealing platform requires moderation of some form. Content moderation is an elemental part of what makes social media platforms different, what distinguishes them from the open web. 
It is hiding inside every promise social media platforms make to their users, from the earliest invitations to "join a thriving community" or "broadcast yourself," to Mark Zuckerberg's promise to make Facebook "the social infrastructure to give people the power to build a global community that works for all of us."Content moderation is part of how platforms shape user participation into a deliverable experience. Platforms moderate (removal, filtering, suspension), they recommend (news feeds, trending lists, personalized suggestions), and they curate (featured content, front page offerings). Platforms use these three levers together to, actively and dynamically, tune the participation of users in order to produce the "right" feed for each user, the "right" social exchanges, the "right" kind of community. ("Right" here may mean ethical, legal, and healthy; but it also means whatever will promote engagement, increase ad revenue, and facilitate data collection.)Too often, social media platforms discuss content moderation as a problem to be solved, and solved privately and reactively. In this "customer service" mindset, platform managers understand their responsibility primarily as protecting users from the offense or harm they are experiencing. But now platforms find they must answer also to users who find themselves implicated in and troubled by a system that facilitates the reprehensible—even if they never see it. Whether I ever saw, clicked on, or ‘liked' a fake news item posted by Russian operatives, I am still worried that others have; I am troubled by the very fact of it and concerned for the sanctity of the political process as a result. Protecting users is no longer enough: the offense and harm in question is not just to individuals, but to the public itself, and to the institutions on which it depends. This, according to John Dewey, is the very nature of a public: "The public consists of all those who are affected by the indirect consequences of transactions to such an extent that it is deemed necessary to have those consequences systematically cared for." What makes something of concern to the public is the potential need for its inhibition.So, despite the safe harbor provided by U.S. law and the indemnity enshrined in their terms of service contracts as private actors, social media platforms now inhabit a new position of responsibility—not only to individual users, but to the public they powerfully affect. When an intermediary grows this large, this entwined with the institutions of public discourse, this crucial, it has an implicit contract with the public that, whether platform management likes it or not, may be quite different from the contract it required users to click through. The primary and secondary effects these platforms have on essential aspects of public life, as they become apparent, now lie at their doorstep.~~~If content moderation is the commodity, if it is the essence of what platforms do, then it makes no sense for us to treat it as a bandage to be applied or a mess to be swept up. Rethinking content moderation might begin with this recognition, that content moderation is part of how they tune the public discourse they purport to host. Platforms could be held responsible, at least partially so, for how they tend to that public discourse, and to what ends. The easy version of such an obligation would be to require platforms to moderate more, or more quickly, or more aggressively, or more thoughtfully, or to some accepted minimum standard. 
But I believe the answer is something more. Their implicit contract with the public requires that platforms share this responsibility with the public—not just the work of moderating, but the judgment as well. Social media platforms must be custodians, not in the sense of quietly sweeping up the mess, but in the sense of being responsible guardians of their own collective and public care.

Tarleton Gillespie is a Principal Researcher at Microsoft Research and an Adjunct Associate Professor in the Department of Communication at Cornell University.
|
|
by Mike Masnick on (#3F8BH)
We've been writing about the saga of Lauri Love for almost four years now. If you don't recall, he's the British student who was accused of hacking into various US government systems, and who has been fighting a battle against being extradited to the US for all these years. For the old timers among you, the situation was quite similar to the story of Gary McKinnon, another UK citizen accused of hacking into US government computers, and who fought extradition for years. In McKinnon's case, he lost his court appeals, but the extradition was eventually blocked by the UK's Home Secretary... Theresa May.

In the Lauri Love case, the situation went somewhat differently. A court said Love could be extradited and current Home Secretary Amber Rudd was happy to go along with it. But, somewhat surprisingly, an appeals court has overruled the lower court and said Love should not be extradited:
|