This week, Stephen T. Stone took both of the top spots for insightful with responses to Tim Cushing's first post reacting to the protests in Minneapolis and across the country. In first place, it's his opening take on the subject:
Five Years Ago

This week in 2015, one court was apparently forgetting the First Amendment exists while ordering a newspaper to delete an article, while the Supreme Court was punting on an important 1A question. The USA Freedom Act was marching forward, with Mitch McConnell trying to destroy it with amendments, all of which failed. The House passed an amendment on another bill to block funding for undermining encryption, while a top FBI official was claiming that preventing encryption should be the first priority of tech companies.

Ten Years Ago

This week in 2010, a Michigan politician was trying to regulate journalists and choose which ones are trustworthy, while the FTC was trying to "save journalism" but only in the form of old newspapers, and a Senator in France was trying to outlaw anonymous blogging. Copyright trolls were teaming up and getting cooperation from Verizon. Meanwhile, while one court was giving border patrol permission to take people's laptops, we took a look at how cops and courts abuse wiretapping laws to arrest people for filming the police.

Fifteen Years Ago

This week in 2005, we learned about the startling lengths copyright defenders would go to, such as the recording industry operating private round-the-clock surveillance of the owner of Kazaa and the MPAA pitching in to help fund surveillance cameras in downtown Los Angeles. We were also seeing some early battles and tactics around manipulating Google's memory of the past, and the emergence of citizen journalism happening in the comments sections of newspaper websites.
Of all the mediums where intellectual property makes the least amount of sense, actual food and drink must certainly be among the most absurd. Not the trade dress of food packaging, mind you. I'm talking about the actual food and drink products themselves, be they craft beer or a plate of food. And, yet, you see this sort of thing crop up from time to time. A pizzeria somehow thinks it can trademark the taste of its pizza. Or, more apropos for today's post, a German court says taking pictures of plated food could violate the copyright of the chef.

Plating food is now coming up again, with a post on the California Law Review's blog suggesting that plated food, if artistic enough, does in fact deserve copyright protection. While the entire post is detailed and thorough, the real question of whether plated food merits copyright protection has less to do with the creative aspect of plating -- of which there are some true creative elements -- than with the question of fixation. To warrant copyright, a work must be both original in its creativity and fixed in a tangible medium. There are a couple of key historic cases that address what it means for a work to be fixed in a medium, helpfully laid out in this John Marshall Law School article.
In this time of coronavirus and social unrest, you'd think the government -- at all levels -- would exercise a little more care not to make either problem worse. Of course, it hasn't. Cops are arresting journalists and tear-gassing peaceful protesters as the President himself calls for domestic military action targeting US citizens. Dystopian fiction writers have been put on notice: the usual shit just isn't going to sell anymore. The ideas you thought wouldn't sustain suspension of disbelief are swiftly becoming reality.

Stepping into the breach, for reasons it will probably never be able to fully explain, is the federal government, using nationwide protests as a reason to suspend as many rights as possible until everyone agrees the government is not an oppressive force -- even when personified as a white cop strangling a black man to death with a knee on his neck.

Good luck with that. The government needs all the goodwill it can collect, and it has apparently failed to realize the importance of harvesting goodwill in difficult times. And when the lawsuit inevitably gets filed, it will have to explain why it chose to do this massively stupid thing. Ryan Reilly reports for the Huffington Post on an apparent First/Fourth Amendment double-punch.
Here's an interesting tidbit: in its latest move to deal with a tweet related to President Trump, Twitter pulled down a Trump campaign video presented as a "tribute" to George Floyd, the Minneapolis man murdered by police last week, whose senseless death has brought so many thousands to the streets across the US. The video remains on YouTube for the moment. It includes a lot of still photos and a few short video clips. It appears that the copyright holder of one (or perhaps more) of those images and clips didn't like seeing the work used by a President in a propaganda video they disagreed with, and filed a DMCA claim.

I think there's a very strong fair use argument here for a whole variety of reasons (and, yes, I fully understand the moral claims that whoever took this photo may feel about it being used in this way, but copyright is not supposed to be used like that).

But seeing as this comes so soon after Trump's complete and total meltdown over Twitter and Section 230 after it added some additional context to one of his tweets -- leading him to state publicly that Section 230 should be revoked -- I do wonder if this move, in which a video was actually taken down (unlike with his tweets), will have him similarly rage against copyright law. Will we see an executive order demanding an impossible reinterpretation of Section 512's notice-and-takedown provisions? Or does it not work like that?

Of course, what this really demonstrates is why Trump and his fans should absolutely support Section 230 rather than pan it. Section 230, among other things, gives Twitter the freedom to decide how best to run its site, and to date, that's meant bending over backwards to keep the President's tweets online and available for people to view. However, Section 230 explicitly exempts intellectual property. For copyright, there's Section 512 of the DMCA, which is much, much weaker than CDA 230. With CDA 230, there's an immunity: if there's third-party content, a site is not liable, and a site also cannot be held liable for its moderation choices. With DMCA 512, there's a "safe harbor": if you meet certain conditions, you are protected. But one element of that safe harbor is that, to retain it, you have to take down the content upon receipt of a valid DMCA takedown notice.

I've long argued that this aspect of DMCA 512 -- in which the threat of significant liability comes from the state (i.e., the court system) -- raises serious 1st Amendment issues. That's because the law heavily favors silencing content, with the threat of massive liability if you don't. And the system is heavily imbalanced, as there's no effective punishment for false notices, meaning it is weighted very, very heavily in favor of censorship.

So here's a good point to compare how the two different "intermediary liability" regimes actually work. Under CDA 230, free speech is much more protected. Indeed, the very nature of it is that courts cannot use 230 to force sites to take down speech (that choice is left up to the sites themselves).
Under DMCA 512, however, the liability issue makes it very, very easy to issue bogus takedowns that lead to content being removed.

It's interesting that this is all coming a week after Trump's bizarre tirade against 230, and the same week the Senate argued that we should make the already censorial takedown power of the DMCA even stronger.

It seems a much better approach would be to leave 230 alone, but fix DMCA 512 by getting rid of the imbalance that puts tremendous state pressure on websites to remove content based solely on an accusation of infringement.
Months into the global pandemic, governments, think tanks, and companies have begun releasing comprehensive plans to reopen the economy, while the world will have to wait a year or longer for the universal deployment of an effective vaccine.

A big part of many of these plans is digital tools, apps, and public-health surveillance projects that could be used to contain the spread of COVID-19. But even if they're effective, these tools must be subject to rigorous oversight and laws preventing their abuse. Corporate America is already contemplating mandatory worker testing and tracking. Digital COVID passports that could grant those with immunity or an all-clear from a COVID test the right to enter stores, malls, hotels, and other spaces may well be on the way.

We must be ready to watch the watchers and guard against civil rights violations.

Many governments and pundits are turning to tech companies that are promising digital contact tracing applications and services to augment the capacity of manual contact tracers as they work to identify transmission chains and isolate people exposed to the virus. Yet civil society groups are already highlighting the serious privacy implications of such tools, underscoring the need for robust privacy protections.

The potential for law enforcement and corporate actors alike to abuse these tracking systems is just too great to ignore. For their part, most democratic governments have largely recognized that voluntary adoption of this technology -- rather than attempts at state coercion -- is more likely to encourage use of these apps.

But these applications are not useful unless significant percentages of cellphone users use them. An Oxford University study suggests that for such an app to successfully suppress the epidemic in the United Kingdom, 80 percent of British cellphone users would have to use it, which equates to 56 percent of the overall UK population. If the numbers for a digital contact tracing program to succeed stateside were similar, that would mean activating more than 100 million users.

The level of adoption will dictate just how well these technologies prevent the spread of the virus, but no matter how widespread such voluntary adoption may be, there is still potential for coercion, abuse, and targeting of specific users and communities without their consent. Some companies and universities are already planning to develop their own contact tracing systems and require their employees or students to participate. The consulting firm PricewaterhouseCoopers is advising companies on how to create these systems, and other smaller tech firms are designing Bluetooth beacons to facilitate the tracking of workers without smartphones.

An unaccountable regime of COVID surveillance could represent a great near-term threat to civil rights and privacy. Already marginalized communities suffering most from this crisis are the most exposed to the capricious whims of corporate leaders eager to restart supply chains and keep the manufacturing and service sectors operating.

Essential workers are subject to serious health risks while doing their jobs during a pandemic, and employers mandating use of these technologies without public oversight creates another risk to worker rights.
This paints a particularly tragic picture for the Black community, which has been disproportionately affected by the pandemic in terms of sickness, death, and unemployment. Black and Latinx people are more likely to work as cashiers in grocery stores, in nursing homes, or in other service-industry jobs that make infection far more likely. Many such workers are already subject to pervasive and punitive workplace surveillance regimes. But now, there may be real public-health equities at play. When these workers go to work, they have to do so in close proximity to others. Employers must protect them, and digital tracking tools may well be part of saving lives. But that balance ought to be struck by public-health officials and worker-safety authorities in consultation with affected employees.

This system of private health surveillance may not just affect workers. Grocery store, retail, and restaurant owners, eager to deploy this kind of technology to regain the confidence of shoppers, may well see the logic in incentivizing widespread public deployment as well. Those same stores could offer a financial incentive to customers who can prove they have a contact-tracing app installed on their phone, or they could integrate it into already existing customer loyalty apps. Coordinated efforts from businesses to mitigate losses due to sick workers or the threat of repeated government shutdowns could make incentivizing or demanding COVID passports worth the investment to them. We may well find ourselves in a situation where a digitally checkpointed mall, Whole Foods, or Walmart feels like an oasis -- the safest place in the world outside our homes.

Unaccountable deployment of these systems threatens to create further divides between workers and consumers, the tracked and the untracked, or a perilous division between those who can afford repeated testing and those who can't.

So far, few officials have weighed these tradeoffs. As of yet, the only federal legal guidance on these questions has come from the Equal Employment Opportunity Commission, which has ruled that employers can legally institute mandatory temperature checks and other medical exams as conditions of continued employment.

Lawmakers have to do more. They must provide protections against the unauthorized use of this information and not allow access to places of public accommodation -- a core civil right -- to be determined by a mere app. We must seriously consider what it would mean for a free society should businesses find it makes financial sense to invest in their own health-surveillance systems or to deny people access to corner markets or grocery stores if they aren't carrying the right pass on their person.

We do not have to be resigned to the deployment of a permanent state surveillance apparatus or the capriciousness of the private sector. If our post-9/11 experience is a guide, then we know that unaccountable surveillance infrastructure implemented during a crisis is wildly difficult to dismantle. We must not construct a recovery that casts a needless decades-long shadow over our society, entrenches the power of large corporations, and further exacerbates class and racial divides.
Governments must proactively decide the permissible uses and limits of this technology and the data it collects, and they must demand that these surveillance systems, private or otherwise, be dismantled at the end of the crisis.

Gaurav Laroia is Senior Policy Counsel at the consumer group Free Press, working alongside the policy team on topics ranging from internet-freedom issues like net neutrality and media ownership to consumer privacy and government surveillance.
President Trump is fond of non-disclosure agreements. He's been this way for far longer than he's been president, but his insistence on foisting them on anyone who has worked for him has become problematic now that he's the ultimate public figure.

Some of these NDAs have been broken inadvertently during the course of dubious lawsuits filed by former Trump associates against journalists. In other cases, the DOJ itself has gotten involved, trying to invoke possibly non-existent agreements with the government to block publications by former Trump staffers.

Now, a former Trump campaign staffer is in court challenging the legality of the NDA she signed when managing phone banks for Trump, before moving up to be his director of Hispanic engagement. She argues the NDAs serve no purpose but to block speech critical of her former employer. (Non-paywalled version here.)
The 2020 Adobe Graphic Design School has 3 courses to help you learn more about top Adobe apps and elements of graphic design. The first course covers Adobe Photoshop and all aspects of the design process from the importing of images right through to final production considerations for finished artwork. The second course covers Adobe Illustrator and will lead you through the design process, where you’ll learn a variety of ways to produce artwork and understand the issues involved with professional graphic design. The third course will help you discover how to harness the power of Adobe InDesign to develop different types of documents, from simple flyers to newsletters, and more. The bundle is on sale for $49.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Buckle up, because this one is going to be quite the long road trip, and I hope you won't rush to the comments without joining me on the entire journey first. But if you want a sense of where we're heading, here's the route map: the New York Times published an insane warmongering Senator's push to turn our own soldiers on protesting Americans, people (including many Times journalists) complained, the Times tried to defend the decision and then admitted "mistakes were made," and a bunch of very silly people who pretend to be "serious thinkers" whined nonsensically about free speech and the "unwillingness to listen to opposing ideas," all while refusing to listen to opposing ideas. And all of it's nonsense: editorial discretion is not a free speech issue, and calling out a terrible paean to fascism is not an unwillingness to listen to "opposing ideas."

Off we go.

If you've been paying attention to the world of media in the past few days, you've probably already seen some of the loud and raucous debate. On Wednesday, the Times made the incredibly bad decision to publish the truly awful op-ed from Arkansas Senator Tom Cotton, suggesting that President Trump should send the US military to invade US cities because, while the vast majority of protests around the nation have been peaceful (other than all those disrupted by police violence), there have been a few cases of some people breaking windows, setting fires, and stealing goods. There seems to be little evidence that this is as widespread a problem as the President and his supporters make it out to be, but in an effort to control the narrative, they're claiming that widespread violence and attacks are overshadowing the protests.

Cotton's op-ed is bad. Just to take one bit of it, this paragraph is utter hogwash:
Apple has never looked too kindly upon users actually repairing their own devices. The company's ham-fisted efforts to shut down, sue, or otherwise imperil third-party repair shops are legendary. As are the company's efforts to force recycling shops to shred Apple products (so they can't be refurbished and re-used). As are Apple's often comical attacks on essential right to repair legislation, which only sprang up after companies like Apple, Microsoft, Sony, John Deere, and others spurred a grass-roots "right to repair" counter-movement via their attempts to monopolize repair.

Since 2017 or so, Apple has been harassing the owner of an independent repair shop in Norway named Henrik Huseby. After Norwegian customs officials seized a shipment of 63 refurbished replacement screens for the iPhone 6 and 6S on their way to Huseby's repair shop, Apple threatened to sue the store owner unless he agreed to stop using aftermarket screens and pay a hefty settlement. Huseby decided to fight the case and, despite being out-manned five Apple lawyers to one, managed to win in 2018. At least initially.

Apple then took its complaint to Norway's Court of Appeals, claiming that the refurbished parts used by Huseby "unlawfully appropriated Apple's trademark." The appeals court ruled in Apple's favor, and this week, Norway's Supreme Court upheld that decision (pdf). Needless to say, the US and overseas right to repair movement isn't particularly impressed by this court-sanctioned bullying of a small business owner:
A COP IN EVERY HOUSE: that's the American dream. Maybe they can't enter the home, what with the Fourth Amendment and all, but they can be invited to every online get-together thrown by apps that promise neighborhood unity while asking law enforcement to get in on the action.

Ring, Amazon's doorbell/camera company, has made the relationship between neighborhood "sharing" and law enforcement explicit. It's right there in the term sheets. While Ring takes the PR reins to steer the official discourse, it's offering cops steeper discounts on Ring cameras they can hand out to citizens in exchange for pushing those citizens to sign up for Neighbors, Ring's snitch app. Once attached to the app, Ring makes sharing of camera footage seamless and encourages homeowners to report suspicious people and activities. Unsurprisingly, many of the suspicious people reported are minorities.

It's not just Ring and Neighbors, as Citylab has discovered. Nextdoor -- a hyperlocal Facebook clone (and hotbed of bigotry) -- is courting cops as forcibly silenced partners in its plans to increase its user base.
As Techdirt has reported, the open access movement seeks to obtain free access to research, particularly when it is funded by taxpayers' money. Naturally, traditional academic publishers enjoying profit margins of 30 to 40% are fighting to hold on to their control. Initially, they tried to stop open access from gaining a foothold among researchers; now they have moved on to the more subtle strategy of adopting and assimilating it -- rather as Microsoft has done with open source. Some advocates of open access are disappointed that it has not led to any significant savings in the overall cost of publishing research. That, in turn, has led many to urge the increased use of preprints as a way of saving money, liberating knowledge, and speeding up its dissemination. One reason for this is a realization that published versions in costly academic titles add almost nothing to the freely-available preprints they are based on.

An excellent new survey of the field, "Preprints in the Spotlight," rightly notes that preprints have attained a new prominence recently thanks to COVID-19. The urgent global need for information about this novel disease has meant that traditional publishing timescales of months or more are simply too slow. Preprints allow important data and analysis to be released worldwide almost as soon as they are available. The result has been a flood of preprints dealing with the coronavirus: two leading preprint servers, medRxiv and bioRxiv, have published over 4,500 preprints on COVID-19 at the time of writing.

The publishing giant Elsevier was one of the first to notice the growing popularity of preprints. Back in 2016, Elsevier acquired SSRN, the leading preprint server for the social sciences. Today, Elsevier is no longer alone in seeing preprints as a key sector. A post on The Scholarly Kitchen blog describes how all the major publishers are active in preprints:
Even a cursory review of just the headlines on our posts about YouTube's ContentID will reveal a theme. That theme mostly centers around how the automagic copyright detection system YouTube put in place is mostly useful for creating collateral damage on non-infringing material, often at the expense of the rightsholders themselves. Whenever this happens, there are usually apologies issued, blame cast on ContentID for the mistake, and then everything continues on with no changes made. Which is absurd. These situations identify a flaw in the ContentID system, or in the use of an automated system of any kind, and yet we never do anything about it.

Which is why this sort of thing keeps happening. The most recent example concerns tons of Super Mario Bros. speedruns being hit with copyright notices because the YouTube channel for the Guinness Book of World Records uploaded a record-holding speedrun itself. From there, ContentID did its thing.
We are so hip here at Techdirt that we were writing about Section 230 long before it was cool. But even though everyone and their President seems to be talking about it these days, and keen to change it, it does not seem like everyone necessarily knows what it actually says or does. Don't let this happen to you!

The embedded video below is of a presentation I gave earlier this year at ShmooCon, where I explained the magic of Section 230 through the lens of online cat pictures. As we head into more months of lockdown, our need for a steady supply of cat pictures has never been more important. Which means Section 230 has never been more important.

In this presentation I explain why we have Section 230, what it does, why it works, and how badly we jeopardize our supply of online cat pictures (as well as a lot of other good, important stuff) if we mess with it.

Tune in!
For many years, we've said that if the public library were invented today, the book publishers would sue it out of existence. It appears that the big book publishers have decided to prove us right, as they have decided to sue the Internet Archive for lending ebooks without a license.

Over the last few months, we've discussed why publishers and authors were overreacting in their verbal attacks on the Internet Archive's decision to launch a "National Emergency Library" to help out during a pandemic. While many publishers and authors declared this to be "piracy," that did not square with reality. The Internet Archive was relying on a variety of precedents regarding the legality of libraries scanning and lending books, as well as around fair use, to argue that what it was doing was perfectly legal. Indeed, the deeper you looked at the issue, the more it looked like the publishers and authors were upset with the Internet Archive for being a library, since libraries don't need special licenses to lend out books.

In other words, this was yet another attack on property rights. Publishers and some authors were trying to argue that the Internet Archive needed extra licenses to lend out legally made scans of legally obtained books. And to respond to a few common criticisms of the NEL: the Archive did this because so many libraries and schools around the world were shuttered due to the pandemic, meaning that millions of books were literally collecting dust on shelves, un-lendable. More importantly, the NEL was not targeting recent releases (all books in the NEL are over 5 years old, and the commercial life of nearly every book is much shorter than that). Finally, contrary to some claims, the books in the NEL are not "bit for bit copies" of high quality ebooks. They are relatively low quality scans. If a more legit version is available, nearly any reasonable person would go for that instead (indeed, I've personally purchased multiple books after first borrowing copies from the Open Library before deciding to get a permanent copy). Also, most of the books available in the NEL are simply not available in ebook format at all, meaning they're not available at all during the pandemic for many people.

There was some chatter that publishers might choose not to sue the Internet Archive over this, because losing this fight would seriously challenge a bunch of other copyright claims that they rely on. But, come on. These guys are so obsessed with copyright, how could they not sue? So, earlier this week, all the big publishers teamed up to sue the Internet Archive, represented by former RIAA lawyer Matt Oppenheim, who has a long history of being on the bad side of nearly every big copyright case.

Here's the thing, though: the publishers didn't just decide to sue over the National Emergency Library; they're also suing over the entire "Controlled Digital Lending" process. That's the program the Authors Guild has been whining about, which is the underpinning of the NEL. The CDL/Open Library program involves letting libraries lend out digital books if they retain a physical copy of the book on the shelf (maintaining a one-to-one relationship between books lent out and books the libraries have in their possession). The NEL took away that limitation, with the argument that this was allowed under the Archive's reading of fair use in the midst of a pandemic with so many books locked up.

While I support the NEL, I can recognize that courts may not buy its fair use arguments.
On the CDL/Open Library front, though, that's just blatantly attacking a very standard library procedure. There can be no argument of "lost revenue" from the CDL, unless you're attacking the very basis of libraries themselves. And that's what the lawsuit appears to do.
Facebook is implementing end-to-end encryption in its Messenger service. This has made a number of government officials unhappy. Claiming this will lead to an increase in child sexual exploitation, multiple governments (including our own) have pounded their respective tables in Facebook's direction, demanding the company not give its users secure communications.

Now, it's more than just government officials. The BBC reports some of Facebook's shareholders have been swayed by the international table-pounding.
The hit FRESHeBUDS earbuds are back and better than ever with the new Pro model. Even more water and sweat resistant to stand up to any outdoor activity, these new buds have enhanced battery life, and automatically pair to your phone when pulled apart so you don't have to go through any setup. They're on sale for $30.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Center for Democracy and Technology appears to be the first out of the gate in suing Donald Trump to block his silly executive order on Section 230. In the aftermath of the EO being issued, I know some people wondered if it was actually worth suing over, since it does so little in practice. But, as I discussed in this week's podcast, it can still be used to create havoc.

The basic argument in the lawsuit is that the executive order is clearly retaliatory against Twitter for its 1st Amendment protected speech in fact-checking the President, and thus violates the 1st Amendment:
On one hand, you've got wireless carriers implying that 5G is some type of cancer-curing miracle (it's not). On the other, we have oodles of conspiracy theorists, celebrities, and various grifters trying to claim 5G is some kind of rampant health menace (it's not). In reality, 5G isn't actually interesting enough to warrant either position, but that's clearly not stopping anybody in the post-truth era.

But it's all fun and games until somebody gets hurt.

Baseless conspiracy theories about the health impact of 5G have gone next level during the pandemic. To the point where facts-optional nitwits are not only burning down cell towers in the UK, but putting razor blades and needles underneath protest posters on telephone poles (apparently you solve public health risks by... putting people's health at risk?). We've seen a few attacks on telecom infrastructure and employees here in the States, but it's been notably worse in the UK, where telecom engineers are now being routinely insulted and threatened:
With protests sparked by the killing of George Floyd by Officer Derek Chauvin erupting all over the nation, states are beginning to ask the National Guard to step in. The epicenter of these demonstrations is Minneapolis, Minnesota, where the National Guard has already been deployed to handle protests and enforce the curfew.

But it's not just Minnesota. The military apparently has plans to intervene in several other states if necessary, as Ken Klippenstein reports for The Nation.
Defamation lawsuits often fail because of the high bar plaintiffs need to meet to prove defamation -- especially of a public figure. But, while there are lots of ways to lose a defamation lawsuit as a plaintiff, my favorite must certainly be the concept of a libel-proof plaintiff. This would be the notion that a plaintiff cannot be libeled or defamed if that plaintiff's reputation is so absolutely horrendous that further damage to it is impossible.
The intersection of school administration and law enforcement leads directly to insanity. All logic goes out the window when school administrators come across something that makes them feel slightly uncomfortable. Adding cops to the mix doesn't help anything. It only serves to turn every mildly misbehaving student into a criminal.

We're here to talk about bombs. I'm sorry. Let me clarify. Not actual bombs. Drawings of bombs. Drawings created by students who are likely to draw bombs, guns, and general violent mayhem without actually wishing any of that on their fellow students.

It took a couple of rounds in court to actually set this right. We've covered similar insanity over drawings of bombs here at Techdirt before, like the (temporarily) indefinite suspension of an autistic student who drew a bomb that looked like something straight out of a Looney Tunes short. This bomb drawing was a little more intricate, but no more threatening than the round black bomb with a fuse we've all seen in any number of cartoons no one saw fit to prosecute. (h/t Ari Cohn)

The Wisconsin Court of Appeals has finally ended the madness that began with terroristic threat and disorderly conduct charges being leveled against a middle school student. The decision [PDF] recounts the unfortunate chain of events that ultimately needed to be addressed by the penultimate level of the state's criminal justice system.
Once again, the people that serve the public have failed to understand the public. Trying to turn citizens into narcs never works out as well as government agencies envision. The end result is almost always a useless waste of limited resources.

Eons ago, when the coronavirus was still a concern, the mayor of New York City set up a snitch line for residents to report social distancing violations. Instead of hot neighbor-on-neighbor action, the city's 311 line received a bunch of middle fingers, dick pics, and Hitler memes. When Ohio's government set up a snitch line for employers to report employees who were collecting unemployment instead of coming in to their COVID-encrusted workplaces, an enterprising coder put together a script that clogged the tip bin with algorithmically-generated garbage.

Now that there's civil unrest all over the place in response to the latest killing of a black person by a white police officer, the Dallas PD is asking citizens to step up… and report other citizens for exercising their First Amendment rights. It has not worked out well for the police, as Caroline Haskins reports for BuzzFeed.
Over 90% of Americans feel like they have no control over their online privacy. It is not hard to understand why so many of us feel so powerless when it comes to using the Internet, nor is the solution to such a pervasive feeling all that complicated.

It boils down to rules and liability -- or, in other words, making sure that if a company violates your privacy under the law, there is an inescapable penalty. The clearer and more direct the path to holding a company accountable for violating your privacy -- much like your physical health, property rights, emotional wellbeing, or other legally protected interests -- the more confidence will return to the Internet marketplace.

But we don't have these clear, enforceable rights in today's American consumer privacy legal system for the vast majority of Internet privacy related activity. In fact, when the next Google or Facebook privacy scandal rolls around, think back to the last one -- likely just a few months old -- and ask how much the company paid in damages and whether it had to compensate individual people for the violation.

In many cases, the answer is going to be no penalty at all, which then feeds into users' sense of powerlessness. But the fact that companies often pay no penalty, and the fact that we do not have laws in place to remedy these privacy harms, is a choice we have made. It is not the natural order of things, and it is not inevitable.

We have, as a society, made decisions under our intellectual property laws where absolutely no liability is allowed in order to promote another value that isn't about profit, namely our freedom of expression. For example, the practice of criticizing a film on YouTube while playing portions of it in the background is considered a fair use. This means that, despite copyright holders having exclusive rights over the public performance of their work, we have decided to extinguish liability when it involves the expression of criticism.

In the absence of fair use, the critic using the film, as well as YouTube, would be directly liable for a lot of money for playing portions of it. However, we counterbalance and limit the economic right of the filmmaker in order to promote free speech values through fair use. In essence, we keep a liability-free zone for criticism, and that is generally seen as a net positive for users. It also promotes the creation of open platforms, allowing those speakers to discover audiences and build engagement.

But in consumer privacy, we have not seen nearly the same benefit yielded back to consumers in exchange for the mostly liability-free zone. There is no race to the top in guarding consumers' personal information, because the profit-maximizing effort isn't about augmenting our privacy; it is about tearing it down as much as possible for profit. This is why we keep getting these privacy scandals. There is no need to apply morality to the analysis, as often happens when people observe corporate behavior, but rather to ask the simple question of how profit maximization (which corporations have to pursue under the law) is being countered by law to reflect our expectations.

When we look at the problem of consumer privacy from this angle, it becomes fairly clear that private rights of action for consumer personal privacy would be transformative.
No longer would a corporation view experiments in handling personal information as a generally risk-free, profit-making proposition if financial damages and a loss of profit were involved.

Industry Wants a Consumer Privacy Law -- Just So Long As You Can't Sue Them

The long road of industry opposition -- and the extreme hypocrisy of now pretending to endorse passage of a comprehensive consumer privacy law -- is worth reflecting on in order to understand why we have no law today.

If we go back a little over a decade to a privacy scandal that launched a series of congressional hearings, we find a little company called NebuAd that specialized in deep packet inspection. NebuAd's premise was scary in that it proposed to allow your ISP to record everything you do online and then monetize it with advertisers. I was a staffer on Capitol Hill when NebuAd came to Congress to explain its product, and I still remember the general shock at the idea being proposed. In fact, the idea was so offensive it garnered bipartisan opposition from House leaders and ultimately led to the demise of NebuAd.

The legislative hearings that followed the growing understanding of "deep packet inspection" led to discussions of a comprehensive privacy bill in 2009. But despite widespread concern with developing industry practices as the technology evolved, we never got anywhere, out of concern for the freemium model of Internet products. It is hard to remember this time, but back then the Internet industry was still a fairly new thing to the public and Congress. The iPhone had launched just two years earlier, and the public was still in the process of transitioning from flip phones to smartphones. Only three years prior had Facebook become available to the general public. Google had only a small handful of vertical products, the newest being Google Voice -- which allowed people to text for free at a time when each text you sent cost a fee.

All of these things were seen as net positives for users, yet all hinged on the monetization of personal information being relatively liability free. So for years policymakers, including an all-out effort by the White House in 2012, searched for a means to balance privacy with innovation. Companies generally known as "big tech" today were still very sympathetic entities, in that the innovations they continued to produce were seen as both novel and useful to people. Therefore, their involvement was actively solicited by the White House in trying to jointly draft a means to promote privacy while allowing the industry to flourish.

Ultimately, it was a wasted effort, because what industry actually wanted was the liability-free zone baked into law with little regard to the increasing degradation of user privacy. It used to be that most of the Internet companies still competed with one another, forcing them to try to be more attractive to users with greater privacy settings. Even Google Search was facing a direct assault by Microsoft with its fairly new Bing product.

As efforts to figure out a privacy regime for Internet applications and services were being stalled by the Internet companies, progress was being made with the substantially more mature and already regulated Internet Service Provider (ISP) industry. Congress had already passed a set of privacy laws for communications companies under Section 222 of the Communications Act, so a great many ISPs, being former telephone companies, had a comprehensive set of privacy laws applicable to them (including private rights of action).
But their transition into broadband companies began to muddy the waters, particularly as the Federal Communications Commission started to say in 2005 that broadband was magically different and therefore should be only quasi-regulated.

Having learned nothing from the fiasco of NebuAd and the potential banning of "deep packet inspection" for ISPs, the broadband industry kept rolling out other privacy-invasive ideas. Things such as "search hijacking" -- where your search queries were monitored and rerouted -- became a thing. AT&T began forcibly injecting ads into WiFi hotspots at airports, wireless ISPs preinstalled "Carrier IQ" on phones to track everything you did (which ended when people sued them directly under a class action lawsuit), and Verizon invented the "super-cookie," prompting a privacy enforcement response from the FCC in 2014.

Even after the FCC stopped treating broadband as uniquely different from other communications access technologies in 2015, the industry continued to push the limits. That same year, telecom carriers partnered with SAP Consumer Insight 365 to "ingest" data from 20 to 25 million mobile subscribers close to 300 times every day (we do not know which mobile telephone companies participate in this practice, as that information is kept secret). That data is used to inform retailers about customer browsing, geolocation, and demographic data.

So, unsurprisingly, the FCC came out with strong, clear ISP privacy rules in 2016 that continued the long tradition of privacy protections for our communication networks. However, the heavily captured Congress, which had not taken a major pro-privacy vote on Internet policy in close to a decade, quickly took action to repeal the widely supported FCC privacy rules on behalf of AT&T and Comcast. Ironically, the creation of ISP privacy rules by the FCC only happened because Congress had created a series of privacy laws, including private rights of action, for various aspects of our communications industry more than a decade prior.

While many of the leaders of the ISP privacy repeal effort claim to be foes of big tech, they have done next to nothing to move a consumer privacy law. In fact, all they did was solidify the capture of Congress by giving AT&T and Comcast a reason to team up with Google and Facebook in opposing real privacy reform. EFF witnessed this joint industry opposition first hand as we attempted to rectify the damage Congress did to broadband privacy with a state law in California. In fact, between the ISPs and big tech, we had absolutely no new privacy laws passed in the states in 2017 in response to Congress repealing the ISP privacy rules.

Despite the industry's arrogant belief that it could sustain perpetual capture at the legislative level, along came an individual named Alastair Mactaggart, who personally financed a ballot initiative on personal privacy that later became the California Consumer Privacy Act (CCPA). While industry could "convince" a legislator of the righteousness of its cause with political contributions, it had no real means to convince that individual that the status quo was good.
After Cambridge Analytica and wireless carriers selling geolocation data to a black market for bounty hunters, virtually no one thinks this industry should be unregulated on privacy.

So rather than continue to publicly oppose real privacy protections, the industry has opted to pretend it supports a law -- just so long as it gets rid of state laws (including state private rights of action), putting all our eggs into the basket of a captured regulator. In other words, it will only support a federal privacy law if that law further erodes our personal privacy rather than enhancing it.

This opening offer from industry is a wild departure from other privacy statutes, which have all included an individual right to sue: wiretaps, stored electronic communications, video rentals, driver's licenses, credit reporting, and cable subscriptions. Not to miss their marching orders, industry-friendly legislators were quick to put together a legislative hearing on consumer privacy that had literally no one representing consumers.

But this game -- in which industry holds and finances enough legislators to prevent any real law from passing -- will only last so long. After all, its effort to ban states from passing privacy laws is effectively dead now that the Speaker of the House, who hails from California, has made clear she will not undermine her own state's law on behalf of industry. Furthermore, Senator Cantwell, a leader on the Senate Commerce Committee, has introduced comprehensive legislation that includes a private right of action, and more than a dozen Senators led by Senator Schatz have endorsed the concept, supported by EFF, of creating an information fiduciary. As more and more legislators make publicly clear the parameters of what they consider a good law, it becomes harder for industry to sustain its behind-the-scenes opposition. But we are still far from the end, which means more has to be done in the states until enough of Congress can break free of the industry shell game.

If We Do Not Restore Trust in Internet Products, People Will Make Less Use of the Internet, and That Comes with Serious Consequences

As we wrestle with containing COVID-19, a solution being proposed by Apple and Google in the form of contact tracing is facing a serious hurdle. A majority of Americans do not want to use health data applications and services from these companies because they do not trust what the companies will do with their information. Since they can't directly punish these companies for abusing their personal health data, they are exercising the only real choice they have left: not using them at all.

Numerous studies from federal agencies such as the Department of Commerce, the Federal Trade Commission, and the FCC all point to the same end result if we do not have real privacy protections in place for Internet activity: people will simply refrain from using applications and services that involve sensitive uses such as healthcare or finances. In fact, lack of trust in how our personal information is handled has a detrimental impact on broadband adoption in general, meaning a growing number of people will just not use the Internet at all in order to keep their personal information to themselves.

Given the systemic powerlessness users feel about their personal information when they use the Internet, the dampening effect it has on fully utilizing the Internet, and the loss of broadband adoption, it is fairly conclusive that the near liability-free zone is an overall net negative as public policy.
Congress should be working to actively give users back their control, instead of letting the companies with the worst privacy track records dictate users' legal rights. Any new federal data privacy law must not preempt stronger state data privacy rules, and it must contain a private right of action.

While special tailoring has to be done for startups and new entrants with limited finances, to ensure they can enter the market under the same conditions Google and Facebook enjoyed when they launched, the same is not true for big tech. With clear lines of liability and rules established for major corporate entities, efforts to launch the next privacy-invasive tech will be scrutinized by corporate counsel eager to shield the company from legal trouble. That, ultimately, is the point of having a private right of action in law. It is not to flood companies with lawsuits, but rather to get them to operate in a manner that avoids lawsuits.

As users begin to understand that they have an inalienable legal right to privacy when they use the Internet, they will begin to trust products with greater and more sensitive uses that will benefit them. This will open new lines of commerce as a growing number of users willingly engage in deeply personal interactions with next-generation applications and services. For all the complaints industry has about consumer privacy laws, the one thing it never takes into account is the importance of trust. Without it, we start to lose the full potential of what the 21st century Internet can bring.

Ernesto Falcon is Senior Legislative Counsel at the Electronic Frontier Foundation, with a primary focus on intellectual property, open Internet issues, broadband access, and competition policy.
Another day, another example of copyright acting as censorship. The folks over at Unicorn Riot have been covering the protests around the country, but apparently they can't do that as they'd like because copyright is getting in the way. Unicorn Riot announced on Twitter that video interviews they had conducted and posted have been pulled down from both Facebook and YouTube due to copyright claims such as this one:
Gaining new knowledge doesn't have to be hard. In fact, it can be easy and fun. Learnable is an eLearning platform providing you with handpicked lessons and courses on coding languages. Choose from a wide range of different courses to suit your goals — C#, C++, PHP, Swift, Java, SQL and more! And if all this coding stuff gets kind of tiring, you can even take a break with the built-in Meditation Mini-App. Learnable is available on both iOS and Android, meaning you can learn to code anytime and anywhere. It's on sale for $40.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
First off, I would like to thank Mike Masnick and Techdirt for publishing my post on the George Floyd killing and the (in my eyes) justifiable destruction of police property as an answer to years of injustice and "bad apple" excuses. Very few sites would have published such a post. Most would have rejected it after reading the title.

I also appreciate the commenters who weighed in, including those who disagreed with me. It was a strong stance for me to take, and I expected to be drowned in criticism. That I wasn't buried by critics perhaps demonstrates my points were well-made. Or it may just indicate the general public is sick and tired of cop bullshit -- bullshit cops far too often walk away from, thanks to generous union contracts, the almost-obligatory judicial application of qualified immunity, or the continued sheltering of police officers from personal responsibility by legislators.

But I did want to respond to one comment in the thread in particular. This comment suggested I was off-base and that peaceful protests are productive and have resulted in systemic changes. Despite the evidence I had laid down that being peaceful and seeking change through acceptable routes has been a net loss over the last 50+ years, a commenter suggested otherwise.

This is the central argument of the comment submitted by one of our many anonymous commenters. (Just a reminder: we love anonymous commenters and would never demand you give us all your vitals in exchange for your ability to comment on articles. We also allow you to turn ads off if you wish, with no financial obligation. That being said, there are multiple ways to support this fiercely independent site, so click through if you'd like to help. Thanks!)
For a long time, we've noted how broadband usage caps are bullshit. They don't actually help manage congestion, they have nothing to do with "fairness," and they're little more than glorified price hikes on the backs of captive customers in uncompetitive markets. Worse, they can be abused anti-competitively by incumbent broadband providers, one of the major triggers of the net neutrality debate.

For example, AT&T has for a while made its own streaming TV services exempt from its usage caps, while competing streaming services (Netflix, Amazon, whatever) count against a user's monthly data allotment. This gives AT&T a distinct advantage, in that users are incentivized to avoid competing services lest they face completely arbitrary and unnecessary usage limits and fees. It's bullshit. It has always been bullshit.

AT&T has added another layer to this bullshit cake. The company has long experimented with something called "sponsored data," which lets companies pay AT&T extra if they want to be exempt from AT&T's (again, completely arbitrary and unnecessary) broadband usage caps. This adds yet another anti-competitive layer to the equation by letting a deep-pocketed company (say: ESPN) get a distinct advantage over smaller startups that can't afford to pay AT&T's toll.

Last week AT&T launched yet another streaming TV service, HBO Max. This service also won't count against AT&T's usage caps and overage fees, AT&T confirmed to The Verge:
Back in 2013, we made clear our concerns with the Italian communications watchdog AGCOM setting up new administrative copyright enforcement powers that would allow it to simply up and declare sites to be infringing, at which point ISPs would be ordered to block those websites. Soon after that, Italy's public prosecutor seemed to decide that part of his job was also to order websites blocked based solely on his own say-so.

In the latest such order from the Public Prosecutor's office declaring a list of sites to be infringing, Italy has apparently decided that the famous and wonderful Project Gutenberg website, a repository of public domain books, must be blocked. I don't know about the other 27 sites listed in the order, but Project Gutenberg is no piracy site. Yet here it is at number 25 on the list:

They even go to the trouble of looking up the whois info. You would think that maybe someone would recognize that a site founded in 1996 might not be a giant piracy site:

The Italian Library Association is asking what the fuck is going on (translation via Google Translate):
When we last talked about the Geo Group, a company making hundreds of millions of dollars running private prisons, one of its executives was attempting to improve the company's reputation by constantly removing all the dirt from the Wikipedia page about the company. In trying to do this, of course, the company actually amplified the controversies listed on Wikipedia and, having been caught trying to scrub the internet of its own sins, found itself in headlines as a result. At present, the Wikipedia page still lists those controversies, but more on that in a moment.

Because the latest bit of news from Geo Group is that it is suing Netflix over the use of its logo on a fictional prison in Messiah.
Keep your mitts off cellphones if you don't have a warrant. That's the message at least one court is sending to law enforcement. A 2014 decision by the US Supreme Court introduced a warrant requirement for cellphone searches. Since then, cops mostly seem to be complying with the mandate. Of course, this half-assed analysis of mine rests solely on federal cases I've managed to catch drifting downstream in the internet flotsam, so it's far from conclusive. But -- unlike the SCOTUS decision erecting a warrant requirement for historic cell site location info -- there doesn't seem to be much gray area in the Riley decision for law enforcement to explore.

But what exactly is a "search" in the Fourth Amendment/Riley context? It depends on which court you ask. The most straightforward reading of the Riley decision would be a warrant requirement for a search of a phone's contents. But a couple of courts have read the decision's protections even more expansively: Riley doesn't just cover full-fledged searches of device contents. It also covers sneakier peeks at suspects' phones.

In 2016, a federal court ruled that the FBI's opening of a flip phone (roughly one week after the suspect's arrest) violated the Fourth Amendment. Even the recognition that the home screen of a phone was subject to a "diminished" expectation of privacy couldn't save the feds' search. The court said the FBI's examination of the unexposed area of the phone -- the screen concealed when the phone is closed -- was a search and subject to Riley. To rule otherwise would be to allow the government to use similar cursory examinations to dodge the warrant requirement or unlawfully seek info to buttress affidavit claims in warrant requests for a more thorough search.
This week, we've got a special cross-post from 16 Minutes On The News — an excellent tech podcast by a16z that's well worth subscribing to. For the latest episode, host Sonal Chokshi interviewed Mike all about Section 230 and Trump's recent executive order about social media — and as you might imagine, it took a lot longer than 16 minutes! We've got the complete interview here on the Techdirt Podcast.
Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
The recent Copyright Office report on Section 512 of the DMCA (the notice-and-takedown provisions) has been frustrating on many levels, including the fact that it simply ignores that the public is a stakeholder (actually the main stakeholder) in copyright policy. But one of the most frustrating parts of the report is that it ignored a ton of testimony (including some provided by me) about how frequently the 512 notice-and-takedown process is abused (either on purpose or accidentally) to take down non-infringing content. The Copyright Office acts as if this is a fringe issue, when the data suggests it's a massive problem impacting millions.
And just to put a pretty fine point on it, you probably heard about or (hopefully) saw the launch this weekend of the SpaceX Dragon capsule, the first private manned mission to space, done in conjunction with NASA. It was pretty cool, and a ton of people tuned in to watch it live. Of course, many also tuned in the previous Wednesday to try to watch the originally planned launch, before it got scrubbed due to weather. NASA had a wonderful live stream going for both (which I watched). And works produced by NASA are in the public domain -- which is why many other broadcasters were easily able to use them as well.
But because the numbskulls at NBC Universal work with the default mindset that everything must be owned -- and if everything must be owned, then obviously anything NBC Universal broadcasts must be owned by NBC Universal -- it made bogus copyright claims on a ton of others using NASA's footage, including NASA itself, leading to NASA's own public domain video being blocked on NASA's own YouTube page.
As Facebook employees stage a digital walkout and make their thoughts known about the social media giant's choice not to intervene in any way on "political posts," especially those of President Donald Trump, some have called for the newly created Oversight Board to step up and force a change in Facebook. While the official answer is that the Board can't start yet (because, supposedly, laptops haven't been handed out), the real and very simple reason the Facebook Oversight Board won't get involved is that it can't. It wasn't created to function that way, it isn't staffed for something like this, and ultimately, due to its relationship with Facebook, anything it said on this matter right now would be taken in an advisory capacity at best. Facebook, understandably not wanting to actually give any of its power away, played confidence games with the idea of external, independent oversight, and it's clear that it fooled a lot of people. Let me explain.
In three-card monte, the huckster keeps shuffling three playing cards until the victim is likely to guess wrong about where the "money card" is hiding, then flops the cards one by one. For Facebook's prestidigitation on content moderation, last month's announcement of the initial 20 highly regarded experts tapped as members of its independent oversight board is the second card flop, and predictably, the money card is not there.
The ongoing sleight of hand performed by Facebook is subtle but fundamental. The board was set up as truly independent in every way, from member selection to case selection to the board's internal governance. In terms of its scope and structure, it is guided by previously released bylaws to primarily handle a small set of content removal cases (which come up to the board after exhausting the regular appeals process) and to direct Facebook to change its decisions in those cases. To a much lesser extent, the Board can -- although time and resources are not allocated for this -- provide input or recommendations about Facebook's content moderation policies. Facebook, however, is not obligated in any way to follow those policy recommendations; it merely has to respond within 30 days and describe any action it may take.
In the pages of the San Francisco Chronicle's Open Forum, and elsewhere, I and others have called attention to this empty gesture as far back as September 2019, at the first card flop: the public release of the Board's charter and bylaws. The project continued unabated and unchanged as friendly experts extolled the hard work of the team and preached optimism. Glaring concerns over the Board's advisory-at-best, non-binding power were not only left unaddressed but actually dismissed, with assurances that board member selection -- last month's flop -- would be where the money card is. Can you spot the inconsistency? It doesn't matter if you have the smartest independent advisors if you're not giving them the opportunity to actually impact what you do. Of course, the money card wasn't there.
In early May, the Menlo Park-based company released the list of its Oversight Board membership, with impressive names (former heads of state, Nobel Prize laureates and subject matter experts from around the world).
Because the Board is truly independent, Facebook's role was minimal: beyond coming up with said structure and bylaws in consultation with experts from around the world (full disclosure: the author was involved in one round of consultations in mid-2019), it only directly chose the four co-chairs, who were then heavily involved in choosing the other 16 members. A lot of chatter around this announcement focused, predictably, on who the people are, whether the board is diverse, whether it is experienced enough, and so on, while some even focused on how independent the board truly is. As the current crisis is showing, none of that matters.
As we witness the Board's institutionalized, structural and political inability to perform oversight, it is becoming entirely clear that Facebook is not at all committed to fixing its content moderation problems in any meaningful way, and that political favor is more important than consistently applied policies. There is no best-case scenario anymore, as the Board can only fail or infect the rest of the industry. And what is a lose-lose for all of us will likely still be a win-win for Facebook.
The bad scenario is the likeliest: the Board is destined to fail. While Zuckerberg's original ideas of transparency and openness were great on paper, the Board quickly turned into little more than a potential shield against loud government voices (such as Big Tech antagonist Sen. Hawley). Not only is that not working -- Sen. Hawley responded to the membership list with even harsher rhetoric -- but the importance placed on optics over the reality of solving this problem is even more obvious now. Giving the Board few, if any, real leverage mechanisms over the company can at most build a shiny Potemkin village, not an oversight body. If we dispense with all the readily available evidence to the contrary and give Facebook the benefit of the doubt that it tried, the alternative explanations for this rickety and impotent construction are not much better. It may be that having the final say over difficult cases -- the Board's main job -- is not something Facebook was comfortable doing by itself anyway (and who can blame it, given the pushback the platform gets with any high-profile decision). Or it may be a bizarre allegiance to the flawed constitutional-law perspective that Facebook can build itself a Supreme Court, which makes the Board act as an appellate court of sorts, with a vague potential for creating precedent rather than truly providing oversight.
If the Board's failure doesn't tarnish the prospect of a legitimate private governance model for content moderation, there's a lot to learn about how to avoid unforced errors. First, we can safely say that while corporations may be people, they are definitely not states. Creating a pseudo-judiciary without any of the accouterments of a liberal-democratic state -- such as a hard-to-change constitution, co-equal branches and some sort of social contract -- is a recipe for disaster. Second is a fact that theory, literature and practice have long argued: structure fundamentally dictates how this type of private governance institution will run. And with an impotent Board left to mostly bloviate after the fact, without any real means to change the policies themselves, this structure clearly points to a powerless but potentially loud "oversight" mechanism, pushed to the front as a PR stunt but unequipped to deal with the real problems of the platform.
Finally, we see that even under intense pressure from numerous transpartisan groups, and even with a potential openness to fixing a wicked problem, platforms are very unwilling to actually give up, even partly, their role and control in moderating content -- though they will gladly externalize their worst headaches. If their worst headaches were aligned with the concerns of their users, that would be great, but creating "case law" for content moderation is an exercise in futility, as the company struggles to reverse-engineer Trump-friendly positions out of its long-standing processes. We don't have lower court judges who get to dutifully decide whether something is inscribed in the board's previous actions. We have either overworked, underpaid and scarred people making snap decisions every minute, or irony- and nuance-illiterate algorithms poised to interpret these decisions mechanically. And more to the point, we have executives deciding to provide political cover to powerful players rather than enforce their own policies, knowing full well they're not beholden to any oversight, since even if the Board were already up and running, by the time it ruled on this particular case, if ever, the situation would no longer be of national importance.
As always, there still is a solution. The Oversight Board may be beyond salvaging, but the idea of a private governance institution -- where members of the public, civil society, industry and even government officials can come together and try to reach common ground on what the issues are and what the solutions might be -- should still flourish, and should not be thrown away simply because Facebook's initial attempt was highly flawed. Through continued vigilance and genuine, honest critiques of its structure and real role in the Facebook ecosystem, the Oversight Board can, at best, register as just one experiment of many, not a defining one, and we can soldier on with more diverse, inclusive, transparent, and flexible industry-wide dialogues and initiatives.
The worst-case scenario is if the Board magically coasts through without any strong challenge to its shaky legitimacy or its impotent role. The potential for this to happen is there, since there are more important things in the world to worry about than whether Facebook's independent advisory body has any teeth. In that case, Facebook intends, one way or another, to franchise it to the rest of the industry. And that would be the third and final flop. However, as I hope you've figured out by now, the money card wouldn't be there either. The money card -- the card Facebook never actually intended to give away or even show us, the power over content moderation policies -- was never embedded in the structure of the board, its membership, or any potential industry copycats that could legitimize it. This unexpected event allowed us to take a peek at the cards: the money card is still where it was all along, in Facebook's back pocket.
David Morar is Associate Researcher at the Big Data Science Lab at the West University of Timisoara, Romania.
The Ultimate Learn To Play Piano Bundle has 10 courses designed to take your music skills from beginner to advanced. You'll learn music theory, how to read and write music, and how to play chords and chord progressions. You'll also learn how to compose melodies, how to train your ears to recognize different types of chords and keys, and much more. It's on sale for $35.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Over the last few months, it's been weird to watch how, any time we point out that there's no actual evidence of anti-conservative bias in the content moderation practices of social media, some in our comments absolutely lose their shit. One commenter has been on a rampage just in the last week, declaring me an evil liar for refusing to admit the "obvious" fact that there's anti-conservative bias in moderation. However, when I and others ask these people for that evidence, it never seems to show up.
I imagine they are not going to like this story either. A new study from CrowdTangle, a data analytics firm that is owned by Facebook and has access to Facebook data, seems to suggest that if there's any bias, it goes the other way:
So we've noted repeatedly how AT&T's entry into the video space hasn't gone according to plan. First, the company spent so much money on mergers in recent years ($150 billion for Time Warner and DirecTV) that it effectively crippled itself with debt. Second, the company passed that merger debt on to most of its customers in the form of price hikes, which defeated the whole point of "cutting the TV cord." Third, AT&T launched so damn many confusing streaming brands simultaneously that it even confused the company's own employees.
Collectively, this resulted in AT&T losing 3.43 million TV subscribers last year alone, which certainly wasn't the kind of sector domination executives originally envisioned.
And there's every indication that things might get worse.
As noted, AT&T already offers a very confusing array of TV services: HBO Go, HBO Now, AT&T Now, AT&T TV, AT&T WatchTV, AT&T U-verse (IPTV) and DirecTV (satellite). Last week the company launched yet another streaming platform, HBO Max. But there's trouble in paradise: because of contractual standoffs with Amazon and Roku, the service apparently won't be appearing on either platform at launch. Given that Roku is the most popular streaming hardware in America by a pretty wide margin (39% market share in 2019), that's kind of a problem for AT&T:
We can't have nice things. We can't even have mediocre things. And, in the midst of a global pandemic, we can't even have basic things. The Bangladesh government hasn't exactly discovered the power of censorship. The government and this power are already acquainted. But with a novel virus in the air, the government has discovered it can silence speech more effectively.
Regular readers of Techdirt will be all too familiar with the problem of corporate sovereignty -- the ability of companies to sue entire countries for alleged loss of profits caused by government action. Also known as investor-state dispute settlement (ISDS), corporate sovereignty has shown signs of retreat, with some countries starting to drop ISDS from trade and investment treaties for various reasons. But a worrying report from Corporate Europe Observatory suggests that we are about to witness a new wave of corporate sovereignty litigation. Hard though it may be to believe, these cases will claim that governments around the world should be reimbursing companies for the loss of profits caused by tackling COVID-19:
There was a window of opportunity for cops following the George Floyd killing. Floyd, suspected of nothing more than passing a fake $20 bill, was killed by Officer Derek Chauvin of the Minneapolis PD. Chauvin placed his knee on Floyd's neck until he was dead. This act lasted for nearly nine minutes -- and for nearly three minutes after Chauvin checked for a pulse and found nothing. Yet he persisted, and none of the three cops around him stopped him.
Chauvin has been criminally charged and is under arrest. We'll see where that takes us. But the opportunity was there for the rest of the nation's cops to separate themselves from this "bad apple." Cop defenders ignore what bad apples do to barrels, but we won't. Chauvin is a symptom. He is not the disease.
As protests broke out around the nation, law enforcement agencies responded. While a small number attempted to find middle ground with aggrieved citizens, most acted as though they were a law unto themselves in these troubled times.
One site got it completely right -- a site that so often offers up hot takes that it is the source of its own meme. Slate, of all places, nailed this call:
We've noted repeatedly how interstate inmate calling service (ICS) companies have a disturbingly cozy relationship with government, striking (technically buying) monopoly deals that let them charge inmate families $14 per minute. Worse, some ICS companies like Securus Technologies have been under fire for helping the government spy on privileged inmate-attorney communications, something that only came to light in 2015 after Securus was hacked. Given the apathy toward prison inmates and their families ("Iff'n ya don't like high prices, don't go to prison, son!"), reform on this front has been glacial at best.
The hacker-obtained data from 2015 featured 70 million records of phone calls (and recordings of the calls themselves) placed by prisoners in at least 37 different states over a two-and-a-half-year period. Of particular note were the estimated 14,000 recordings of privileged conversations between inmates and their lawyers:
Clearview is currently being sued by the attorney general of Vermont for violating the privacy rights of the state's residents. As the AG's office pointed out in its lawsuit, users of social media services agree to many things when signing up, but the use of their photos and personal information as fodder for facial recognition software sold to government agencies and a variety of private companies isn't one of them.
Online privacy can't be solved by giving people new property rights in personal data. That idea is based on a raft of conceptual errors. But consumers are already exercising property rights, using them to negotiate the trade-offs involved in using online commercial products.
People mean a lot of different things when they say "privacy." Let's stipulate that the subject here is control of personal information. There are equal or more salient interests and concerns sometimes lumped in with privacy. These include the fairness and accuracy of big institutions' algorithmic decision-making, concerns with commodification or commercialization of online life, and personal and financial security.
Consumers' use of online services will always have privacy costs and risks. That tension is a competitive dimension of consumer Internet services that should never be "solved." Why should it be? Some consumers are entirely rational to recognize the commercial and social benefits they get from sharing information. Many others don't want their information out there. The costs and risks are too great in their personal calculi. Services will change over time, of course, and consumers' interests will, too. Long live the privacy tension.
Online privacy is not an all-or-nothing proposition. People adjust their use of social media and online services based on perceived risks. They select among options, use services pseudonymously, and curtail and shade what they share. So, to the extent online media and services appear unsafe or irresponsible, they lose business and thus revenue. There is no market failure, in the sense used in economics.
Of course, there are failures of the common sort all around. People say they care about privacy, but don't do much to protect it. Network effects and other economies of scale make for fewer options in online services and social media, so there are fewer privacy options, much less bespoke privacy policies. And companies sometimes fail to understand or abide by their privacy policies.
Those privacy policies are contracts. They divide up property rights in personal information very subtly -- so subtly, indeed, that it might be worth reviewing what property is: a bundle of rights to possess, use, subdivide, trade or sell, abandon, destroy, profit, and exclude others from the things in the world.
The typical privacy policy vests the right to possess data with the service provider -- a bailment, in legal terminology. The service provider gets certain rights to use the data, the right to generate and use non-personal information from the data, and so on. But the consumer maintains most rights to exclude others from data about them, which is all-important privacy protection. That's subject to certain exceptions, such as responding to emergencies, protecting the network or service, and complying with valid legal processes.
When companies violate their privacy promises, they're at risk from public enforcement actions -- from Attorneys General and the Federal Trade Commission in the United States, for example -- and lawsuits, including class actions. Payouts to consumers aren't typically great because individualized damages aren't great. But there are economies of scale here, too. Paying a little bit to a lot of people is expensive.
A solution? Hardly. It's more like an ongoing conversation, administered collectively and episodically through consumption trends, news reporting, public awareness, consumer advocacy, lawsuits, legislative pressure, and more.
It's not a satisfactory conversation, but it probably beats politics and elections for discovering what consumers really want in the multi-dimensional tug-of-war among privacy, convenience, low prices, social interaction, security, and more.
There is appeal in declaring privacy a human right and determining to give people more of it, but privacy itself fits poorly into a fundamental-rights framework. People protect privacy in the shelter of other rights -- common law and constitutional rights in the United States. They routinely dispense with privacy in favor of other interests. Privacy is better thought of as an economic good. Some people want a lot of it. Some people want less. There are endless varieties and flavors.
In contrast to what's already happening, most of the discussion about property rights in personal data assumes that such rights must come from legislative action -- a property-rights system designed by legal and sociological experts. But experts, advocates, and energetic lawmakers lack the capacity to discern how things are supposed to come out, especially given ongoing changes in both technology and consumers' information wants and needs.
An interesting objection to creating new property rights in personal data is that people might continue to trade personal data, as they do now, for other goods such as low- or no-cost services. That complaint -- that consumers might get what they want -- reveals that most proposals to bestow new property rights from above are really information regulations in disguise. Were any such proposal implemented, it would contend strongly in the metaphysical contest to be the most intrusive yet impotent regulatory regime yet devised. Just look at the planned property-rights system in intellectual property legislation. Highly arguable net benefits come with a congeries of dangers to many values the Internet holds dear.
The better property rights system is the one we've got. Through it, real consumers are roughly and unsatisfactorily pursuing privacy as they will. They often -- but not always -- cede privacy in favor of other things they want more, learning the ideal mix of privacy and other goods through trial and error. In the end, the "privacy problem" will no more be solved than the "price problem," the "quality problem," or the "features problem." Consumers will always want more and better stuff at a lower cost, whether costs are denominated in dollars, effort, time, or privacy.
Jim Harper is a visiting fellow at the American Enterprise Institute and a senior research fellow at the University of Arizona James E. Rogers College of Law.
Warning: this post will contain what we in the business like to call strong language, invective, and violent content. Govern yourself accordingly.
Content warning 2: possibly exceedingly long.
ONCE UPON A TIME, A MAN GOT FUCKED
Let's start with a story:
(Those of you who'd like to read a transcript, rather than watch this powerful performance by Orlando Jones [possibly for "Dear God, I'm still at work" reasons], can do so here.)
This is the history of black Americans. For a few hundred years, they weren't even Americans. And even after that -- even after the Civil War -- black Americans spent a hundred years being shunted to different schools, different neighborhoods, different restrooms, different bus seating, different water fountains. They are not us, this land of opportunity repeatedly stated.
Integration was forced. It was rarely welcomed. Being black still means being an outsider. Four hundred years of subjugation doesn't just end. This is how the story continues:
Joe Biden had a golden opportunity to actually look presidential and stand up for free speech and the 1st Amendment at a moment when our current President is seeking to undermine them with an executive order designed to intimidate social media companies into hosting speech they'd rather not, and to scare others away from fact-checking his lies. And he blew it. He doubled down on the ridiculous claim that we should "revoke" Section 230.
Businesses, from small shops to big enterprises, depend on data science -- the field responsible for evaluating and interpreting data, statistics, and trends to help businesses arrive at better decisions and actions. The 2020 All-in-One Data Scientist Mega Bundle will help you learn and master different data processes such as visualization, computing, analysis, and more. Over 12 courses, you will also learn how to use data across different platforms and languages including Python, Apache, Hadoop, R, and more. It's on sale for $40.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
There's kind of a lot going on in America right now -- what with widespread protests about police violence (leading to more police violence), while we're still in the middle of the largest pandemic in a century. You'd think some of those things would be priorities for Congress, but instead, Senate Republicans have decided that now is the time to push ahead with helping Hollywood by examining how to make copyright worse. Even the Washington Post is completely perplexed as to how this could possibly be a priority right now.
Two years ago, an investigation by the Associated Press and Princeton computer scientists found that Google services on both Android and Apple devices routinely continued to track user location data, even when users opted out of such tracking. Even if users paused "Location History," the researchers found that some Google apps still automatically stored time-stamped location data without asking for the consumer's consent.
Fast forward two years, and Arizona Attorney General Mark Brnovich has sued Google for violating the Arizona Consumer Fraud Act over the practice. The lawsuit (pdf), filed in Maricopa County Superior Court, is based on an investigation begun by Brnovich's office back in 2018. Like the aforementioned AP report, the AG found that Google's settings didn't actually do what they claimed to do when it came to stopping location data tracking:
The only news network further to the right than Fox News has just seen its baseless libel lawsuit against MSNBC host Rachel Maddow dismissed under California's anti-SLAPP law. While Fox occasionally has to acknowledge the real world and employs a few newscasters critical of the President and his policies, One America News Network (OAN/OANN) apparently feels no obligation to address any issue honestly, preferring to curl up in the lap of the leader of the free world.
OAN sued after Maddow offered her commentary on a Daily Beast article that said the news network employed a "Kremlin-paid journalist." The journalist, Kristian Rouz, had been working for both OAN and the Kremlin-owned Sputnik, the latter of which was determined to be a participant in Russia's 2016 election interference effort.
Maddow's commentary was somewhat hyperbolic, and very critical of OAN and its double-agent journalist. But OAN took particular issue with a single phrase Maddow said during her broadcast. From the decision [PDF]:
We've got a double winner this week, with That One Guy taking first place for both insightful and funny with some thoughts on Trump's social media executive order: